10 interesting stories served every morning and every evening.




1 1,624 shares, 68 trendiness

Backing up Spotify

Anna’s Blog

Updates about Anna's Archive, the largest truly open library in human history.

We backed up Spotify (metadata and music files). It's distributed in bulk torrents (~300TB), grouped by popularity.

This release includes the largest publicly available music metadata database, with 256 million tracks and 186 million unique ISRCs.

It's the world's first "preservation archive" for music which is fully open (meaning it can easily be mirrored by anyone with enough disk space), with 86 million music files, representing around 99.6% of listens.

Anna's Archive normally focuses on text (e.g. books and papers). We explained in "The critical window of shadow libraries" that we do this because text has the highest information density. But our mission (preserving humanity's knowledge and culture) doesn't distinguish among media types. Sometimes an opportunity comes along outside of text. This is such a case.

A while ago, we discovered a way to scrape Spotify at scale. We saw a role for ourselves here: building a music archive primarily aimed at preservation.

Generally speaking, music is already fairly well preserved. There are many music enthusiasts in the world who digitized their CD and LP collections, shared them through torrents or other digital means, and meticulously catalogued them.

However, these existing efforts have some major issues:

Over-focus on the most popular artists. There is a long tail of music which only gets preserved when a single person cares enough to share it. And such files are often poorly seeded.

Over-focus on the highest possible quality. Since these collections are created by audiophiles with high-end equipment and fans of a particular artist, they chase the highest possible file quality (e.g. lossless FLAC). This inflates the file size and makes it hard to keep a full archive of all music that humanity has ever produced.

No authoritative list of torrents aiming to represent all music ever produced. An equivalent of our book torrent list (which aggregates torrents from LibGen, Sci-Hub, Z-Lib, and many more) does not exist for music.

This Spotify scrape is our humble attempt to start such a "preservation archive" for music. Of course Spotify doesn't have all the music in the world, but it's a great start.

Before we dive into the details of this collection, here is a quick overview:

Spotify has around 256 million tracks. This collection contains metadata for an estimated 99.9% of tracks.

We archived around 86 million music files, representing around 99.6% of listens. It's a little under 300TB in total size.

We primarily used Spotify's "popularity" metric to prioritize tracks. View the top 10,000 most popular songs in this HTML file (13.8MB gzipped).

For popularity>0, we got close to all tracks on the platform. The quality is the original OGG Vorbis at 160kbit/s. Metadata was added without reencoding the audio (and an archive of diff files is available to reconstruct the original files from Spotify, as well as a metadata file with original hashes and checksums).

For popularity=0, we got files representing about half the number of listens (either the original or a copy with the same ISRC). The audio is reencoded to OGG Opus at 75kbit/s — sounding the same to most people, but noticeable to an expert.

The cutoff is 2025-07; anything released after that date may not be present (though in some cases it is).

This is by far the largest music metadata database that is publicly available. For comparison, we have 256 million tracks, while others have 50-150 million. Our data is well-annotated: MusicBrainz has 5 million unique ISRCs, while our database has 186 million.

This is the world's first "preservation archive" for music which is fully open (meaning it can easily be mirrored by anyone with enough disk space).

The data will be released in different stages on our Torrents page:

[ ] .zstdpatch files (to reconstruct original files before we added embedded metadata)

For now this is a torrents-only archive aimed at preservation, but if there is enough interest, we could add downloading of individual files to Anna's Archive. Please let us know if you'd like this.

Please help preserve these files:

Seed these torrents (on the Torrents page of Anna's Archive). Even seeding a few torrents helps!

With your help, humanity's musical heritage will be forever protected from destruction by natural disasters, wars, budget cuts, and other catastrophes.

In this blog post we will analyze the data and look at the details of the release. We hope you enjoy.

Let's dive into the data! Here are some high-level statistics pulled from the metadata:

The most convenient available way to sort songs on Spotify is using the popularity metric, defined as follows:

"The popularity of a track is a value between 0 and 100, with 100 being the most popular. The popularity is calculated by algorithm and is based, in the most part, on the total number of plays the track has had and how recent those plays are.

Generally speaking, songs that are being played a lot now will have a higher popularity than songs that were played a lot in the past. Duplicate tracks (e.g. the same track from a single and an album) are rated independently. Artist and album popularity is derived mathematically from track popularity."

If we group songs by popularity, we see that there is an extremely large tail end:

≥70% of songs are ones almost no one ever listens to (stream count < 1000). To see some detail, we can plot this on a logarithmic scale:

The top 10,000 songs span popularities 70-100. You can view them all in this HTML file (13.8MB gzipped).

Additionally, we can estimate the number of listens per track and the total number per popularity. The stream count data is estimated since it is difficult to fetch at scale, so we sampled it randomly.

As we can see, most of the listens come from songs with a popularity between 50 and 80, even though there are only 210,000 songs with popularity ≥50, around 0.1% of songs. Note the huge (subjectively estimated) error bar on pop=0 — the reason for this is that Spotify does not publish stream counts for songs with < 1000 streams.
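The sampling approach described above can be sketched in a few lines: draw a uniform random sample of tracks from a popularity bucket, take the sample mean of their stream counts, and scale by the bucket size. The bucket below is synthetic (the real per-track counts aren't published), so the numbers are purely illustrative.

```python
import random

def estimate_total_listens(population, sample_size, rng):
    """Estimate the total stream count of a popularity bucket by
    sampling tracks uniformly and scaling the sample mean."""
    sample = rng.sample(population, sample_size)
    mean = sum(sample) / sample_size
    return mean * len(population)

rng = random.Random(42)
# Hypothetical bucket: 1 million tracks with a skewed stream-count distribution.
bucket = [int(1000 / (i + 1) ** 0.5) for i in range(1_000_000)]

estimate = estimate_total_listens(bucket, 10_000, rng)
true_total = sum(bucket)
print(f"estimate={estimate:.3e} true={true_total:.3e}")
```

With a 1% sample, the estimate typically lands within a few percent of the true total; the error bars in the post come from exactly this kind of sampling uncertainty, which is why pop=0 (where counts below 1000 are hidden) is so uncertain.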

We can also estimate that the top three songs (as of writing) have a higher total stream count than the bottom 20-100 million songs combined:

select json_group_array(artists.name), tracks.name, tracks.popularity
from tracks
join track_artists on track_rowid = tracks.rowid
join artists on artist_rowid = artists.rowid
where tracks.id in (select id from tracks order by popularity desc limit 3)
group by tracks.id;

Note that the popularity is very time-dependent and not directly translatable into stream counts, so these top songs are basically arbitrary.

We have archived around 86 million songs from Spotify, ordering by popularity descending. While this only represents 37% of songs, it represents around 99.6% of listens:

Put another way, for any random song a person listens to, there is a 99.6% likelihood that it is part of the archive. We expect this number to be even higher if you filter to only human-created songs. Do remember, though, that the error bar on listens for popularity 0 is large.

For popularity=0, we ordered tracks by a secondary importance metric based on artist followers and album popularity, and fetched in descending order.

We stopped here due to the long tail with diminishing returns (700TB+ of additional storage for minor benefit), as well as the poor quality of songs with popularity=0 (many AI-generated, hard to filter).

Before diving into more fun stats, let's look at how the collection itself is structured. It's in two parts: metadata and music files, both of which are distributed through torrents.

The metadata torrents contain, based on statistical analysis, around 99.9% of artists, albums, and tracks. The metadata is published as compact, queryable SQLite databases. Care was taken, by doing API response reconstruction, to ensure that there is (almost) no data loss in the conversion from the API JSON.

The metadata for artists, albums, and tracks is less than 200 GB compressed. The secondary metadata of audio analysis is 4TB compressed.

We look in more detail at the structure of the metadata at the end of this blog post.
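Because the metadata ships as plain SQLite, exploring it needs nothing beyond Python's standard library. The sketch below runs against a tiny in-memory stand-in; the table and column names (tracks, artists, track_artists, rowid joins) are taken from the queries shown later in this post, not from any schema documentation, so treat them as assumptions.

```python
import sqlite3

# Tiny in-memory stand-in for the released database; the real file
# (spotify_clean.sqlite3 in the post's examples) uses the same table
# names per the SQL shown below.
con = sqlite3.connect(":memory:")
con.executescript("""
create table tracks (id text, name text, popularity integer);
create table artists (name text);
create table track_artists (track_rowid integer, artist_rowid integer);
""")
con.executemany("insert into tracks values (?, ?, ?)", [
    ("t1", "Song A", 90), ("t2", "Song B", 5), ("t3", "Song C", 70),
])
con.execute("insert into artists values ('Artist X')")
con.executemany("insert into track_artists values (?, ?)", [(1, 1), (3, 1)])

# Most popular tracks by an artist, mirroring the joins used in the post.
rows = con.execute("""
    select tracks.name, tracks.popularity
    from tracks
    join track_artists on track_artists.track_rowid = tracks.rowid
    join artists on track_artists.artist_rowid = artists.rowid
    where artists.name = 'Artist X'
    order by tracks.popularity desc
""").fetchall()
print(rows)  # [('Song A', 90), ('Song C', 70)]
```

Against the real databases the same joins should work unchanged; only the connect path differs.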

The data itself is distributed in the Anna's Archive Containers (AAC) format. This is a standard which we created a few years ago for distributing files across multiple torrents. It is not to be confused with the Advanced Audio Coding (AAC) encoding format.

Since the original files contain zero metadata, as much metadata as possible was added to the OGG files, including title, URL, ISRC, UPC, album art, ReplayGain information, etc. The invalid OGG data packet Spotify prepends to every track file was stripped — it is present in the track_files db.

For popularity>0, the quality is the original OGG Vorbis at 160kbit/s. Metadata was added without reencoding the audio (and an archive of diff files is available to reconstruct the original files from Spotify).
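After reconstructing an original file from the diff archive, the metadata file's hashes let you confirm the result is byte-identical to what Spotify served. A minimal verification sketch, assuming the checksums are hex digests (the actual hash algorithm used by the release is an assumption here, not stated in this post):

```python
import hashlib

def verify_reconstruction(data: bytes, expected_hex: str, algo: str = "sha256") -> bool:
    """Check reconstructed bytes against a checksum from the metadata file.
    `algo` is an assumption; use whichever algorithm the release documents."""
    return hashlib.new(algo, data).hexdigest() == expected_hex

# Illustrative stand-in bytes; a real check would read the reconstructed OGG.
original = b"OggS...original Spotify bytes..."
checksum = hashlib.sha256(original).hexdigest()
print(verify_reconstruction(original, checksum))   # True
print(verify_reconstruction(b"tampered", checksum))  # False
```

The .zstdpatch extension suggests the diffs are applied with zstd's dictionary/patch mode (e.g. `zstd --patch-from`), but that too is an inference from the filename, not something the post confirms.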

For popularity=0, the audio is reencoded to OGG Opus at 75kbit/s — sounding the same to most people, but noticeable to an expert.

There is a known bug where, for many files, the REPLAYGAIN_ALBUM_PEAK vorbiscomment tag value is a copy-paste of REPLAYGAIN_ALBUM_GAIN instead of the correct value.
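A downstream tool can screen for affected files with a simple heuristic: a gain tag is a dB-denominated value (e.g. "-7.25 dB") while a correct peak tag is a plain float, so a peak that equals the gain or contains "dB" is suspect. The exact tag-value formats assumed below are conventions from the ReplayGain vorbiscomment practice, not taken from this release's docs.

```python
def album_peak_looks_buggy(tags: dict) -> bool:
    """Heuristic for the known bug: REPLAYGAIN_ALBUM_PEAK holding a
    gain-style value instead of a plain peak float."""
    gain = tags.get("REPLAYGAIN_ALBUM_GAIN", "").strip()
    peak = tags.get("REPLAYGAIN_ALBUM_PEAK", "").strip()
    if not peak:
        return False  # tag absent: nothing to flag
    return peak == gain or "dB" in peak

buggy = album_peak_looks_buggy(
    {"REPLAYGAIN_ALBUM_GAIN": "-7.25 dB", "REPLAYGAIN_ALBUM_PEAK": "-7.25 dB"})
ok = album_peak_looks_buggy(
    {"REPLAYGAIN_ALBUM_GAIN": "-7.25 dB", "REPLAYGAIN_ALBUM_PEAK": "0.988"})
print(buggy, ok)  # True False
```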

Many people complain about how Spotify shuffles tracks. Since we have metadata for 99.9+% of tracks on Spotify, we can create a true shuffle across all songs on Spotify!

$ sqlite3 spotify_clean.sqlite3
sqlite> .mode table
sqlite> with random_ids as (select value as inx, (abs(random())%(select max(rowid) from tracks)) as trowid from generate_series(0)) select inx,tracks.id,tracks.popularity,tracks.name from random_ids join tracks on tracks.rowid=trowid limit 20;

| inx | id | popularity | name |
| 0 | 7KS7cm2arAGA2VZaZ2XvNa | 0 | Just Derry |
| 1 | 1BkLS2tmxD088l2ojUW5cv | 0 | Kapitel 37 - Aber erst wird gegessen - Schon wieder Weihnachten mit der buckligen Verwandtschaft |
| 2 | 5RSU7MELzCaPweG8ALmjLK | 0 | El Buen Pastor |
| 3 | 1YNIl8AKIFltYH8O2coSoT | 0 | You Are The One |
| 4 | 1GxMuEYWs6Lzbn2EcHAYVx | 0 | Waorani |
| 5 | 4NhARf6pjwDpbyQdZeSsW3 | 0 | Magic in the Sand |
| 6 | 7pDrZ6rGaO6FHk6QtTKvQo | 0 | Yo No Fui |
| 7 | 15w4LBQ6rkf3QA2OiSMBRD | 25 | 你走 |
| 8 | 5Tx7jRLKfYlay199QB2MSs | 0 | Soul Clap |
| 9 | 3L7CkCD9595MuM0SVuBZ64 | 1 | Xuân Và Tuổi Trẻ |
| 10 | 4S6EkSnfxlU5UQUOZs7bKR | 1 | Elle était belle |
| 11 | 0ZIOUYrrArvSTq6mrbVqa1 | 0 | Kapitel 7.2 - Die Welt der Magie - 4 in 1 Sammelband: Weiße Magie, Medialität, Channeling & Trance, Divination & Wahrsagen, Energetisches Heilen |
| 12 | 4VfKaW1X1FKv8qlrgKbwfT | 0 | Pura energia |
| 13 | 1VugH5kD8tnMKAPeeeTK9o | 10 | Dalia |
| 14 | 6NPPbOybTFLL0LzMEbVvuo | 4 | Teil 12 - Folge 2: Arkadien brennt |
| 15 | 1VSVrAbaxNllk7ojNGXDym | 3 | Bre Petrunko |
| 16 | 4NSmBO7uzkuES7vDLvHtX8 | 0 | Paranoia |
| 17 | 7AHhiIXvx09DRZGQIsbcxB | 0 | Sand Underfoot Moments |
| 18 | 0sitt32n4JoSM1ewOWL7hs | 0 | Start Over Again |
| 19 | 080Zimdx271ixXbzdZOqSx | 3 | Auf all euren Wegen |

Or, filtering to only somewhat popular songs:

sqlite> with random_ids as (select value as inx, (abs(random())%(select max(rowid) from tracks)) as trowid from generate_series(0)) select inx,tracks.id,tracks.popularity,albums.name as album_name,tracks.name from random_ids join tracks on tracks.rowid=trowid join albums on albums.rowid = album_rowid where tracks.popularity >= 10 limit 20;

| inx | id | popularity | album_name | name |
| 32 | 1om6LphEpiLpl9irlOsnzb | 23 | The Essential Widespread Panic | Love Tractor |
| 47 | 2PCtPCRDia6spej5xcxbvW | 20 | Desatinos Desplumados | Sirena |
| 65 | 5wmR10WloZqVVdIpYhdaqq | 20 | Um Passeio pela Harpa Cristã - Vol 6 | As Santas Escrituras |
| 89 | 5xCuYNX3QlPsxhKLbWlQO9 | 11 | No Me Amenaces | No Me Amenaces |
| 96 | 2GRmiDIcIwhQnkxakNyUy4 | 16 | Very Bad Truth (Kingston Universi… | Kapitel 8.3 - Very Bad Truth |
| 98 | 5720pe1PjNXoMcbDPmyeLW | 11 | Kleiner Eisbär: Hilf mir fliegen! | Kapitel 06: Hilf mir fliegen! |
| 109 | 1mRXGNVsfD9UtFw6r5YtzF | 11 | Lunar Archive | Outdoor Seating |
| 110 | 5XOQwf6vkcJxWG9zgqVEWI | 19 | Teenage Dream | Firework |
| 125 | 0rbHOp8B4CpPXXZSekySvv | 15 | Previa y Cachengue 2025 | Debi tirar mas fotos |
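A note on the SQL trick above: `abs(random()) % max(rowid)` is slightly biased and silently skips any rowids deleted from the table. A Python version of the same "true shuffle" that retries missing rowids, shown here against a tiny in-memory stand-in for the real database:

```python
import random
import sqlite3

def true_shuffle(con: sqlite3.Connection, n: int, rng: random.Random):
    """Sample n tracks uniformly at random by rowid, retrying rowids
    that don't exist (rowids need not be contiguous)."""
    (max_rowid,) = con.execute("select max(rowid) from tracks").fetchone()
    picked = []
    while len(picked) < n:
        rowid = rng.randint(1, max_rowid)
        row = con.execute(
            "select id, popularity, name from tracks where rowid = ?",
            (rowid,)).fetchone()
        if row is not None:
            picked.append(row)
    return picked

# Demo table standing in for the released spotify_clean.sqlite3 schema.
con = sqlite3.connect(":memory:")
con.execute("create table tracks (id text, name text, popularity integer)")
con.executemany("insert into tracks values (?, ?, ?)",
                [(f"t{i}", f"Song {i}", i % 100) for i in range(1000)])

songs = true_shuffle(con, 5, random.Random(0))
for track_id, popularity, name in songs:
    print(track_id, popularity, name)
```

The retry loop is what makes the sample uniform over existing rows rather than over the rowid range.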

...

Read the original on annas-archive.li »

2 1,350 shares, 56 trendiness

Jmail, logged in as jeevacation@gmail.com

...

Read the original on www.jmail.world »

3 1,333 shares, 32 trendiness

Honest Edition

Guidelines | FAQ | Lists | API | Security | Terms no one reads | Sell 7% for clout | Overwhelm mods

...

Read the original on dosaygo-studio.github.io »

4 1,153 shares, 46 trendiness

Texas is suing all of the big TV makers for spying on what you watch

is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

ACR uses visual and audio data to identify what you're watching on TV, including shows and movies on streaming services and cable TV, YouTube videos, Blu-ray discs, and more. Attorney General Paxton alleges that ACR also captures security and doorbell camera streams, media sent using Apple AirPlay or Google Cast, as well as the displays of other devices connected to the TV's HDMI port, such as laptops and game consoles.

The lawsuit accuses Samsung, Sony, LG, Hisense, and TCL of "deceptively" prompting users to activate ACR, while disclosures are "hidden, vague, and misleading." Samsung and Hisense, for example, "capture screenshots of a TV's display every 500 milliseconds," Paxton claims. The lawsuit alleges that TV manufacturers siphon viewing data back to each company "without the user's knowledge or consent," which they can then sell for targeted advertising.

Along with these allegations, Attorney General Paxton also raises concerns about TCL and Hisense's ties to China, as they're both based in the country. The lawsuit claims the TVs made by both companies are "Chinese-sponsored surveillance devices, recording the viewing habits of Texans at every turn."

Attorney General Paxton accuses the five TV makers of violating the state's Deceptive Trade Practices Act, which is meant to protect consumers from false, deceptive, or misleading practices. Paxton asks the court to impose a civil penalty and to block each company from collecting, sharing, or selling the ACR data they collect about Texas-based consumers. Samsung, Sony, LG, Hisense, and TCL didn't immediately respond to a request for comment.

Vizio, which is now owned by Walmart, paid $2.2 million to the Federal Trade Commission and New Jersey in 2017 over similar allegations related to ACR.

"This conduct is invasive, deceptive, and unlawful," Paxton says in a statement. "The fundamental right to privacy will be protected in Texas because owning a television does not mean surrendering your personal information to Big Tech or foreign adversaries."


...

Read the original on www.theverge.com »

5 1,064 shares, 41 trendiness

How we pwned X (Twitter), Vercel, Cursor, Discord, and hundreds of companies through a supply-chain attack


...

Read the original on gist.github.com »

6 1,057 shares, 38 trendiness

frontier intelligence built for speed

Gemini 3 Flash is our latest model with frontier intelligence built for speed that helps everyone learn, build, and plan anything — faster.

Senior Director, Product Management, on behalf of the Gemini team

Google is releasing Gemini 3 Flash, a fast and cost-effective model built for speed. You can now access Gemini 3 Flash through the Gemini app and AI Mode in Search. Developers can access it via the Gemini API in Google AI Studio, Google Antigravity, Gemini CLI, Android Studio, Vertex AI and Gemini Enterprise.

Summaries were generated by Google AI. Generative AI is experimental.

It's great for coding, complex analysis, and quick answers in interactive apps.

Gemini 3 Flash is now the default model in the Gemini app and AI Mode in Search.

Developers and everyday users can access Gemini 3 Flash via various Google platforms.


...

Read the original on blog.google »

7 1,014 shares, 41 trendiness

AWS CEO Explains 3 Reasons AI Can’t Replace Junior Devs

AWS CEO Matt Garman outlined 3 solid reasons why companies should not focus on cutting junior developer roles, noting that they are actually "the most experienced with the AI tools".

In a tech world obsessed with AI replacing human workers, Matt Garman, CEO of Amazon Web Services (AWS), is pushing back against one of the industry's most popular cost-cutting ideas.

Speaking on WIRED's The Big Interview podcast, Garman had a bold message for companies racing to cut costs with AI.

He was asked to explain why he once called replacing junior employees with AI "one of the dumbest ideas" he'd ever heard, and to expand on how he believes agentic AI will actually change the workplace in the coming years.

First, junior employees are often better with AI tools than senior staff.

Fresh grads have grown up with new technology, so they can adapt quickly. Many of them learn AI-powered tools while studying or during internships. They tend to explore new features, find quick methods to write code, and figure out how to get the best results from AI agents.

According to the 2025 Stack Overflow Developer Survey, 55.5% of early-career developers reported using AI tools daily in their development process, higher than their more experienced colleagues.

This comfort with new tools allows them to work more efficiently. In contrast, senior developers have established workflows and may take more time to adopt new tools. Recent research shows that over half of Gen Z employees are actually helping senior colleagues upskill in AI.

Second, junior staff are usually the least expensive employees.

Junior employees usually get much less in salary and benefits, so removing them does not deliver huge savings. If a company is trying to save money, it doesn't make that much financial sense.

So, when companies talk about increasing profit margins, junior employees should not be the default or only target. Real cost-cutting means looking at the whole company, because there are plenty of other places where expenses can be trimmed.

In fact, 30% of companies that laid off workers expecting savings ended up increasing expenses, and many had to rehire later.

Think of a company like a sports team. If you only keep veteran players and never recruit rookies, what happens when those veterans retire? You are left with no one who knows how to play the game.

Also, hiring people straight out of college brings new ways of thinking into the workplace. They have fresh ideas shaped by the latest trends and the motivation to innovate.

More importantly, they form the foundation of a company's future workforce. If a company decides to stop hiring junior employees altogether, it cuts off its own talent pipeline. Over time, that leads to fewer leaders to promote from within.

A Deloitte report also notes that the tech workforce is expected to grow at roughly twice the rate of the overall U.S. workforce, highlighting the demand for tech talent. Without a strong pipeline of junior developers coming in, companies might face a tech talent shortage.

When there are not enough junior hires being trained today, teams struggle to fill roles tomorrow, especially as projects scale.

This isn't just corporate talk. As the leader of one of the world's largest cloud computing platforms, serving everyone from Netflix to the U.S. intelligence agencies, Garman has a front-row seat to how companies are actually using AI.

And what he is seeing makes him worried that short-term thinking could damage businesses for years to come. Garman's point is grounded in long-term strategy. A company that relies solely on AI to handle tasks without training new talent could find itself short of people.

Still, Garman admits the next few years will be bumpy. "Your job is going to change," he said. He believes AI will make companies more productive, as well as their employees.

When technology makes something easier, people want more of it. AI enables the creation of software faster, allowing companies to develop more products, enter new markets, and serve more customers.

Developers will be responsible for more than just writing code, with faster adaptation to new technologies becoming essential. But he has a hopeful message in the end.

That's why Geoffrey Hinton has advised that Computer Science degrees remain essential. This directly supports Matt Garman's point. Fresh talent with a strong understanding of core fundamentals becomes crucial for filling these higher-value roles of the future.

"I'm very confident in the medium to longer term that AI will definitely create more jobs than it removes at first," Garman said.

...

Read the original on www.finalroundai.com »

8 880 shares, 28 trendiness

📝 Is Mozilla trying hard to kill itself?

It may be just me, but I read this as "I don't want to 😜 😜 but I'll kill AdBlockers in Firefox for buckerinos 😂". This disappoints and saddens me a lot, and I hope I'm wrong.

...

Read the original on infosec.press »

9 838 shares, 31 trendiness

ALPR Watch – Track Surveillance Tech in Local Government

Your local government might be discussing surveillance tech like Flock cameras, facial recognition, or automated license plate readers right now. This map helps you find those meetings and take action.

Why this matters: Municipalities across the US are quietly adopting surveillance technologies in rapidly growing numbers, with over 80,000 cameras already out on the streets. These systems track residents' movements, collect biometric data, and build massive databases of our daily lives.

alpr.watch scans meeting agendas for keywords like "flock," "license plate reader," "alpr," and more. Each pin on the map shows where these conversations are happening so that you can make a difference.
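The agenda-scanning step the site describes amounts to whole-word keyword matching over meeting text. A minimal sketch (the keyword list and agenda text are illustrative; alpr.watch's actual pipeline isn't published here):

```python
import re

# Watched terms, per the keywords the site says it scans for.
KEYWORDS = ["flock", "license plate reader", "alpr", "facial recognition"]

def find_surveillance_keywords(agenda_text: str) -> list[str]:
    """Return which watched keywords appear in a meeting agenda,
    matching whole words case-insensitively."""
    hits = []
    for kw in KEYWORDS:
        if re.search(rf"\b{re.escape(kw)}\b", agenda_text, re.IGNORECASE):
            hits.append(kw)
    return hits

agenda = "Item 4: Approval of Flock Safety ALPR camera contract renewal"
print(find_surveillance_keywords(agenda))  # ['flock', 'alpr']
```

Word boundaries matter here: without `\b`, a term like "alpr" would false-positive inside unrelated strings.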


Data before mid-December may be unverified. All future flags are 100% moderator approved.

Automated License Plate Recognition (ALPR) systems use cameras and artificial intelligence to capture, read, and store license plate data from every passing vehicle.

These systems work 24/7, creating a massive database of where vehicles, and by extension people, travel. Every trip to the grocery store, doctor's office, or place of worship gets recorded and stored.

Flock Safety is one of the largest manufacturers of ALPR cameras in the United States, marketing their systems to neighborhoods and law enforcement.

Flock cameras capture license plates, vehicle make/model, color, and other identifying features. This data is shared across a massive network of agencies and jurisdictions, creating a surveillance web that tracks millions of Americans.

History shows that surveillance systems expand beyond their original scope:

Systems marketed for "solving crimes" get used for immigration enforcement

These groups and individuals are leading the fight against mass surveillance. Consider supporting their work or getting involved locally.

...

Read the original on alpr.watch »

10 791 shares, 32 trendiness

An extremely fast Python type checker and language server

TL;DR: ty is an extremely fast Python type checker and language server, written in Rust, and designed as an alternative to tools like mypy, Pyright, and Pylance.

Today, we're announcing the Beta release of ty. We now use ty exclusively in our own projects and are ready to recommend it to motivated users for production use.

At Astral, we build high-performance developer tools for the Python ecosystem. We're best known for uv, our Python package manager, and Ruff, our linter and formatter.

Today, we're announcing the Beta release of the next tool in the Astral toolchain: ty, an extremely fast Python type checker and language server, written in Rust.

ty was designed from the ground up to power a language server. The entire ty architecture is built around "incrementality", enabling us to selectively re-run only the necessary computations when a user (e.g.) edits a file or modifies an individual function. This makes live updates extremely fast in the context of an editor or long-lived process.

You can install ty today with uv tool install ty@latest, or via our VS Code extension.
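The incrementality idea can be illustrated with a toy memoization scheme: cache each file's result keyed by its contents, and recompute only when the contents actually change. This is a loose sketch of the general approach, not ty's actual implementation (ty builds on the Salsa framework, which tracks dependencies at a much finer granularity than whole files).

```python
class IncrementalChecker:
    """Toy model of incremental recomputation: results are cached per
    file and only recomputed for files whose contents changed."""

    def __init__(self, check_fn):
        self.check_fn = check_fn
        self.cache = {}    # filename -> (contents, result)
        self.computed = 0  # how many real checks we actually ran

    def check(self, filename, contents):
        cached = self.cache.get(filename)
        if cached and cached[0] == contents:
            return cached[1]  # unchanged input: reuse the prior result
        self.computed += 1
        result = self.check_fn(contents)
        self.cache[filename] = (contents, result)
        return result

# Stand-in "type check": flag any file containing the word "bug".
checker = IncrementalChecker(lambda src: "error" if "bug" in src else "ok")
checker.check("a.py", "x = 1")
checker.check("b.py", "y = bug")
checker.check("a.py", "x = 1")  # cache hit: no recomputation
print(checker.computed)  # 2
```

The real win comes from tracking dependencies between computations, so that editing one function invalidates only the queries that read it rather than the whole file's analysis.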

Like Ruff and uv, ty's implementation was grounded in some of our core product principles:

An obsessive focus on performance. Without caching, ty is consistently between 10x and 60x faster than mypy and Pyright. When run in an editor, the gap is even more dramatic. As an example, after editing a load-bearing file in the PyTorch repository, ty recomputes diagnostics in 4.7ms: 80x faster than Pyright (386ms) and 500x faster than Pyrefly (2.38 seconds). ty is very fast!

Correct, pragmatic, and ergonomic. With features like first-class intersection types, advanced type narrowing, and sophisticated reachability analysis, ty pushes forward the state of the art in Python type checking, providing more accurate feedback and avoiding assumptions about user intent that often lead to false positives. Our goal with ty is not only to build a faster type checker; we want to build a better type checker, and one that balances correctness with a deep focus on the end-user experience.

Built in the open. ty was built by our core team alongside dozens of active contributors under the MIT license, and the same goes for our editor extensions. You can run ty anywhere that you write Python (including in the browser).

Even compared to other Rust-based language servers like Pyrefly, ty can run orders of magnitude faster when performing incremental updates on large projects.

ty also includes a best-in-class diagnostic system, inspired by the Rust compiler's own world-class error messages. A single ty diagnostic can pull in context from multiple files at once to explain not only what's wrong, but why (and, often, how to fix it).

Diagnostic output is the primary user interface for a type checker; we prioritized our diagnostic system from the start (with both humans and agents in mind) and view it as a first-class feature in ty.

If you use VS Code, Cursor, or a similar editor, we recommend installing the ty VS Code extension. The ty language server supports all the capabilities that you'd expect for a modern language server (Go to Definition, Symbol Rename, Auto-Complete, Auto-Import, Semantic Syntax Highlighting, Inlay Hints, etc.), and runs in any editor that implements the Language Server Protocol.

Following the Beta release, our immediate priority is supporting early adopters. From there, we're working towards a Stable release next year, with the gap between the Beta and Stable milestones largely focusing on: (1) stability and bug fixes, (2) completing the long tail of features in the Python typing specification, and (3) first-class support for popular third-party libraries like Pydantic and Django.

On a longer time horizon, though, ty will power semantic capabilities across the Astral toolchain: dead code elimination, unused dependency detection, SemVer-compatible upgrade enforcement, CVE reachability analysis, type-aware linting, and more (including some that are too ambitious to say out loud just yet).

We want to make Python the most productive programming ecosystem on Earth. Just as with Ruff and uv, our commitment from here is that ty will get significantly better every week by working closely with our users. Thank you for building with us.

ty is the most sophisticated product we've built, and its design and implementation have surfaced some of the hardest technical problems we've seen at Astral. Working on ty requires a deep understanding of type theory, Python runtime semantics, and how the Python ecosystem actually uses Python.

I'd like to thank all those that contributed directly to the development of ty, including: Douglas Creager, Alex Waygood, David Peter, Micha Reiser, Andrew Gallant, Aria Desires, Carl Meyer, Zanie Blue, Ibraheem Ahmed, Dhruv Manilawala, Jack O'Connor, Zsolt Dollenstein, Shunsuke Shibayama, Matthew Mckee, Brent Westbrook, UnboundVariable, Shaygan Hooshyari, Justin Chapman, InSync, Bhuminjay Soni, Abhijeet Prasad Bodas, Rasmus Nygren, lipefree, Eric Mark Martin, Tomer Bin, Luca Chiodini, Brandt Bucher, Dylan Wilson, Eric Jolibois, Felix Scherz, Leandro Braga, Renkai Ge, Sumana Harihareswara, Takayuki Maeda, Max Mynter, med1844, William Woodruff, Chandra Kiran G, DetachHead, Emil Sadek, Jo, Joren Hammudoglu, Mahmoud Saada, Manuel Mendez, Mark Z. Ding, Simon Lamon, Suneet Tipirneni, Francesco Giacometti, Adam Aaronson, Alperen Keleş, charliecloudberry, Dan Parizher, Daniel Hollas, David Sherret, Dmitry, Eric Botti, Erudit Morina, François-Guillaume Fernandez, Fabrizio Damicelli, Guillaume-Fgt, Hugo van Kemenade, Josiah Kane, Loïc Riegel, Ramil Aleskerov, Samuel Rigaud, Soof Golan, Usul-Dev, decorator-factory, omahs, wangxiaolei, cake-monotone, slyces, Chris Krycho, Mike Perlov, Raphael Gaschignard, Connor Skees, Aditya Pillai, Lexxxzy, haarisr, Joey Bar, Andrii Turov, Kalmaegi, Trevor Manz, Teodoro Freund, Hugo Polloli, Nathaniel Roman, Victor Hugo Gomes, Nuri Jung, Ivan Yakushev, Hamir Mahal, Denys Zhak, Daniel Kongsgaard, Emily B. Zhang, Ben Bar-Or, Aleksei Latyshev, Aditya Pratap Singh, wooly18, Samodya Abeysiriwardane, and Pepe Navarro.

We'd also like to thank the Salsa team (especially Niko Matsakis, David Barsky, and Lukas Wirth) for their support and collaboration; the Elixir team (especially José Valim, Giuseppe Castagna, and Guillaume Duboc), whose work strongly influenced our approach to gradual types and intersections; and a few members of the broader Python typing community: Eric Traut, Jelle Zijlstra, Jia Chen, Sam Goldman, Shantanu Jain, and Steven Troxler.

...

Read the original on astral.sh »
