10 interesting stories served every morning and every evening.
Anna’s Blog
Updates about Anna’s Archive, the largest truly open library in human history.
We backed up Spotify (metadata and music files). It’s distributed in bulk torrents (~300TB), grouped by popularity.
This release includes the largest publicly available music metadata database with 256 million tracks and 186 million unique ISRCs.
It’s the world’s first “preservation archive” for music which is fully open (meaning it can easily be mirrored by anyone with enough disk space), with 86 million music files, representing around 99.6% of listens.
Anna’s Archive normally focuses on text (e.g. books and papers). We explained in “The critical window of shadow libraries” that we do this because text has the highest information density. But our mission (preserving humanity’s knowledge and culture) doesn’t distinguish among media types. Sometimes an opportunity comes along outside of text. This is such a case.
A while ago, we discovered a way to scrape Spotify at scale. We saw a role for us here to build a music archive primarily aimed at preservation.
Generally speaking, music is already fairly well preserved. There are many music enthusiasts in the world who digitized their CD and LP collections, shared them through torrents or other digital means, and meticulously catalogued them.
However, these existing efforts have some major issues:
Over-focus on the most popular artists. There is a long tail of music which only gets preserved when a single person cares enough to share it. And such files are often poorly seeded.
Over-focus on the highest possible quality. Since these are created by audiophiles with high end equipment and fans of a particular artist, they chase the highest possible file quality (e.g. lossless FLAC). This inflates the file size and makes it hard to keep a full archive of all music that humanity has ever produced.
No authoritative list of torrents aiming to represent all music ever produced. An equivalent of our book torrent list (which aggregates torrents from LibGen, Sci-Hub, Z-Lib, and many more) does not exist for music.
This Spotify scrape is our humble attempt to start such a “preservation archive” for music. Of course Spotify doesn’t have all the music in the world, but it’s a great start.
Before we dive into the details of this collection, here is a quick overview:
Spotify has around 256 million tracks. This collection contains metadata for an estimated 99.9% of tracks.
We archived around 86 million music files, representing around 99.6% of listens. It’s a little under 300TB in total size.
We primarily used Spotify’s “popularity” metric to prioritize tracks. View the top 10,000 most popular songs in this HTML file (13.8MB gzipped).
For popularity>0, we got close to all tracks on the platform. The quality is the original OGG Vorbis at 160kbit/s. Metadata was added without reencoding the audio (and an archive of diff files is available to reconstruct the original files from Spotify, as well as a metadata file with original hashes and checksums).
For popularity=0, we got files representing about half the number of listens (either original or a copy with the same ISRC). The audio is reencoded to OGG Opus at 75kbit/s — sounding the same to most people, but noticeable to an expert.
The cutoff is 2025-07; anything released after that date may not be present (though in some cases it is).
This is by far the largest music metadata database that is publicly available. For comparison, we have 256 million tracks, while others have 50-150 million. Our data is well-annotated: MusicBrainz has 5 million unique ISRCs, while our database has 186 million.
This is the world’s first “preservation archive” for music which is fully open (meaning it can easily be mirrored by anyone with enough disk space).
The data will be released in different stages on our Torrents page:
[ ] .zstdpatch files (to reconstruct original files before we added embedded metadata)
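If these turn out to be standard zstd “--patch-from” diffs against the metadata-tagged files we distribute, reconstructing an original Spotify file might look roughly like the following sketch (file names, patch direction, and flags are assumptions, not a documented workflow):

import subprocess

# Hedged sketch: assumes each .zstdpatch is a zstd --patch-from diff that,
# applied against the metadata-tagged OGG from the audio torrents, yields
# the byte-identical original Spotify file. Names below are placeholders.
subprocess.run(
    [
        "zstd", "-d", "--long=31",
        "--patch-from=track_with_metadata.ogg",  # file from the audio torrents
        "track.zstdpatch",                       # diff from the .zstdpatch torrents
        "-o", "track_original.ogg",              # reconstructed original
    ],
    check=True,
)

The metadata file with original hashes and checksums mentioned above could then be used to verify the reconstruction.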
For now this is a torrents-only archive aimed at preservation, but if there is enough interest, we could add downloading of individual files to Anna’s Archive. Please let us know if you’d like this.
Please help preserve these files:
Seed these torrents (on the Torrents page of Anna’s Archive). Even seeding a few torrents helps!
With your help, humanity’s musical heritage will be forever protected from destruction by natural disasters, wars, budget cuts, and other catastrophes.
In this blog post we will analyze the data and look at the details of the release. We hope you enjoy it.
Let’s dive into the data! Here are some high-level statistics pulled from the metadata:
The most convenient available way to sort songs on Spotify is using the popularity metric, defined as follows:
The popularity of a track is a value between 0 and 100, with 100 being the most popular. The popularity is calculated by algorithm and is based, in the most part, on the total number of plays the track has had and how recent those plays are.
Generally speaking, songs that are being played a lot now will have a higher popularity than songs that were played a lot in the past. Duplicate tracks (e.g. the same track from a single and an album) are rated independently. Artist and album popularity is derived mathematically from track popularity.
If we group songs by popularity, we see that there is an extremely large tail end:
≥70% of songs are ones almost no one ever listens to (stream count < 1000). To see some detail, we can plot this on a logarithmic scale:
The top 10,000 songs span popularities 70-100. You can view them all in this HTML file (13.8MB gzipped).
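These distributions are easy to recompute from the released SQLite metadata. A minimal sketch, assuming the spotify_clean.sqlite3 database and the tracks.popularity column used in the queries later in this post:

import sqlite3

# Count tracks per popularity value (0-100) from the released metadata.
# Database and column names follow the queries shown later in this post.
con = sqlite3.connect("spotify_clean.sqlite3")
for popularity, count in con.execute(
    "select popularity, count(*) from tracks group by popularity order by popularity"
):
    print(f"{popularity:3d}  {count}")
con.close()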
Additionally, we can estimate the number of listens per track and total number per popularity. The stream count data is estimated since it is difficult to fetch at scale, so we sampled it randomly.
As we can see, most of the listens come from songs with a popularity between 50 and 80, even though there are only 210,000 songs with popularity ≥50, around 0.1% of all songs. Note the huge (subjectively estimated) error bar on pop=0: the reason is that Spotify does not publish stream counts for songs with < 1000 streams.
We can also estimate that the top three songs (as of writing) have a higher total stream count than the bottom 20-100 million songs combined:
-- Top 3 tracks by popularity, with artist names aggregated per track as a JSON array
select json_group_array(artists.name), tracks.name, tracks.popularity
from tracks
join track_artists on track_rowid = tracks.rowid
join artists on artist_rowid = artists.rowid
where tracks.id in (select id from tracks order by popularity desc limit 3)
group by tracks.id;
Note that the popularity is very time-dependent and not directly translatable into stream counts, so these top songs are basically arbitrary.
We have archived around 86 million songs from Spotify, ordering by popularity descending. While this only represents 37% of songs, it represents around 99.6% of listens:
Put another way, for any random song a person listens to, there is a 99.6% likelihood that it is part of the archive. We expect this number to be higher if you filter to only human-created songs. Do remember though that the error bar on listens for popularity 0 is large.
For popularity=0, we ordered tracks by a secondary importance metric based on artist followers and album popularity, and fetched in descending order.
We stopped here due to the diminishing returns of the long tail (700TB+ of additional storage for minor benefit), as well as the low quality of songs with popularity=0 (many are AI-generated and hard to filter).
Before diving into more fun stats, let’s look at how the collection itself is structured. It’s in two parts: metadata and music files, both of which are distributed through torrents.
The metadata torrents contain, based on statistical analysis, around 99.9% of all artists, albums, and tracks. The metadata is published as compact, queryable SQLite databases. Care was taken, by doing API response reconstruction, to ensure there is (almost) no data loss in the conversion from the API JSON.
The metadata for artists, albums, and tracks is less than 200 GB compressed. The secondary metadata of audio analysis is 4TB compressed.
We look in more detail at the structure of the metadata at the end of this blog post.
The data itself is distributed in the Anna’s Archive Containers (AAC) format. This is a standard which we created a few years ago for distributing files across multiple torrents. It is not to be confused with the Advanced Audio Coding (AAC) encoding format.
Since the original files contain zero metadata, as much metadata as possible was added to the OGG files, including title, url, ISRC, UPC, album art, replaygain information, etc. The invalid OGG data packet Spotify prepends to every track file was stripped — it is present in the track_files db.
For popularity>0, the quality is the original OGG Vorbis at 160kbit/s. Metadata was added without reencoding the audio (and an archive of diff files is available to reconstruct the original files from Spotify).
For popularity=0, the audio is reencoded to OGG Opus at 75kbit/s — sounding the same to most people, but noticeable to an expert.
There is a known bug where the REPLAYGAIN_ALBUM_PEAK vorbiscomment tag value is a copy-paste of REPLAYGAIN_ALBUM_GAIN instead of the correct value for many files.
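The embedded tags can be inspected with a tool like mutagen; a small sketch (the file name is a placeholder, and the exact set of tags per file may vary):

from mutagen.oggvorbis import OggVorbis  # pip install mutagen

# "track.ogg" is a placeholder; actual file naming inside the torrents differs.
audio = OggVorbis("track.ogg")
print(round(audio.info.length, 1), "seconds at", audio.info.bitrate, "bit/s")
print(audio.pprint())  # dumps all embedded Vorbis comments (title, ISRC, UPC, replaygain, ...)

For the popularity=0 files, which are Opus rather than Vorbis, mutagen.oggopus.OggOpus would be the analogous loader.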
Many people complain about how Spotify shuffles tracks. Since we have metadata for 99.9+% of tracks on Spotify, we can create a true shuffle across all songs on Spotify!
$ sqlite3 spotify_clean.sqlite3
sqlite> .mode table
sqlite> with random_ids as (select value as inx, (abs(random())%(select max(rowid) from tracks)) as trowid from generate_series(0)) select inx,tracks.id,tracks.popularity,tracks.name from random_ids join tracks on tracks.rowid=trowid limit 20;
| inx | id | popularity | name |
| 0 | 7KS7cm2arAGA2VZaZ2XvNa | 0 | Just Derry |
| 1 | 1BkLS2tmxD088l2ojUW5cv | 0 | Kapitel 37 - Aber erst wird gegessen - Schon wieder Weihnach |
| | | | ten mit der buckligen Verwandtschaft |
| 2 | 5RSU7MELzCaPweG8ALmjLK | 0 | El Buen Pastor |
| 3 | 1YNIl8AKIFltYH8O2coSoT | 0 | You Are The One |
| 4 | 1GxMuEYWs6Lzbn2EcHAYVx | 0 | Waorani |
| 5 | 4NhARf6pjwDpbyQdZeSsW3 | 0 | Magic in the Sand |
| 6 | 7pDrZ6rGaO6FHk6QtTKvQo | 0 | Yo No Fui |
| 7 | 15w4LBQ6rkf3QA2OiSMBRD | 25 | 你走 |
| 8 | 5Tx7jRLKfYlay199QB2MSs | 0 | Soul Clap |
| 9 | 3L7CkCD9595MuM0SVuBZ64 | 1 | Xuân Và Tuổi Trẻ |
| 10 | 4S6EkSnfxlU5UQUOZs7bKR | 1 | Elle était belle |
| 11 | 0ZIOUYrrArvSTq6mrbVqa1 | 0 | Kapitel 7.2 - Die Welt der Magie - 4 in 1 Sammelband: Weiße |
| | | | Magie | Medialität, Channeling & Trance | Divination & Wahrs |
| | | | agen | Energetisches Heilen |
| 12 | 4VfKaW1X1FKv8qlrgKbwfT | 0 | Pura energia |
| 13 | 1VugH5kD8tnMKAPeeeTK9o | 10 | Dalia |
| 14 | 6NPPbOybTFLL0LzMEbVvuo | 4 | Teil 12 - Folge 2: Arkadien brennt |
| 15 | 1VSVrAbaxNllk7ojNGXDym | 3 | Bre Petrunko |
| 16 | 4NSmBO7uzkuES7vDLvHtX8 | 0 | Paranoia |
| 17 | 7AHhiIXvx09DRZGQIsbcxB | 0 | Sand Underfoot Moments |
| 18 | 0sitt32n4JoSM1ewOWL7hs | 0 | Start Over Again |
| 19 | 080Zimdx271ixXbzdZOqSx | 3 | Auf all euren Wegen |
Or, filtering to only somewhat popular songs:
sqlite> with random_ids as (select value as inx, (abs(random())%(select max(rowid) from tracks)) as trowid from generate_series(0)) select inx,tracks.id,tracks.popularity,albums.name as album_name,tracks.name from random_ids join tracks on tracks.rowid=trowid join albums on albums.rowid = album_rowid
where tracks.popularity >= 10 limit 20;
| inx | id | popularity | album_name | name |
| 32 | 1om6LphEpiLpl9irlOsnzb | 23 | The Essential Widespread Panic | Love Tractor |
| 47 | 2PCtPCRDia6spej5xcxbvW | 20 | Desatinos Desplumados | Sirena |
| 65 | 5wmR10WloZqVVdIpYhdaqq | 20 | Um Passeio pela Harpa Cristã - Vol 6 | As Santas Escrituras |
| 89 | 5xCuYNX3QlPsxhKLbWlQO9 | 11 | No Me Amenaces | No Me Amenaces |
| 96 | 2GRmiDIcIwhQnkxakNyUy4 | 16 | Very Bad Truth (Kingston Universi… | Kapitel 8.3 - Very Bad Truth |
| 98 | 5720pe1PjNXoMcbDPmyeLW | 11 | Kleiner Eisbär: Hilf mir fliegen! | Kapitel 06: Hilf mir fliegen! |
| 109 | 1mRXGNVsfD9UtFw6r5YtzF | 11 | Lunar Archive | Outdoor Seating |
| 110 | 5XOQwf6vkcJxWG9zgqVEWI | 19 | Teenage Dream | Firework |
| 125 | 0rbHOp8B4CpPXXZSekySvv | 15 | Previa y Cachengue 2025 | Debi tirar mas fotos |
...
Read the original on annas-archive.li »
...
Read the original on www.jmail.world »
...
Read the original on dosaygo-studio.github.io »
ACR uses visual and audio data to identify what you’re watching on TV, including shows and movies on streaming services and cable TV, YouTube videos, Blu-ray discs, and more. Attorney General Paxton alleges that ACR also captures security and doorbell camera streams, media sent using Apple AirPlay or Google Cast, as well as the displays of other devices connected to the TV’s HDMI port, such as laptops and game consoles.
The lawsuit accuses Samsung, Sony, LG, Hisense, and TCL of “deceptively” prompting users to activate ACR, while “disclosures are hidden, vague, and misleading.” Samsung and Hisense, for example, capture screenshots of a TV’s display “every 500 milliseconds,” Paxton claims. The lawsuit alleges that TV manufacturers siphon viewing data back to each company “without the user’s knowledge or consent,” which they can then sell for targeted advertising.
Along with these allegations, Attorney General Paxton also raises concerns about TCL and Hisense’s ties to China, as they’re both based in the country. The lawsuit claims the TVs made by both companies are “Chinese-sponsored surveillance devices, recording the viewing habits of Texans at every turn.”
Attorney General Paxton accuses the five TV makers of violating the state’s Deceptive Trade Practices Act, which is meant to protect consumers from false, deceptive, or misleading practices. Paxton asks the court to impose a civil penalty and to block each company from collecting, sharing, or selling the ACR data they collect about Texas-based consumers. Samsung, Sony, LG, Hisense, and TCL didn’t immediately respond to a request for comment.
Vizio, which is now owned by Walmart, paid $2.2 million to the Federal Trade Commission and New Jersey in 2017 over similar allegations related to ACR.
“This conduct is invasive, deceptive, and unlawful,” Paxton says in a statement. “The fundamental right to privacy will be protected in Texas because owning a television does not mean surrendering your personal information to Big Tech or foreign adversaries.”
...
Read the original on www.theverge.com »
How we pwned X (Twitter), Vercel, Cursor, Discord, and hundreds of companies through a supply-chain attack
...
Read the original on gist.github.com »
Gemini 3 Flash is our latest model with frontier intelligence built for speed that helps everyone learn, build, and plan anything — faster.
Senior Director, Product Management, on behalf of the Gemini team
Google is releasing Gemini 3 Flash, a fast and cost-effective model built for speed. You can now access Gemini 3 Flash through the Gemini app and AI Mode in Search. Developers can access it via the Gemini API in Google AI Studio, Google Antigravity, Gemini CLI, Android Studio, Vertex AI and Gemini Enterprise.
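For developers, a minimal call through the Gemini API might look like the sketch below (using the google-genai Python SDK; the exact model identifier exposed by the API is an assumption here and may differ, e.g. carry a preview suffix):

from google import genai  # pip install google-genai

# Hedged sketch: "gemini-3-flash" is assumed from the announcement; check the
# model list in Google AI Studio for the exact identifier.
client = genai.Client()  # reads the GEMINI_API_KEY environment variable
response = client.models.generate_content(
    model="gemini-3-flash",
    contents="Explain, in two sentences, what a latency-optimized model is good for.",
)
print(response.text)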
Summaries were generated by Google AI. Generative AI is experimental.
It’s great for coding, complex analysis, and quick answers in interactive apps.
Gemini 3 Flash is now the default model in the Gemini app and AI Mode in Search.
Developers and everyday users can access Gemini 3 Flash via various Google platforms.
...
Read the original on blog.google »
AWS CEO Matt Garman outlined 3 solid reasons why companies should not focus on cutting junior developer roles, noting that they “are actually the most experienced with the AI tools”.
In a tech world obsessed with AI replacing human workers, Matt Garman, CEO of Amazon Web Services (AWS), is pushing back against one of the industry’s most popular cost-cutting ideas.
Speaking on WIRED’s The Big Interview podcast, Garman has a bold message for companies racing to cut costs with AI.
He was asked to explain why he once called replacing junior employees with AI “one of the dumbest ideas” he’d ever heard, and to expand on how he believes agentic AI will actually change the workplace in the coming years.
First, junior employees are often better with AI tools than senior staff.
Fresh grads have grown up with new technology, so they can adapt quickly. Many of them learn AI-powered tools while studying or during internships. They tend to explore new features, find quick methods to write code, and figure out how to get the best results from AI agents.
According to the 2025 Stack Overflow Developer Survey, 55.5% of early-career developers reported using AI tools daily in their development process, a higher rate than among more experienced developers.
This comfort with new tools allows them to work more efficiently. In contrast, senior developers have established workflows and may take more time to adopt new tools. Recent research shows that over half of Gen Z employees are actually helping senior colleagues upskill in AI.
Second, junior staff are usually the least expensive employees.
Junior employees usually get much less in salary and benefits, so removing them does not deliver huge savings. If a company is trying to save money, it doesn’t make that much financial sense.
So, when companies talk about increasing profit margins, junior employees should not be the default or only target. Real cost-cutting means looking at the whole company, because there are plenty of other places where expenses can be trimmed.
In fact, 30% of companies that laid off workers expecting savings ended up increasing expenses, and many had to rehire later.
Think of a company like a sports team. If you only keep veteran players and never recruit rookies, what happens when those veterans retire? You are left with no one who knows how to play the game.
Also, hiring people straight out of college brings new ways of thinking into the workplace. They have fresh ideas shaped by the latest trends and the motivation to innovate.
More importantly, they form the foundation of a company’s future workforce. If a company decides to stop hiring junior employees altogether, it cuts off its own talent pipeline. Over time, that leads to fewer leaders to promote from within.
A Deloitte report also notes that the tech workforce is expected to grow at roughly twice the rate of the overall U.S. workforce, highlighting the demand for tech talent. Without a strong pipeline of junior developers coming in, companies might face a tech talent shortage.
When there are not enough junior hires being trained today, teams struggle to fill roles tomorrow, especially as projects scale.
This isn’t just corporate talk. As the leader of one of the world’s largest cloud computing platforms, serving everyone from Netflix to the U.S. intelligence agencies, Garman has a front-row seat to how companies are actually using AI.
And what he is seeing makes him worried that short-term thinking could damage businesses for years to come. Garman’s point is grounded in long-term strategy. A company that relies solely on AI to handle tasks without training new talent could find itself short of people.
Still, Garman admits the next few years will be bumpy. “Your job is going to change,” he said. He believes AI will make both companies and their employees more productive.
When technology makes something easier, people want more of it. AI enables the creation of software faster, allowing companies to develop more products, enter new markets, and serve more customers.
Developers will be responsible for more than just writing code, with faster adaptation to new technologies becoming essential. But he has a hopeful message in the end.
That’s also why Geoffrey Hinton has advised that computer science degrees remain essential, which directly supports Garman’s point: fresh talent with a strong understanding of core fundamentals becomes crucial for filling these higher-value roles of the future.
“I’m very confident in the medium to longer term that AI will definitely create more jobs than it removes at first,” Garman said.
...
Read the original on www.finalroundai.com »
It may be just me, but I read this as “I don’t want to 😜 😜 but I’ll kill AdBlockers in Firefox for buckerinos 😂”. This disappoints and saddens me a lot, and I hope I’m wrong.
...
Read the original on infosec.press »
Your local government might be discussing surveillance tech like Flock cameras, facial recognition, or automated license plate readers right now. This map helps you find those meetings and take action.
Why this matters: Municipalities across the US are quietly adopting surveillance technologies in rapidly growing numbers, with over 80,000 cameras already out on the streets. These systems track residents’ movements, collect biometric data, and build massive databases of our daily lives.
alpr.watch scans meeting agendas for keywords like “flock,” “license plate reader,” “alpr,” and more. Each pin on the map shows where these conversations are happening so that you can make a difference.
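A minimal sketch of that kind of keyword scan (the site’s actual keyword list and matching rules aren’t published here, so this is only illustrative):

import re

# Illustrative keyword scan over meeting-agenda text; alpr.watch's real
# pipeline and keyword list are assumptions, not documented here.
KEYWORDS = ["flock", "license plate reader", "alpr", "automated license plate"]
PATTERN = re.compile(r"\b(" + "|".join(re.escape(k) for k in KEYWORDS) + r")\b", re.IGNORECASE)

def flag_agenda(text: str) -> list[str]:
    """Return the surveillance-related keywords found in an agenda."""
    return sorted({match.group(1).lower() for match in PATTERN.finditer(text)})

print(flag_agenda("Item 7: renewal of the Flock Safety ALPR camera contract"))
# -> ['alpr', 'flock']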
Data before mid-December may be unverified. All future flags are 100% moderator approved.
Automated License Plate Recognition (ALPR) systems use cameras and artificial intelligence to capture, read, and store license plate data from every passing vehicle.
These systems work 24/7, creating a massive database of where vehicles, and by extension people, travel. Every trip to the grocery store, doctor’s office, or place of worship gets recorded and stored.
Flock Safety is one of the largest manufacturers of ALPR cameras in the United States, marketing their systems to neighborhoods and law enforcement.
Flock cameras capture license plates, vehicle make/model, color, and other identifying features. This data is shared across a massive network of agencies and jurisdictions, creating a surveillance web that tracks millions of Americans.
History shows that surveillance systems expand beyond their original scope:
Systems marketed for “solving crimes” get used for immigration enforcement
These groups and individuals are leading the fight against mass surveillance. Consider supporting their work or getting involved locally.
...
Read the original on alpr.watch »
TL;DR: ty is an extremely fast Python type checker and language server, written in Rust, and designed as an alternative to tools like mypy, Pyright, and Pylance.
Today, we’re announcing the Beta release of ty. We now use ty exclusively in our own projects and are ready to recommend it to motivated users for production use.
At Astral, we build high-performance developer tools for the Python ecosystem. We’re best known for uv, our Python package manager, and Ruff, our linter and formatter.
Today, we’re announcing the Beta release of the next tool in the Astral toolchain: ty, an extremely fast Python type checker and language server, written in Rust.
ty was designed from the ground up to power a language server. The entire ty architecture is built around “incrementality”, enabling us to selectively re-run only the necessary computations when a user (e.g.) edits a file or modifies an individual function. This makes live updates extremely fast in the context of an editor or long-lived process.
You can install ty today with uv tool install ty@latest, or via our VS Code extension.
Like Ruff and uv, ty’s implementation was grounded in some of our core product principles:
An obsessive focus on performance. Without caching, ty is consistently between 10x and 60x faster than mypy and Pyright. When run in an editor, the gap is even more dramatic. As an example, after editing a load-bearing file in the PyTorch repository, ty recomputes diagnostics in 4.7ms: 80x faster than Pyright (386ms) and 500x faster than Pyrefly (2.38 seconds). ty is very fast!
Correct, pragmatic, and ergonomic. With features like first-class intersection types, advanced type narrowing, and sophisticated reachability analysis, ty pushes forward the state of the art in Python type checking, providing more accurate feedback and avoiding assumptions about user intent that often lead to false positives (a small sketch of this kind of reasoning follows this list). Our goal with ty is not only to build a faster type checker; we want to build a better type checker, and one that balances correctness with a deep focus on the end-user experience.
Built in the open. ty was built by our core team alongside dozens of active contributors under the MIT license, and the same goes for our editor extensions. You can run ty anywhere that you write Python (including in the browser).
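As a toy illustration of the kind of narrowing and reachability reasoning described above (this is ordinary annotated Python of the sort a checker like ty analyzes, not ty’s actual diagnostic output):

# Ordinary annotated Python; the inline notes describe what a type checker
# with narrowing and reachability analysis can conclude, not ty's exact output.

def describe(value: int | str | None) -> str:
    if value is None:
        return "nothing"
    if isinstance(value, str):
        # `value` is narrowed to `str` here, so .upper() is known to be safe
        return value.upper()
    # only `int` remains on this path, so the arithmetic below cannot fail
    return str(value + 1)

def dead_branch(flag: bool) -> str:
    if flag or not flag:
        return "always taken"
    # reachability analysis can flag this return as unreachable
    return "never reached"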
Even compared to other Rust-based language servers like Pyrefly, ty can run orders of magnitude faster when performing incremental updates on large projects.
ty also includes a best-in-class diagnostic system, inspired by the Rust compiler’s own world-class error messages. A single ty diagnostic can pull in context from multiple files at once to explain not only what’s wrong, but why (and, often, how to fix it).
Diagnostic output is the primary user interface for a type checker; we prioritized our diagnostic system from the start (with both humans and agents in mind) and view it as a first-class feature in ty.
If you use VS Code, Cursor, or a similar editor, we recommend installing the ty VS Code extension. The ty language server supports all the capabilities that you’d expect for a modern language server (Go to Definition, Symbol Rename, Auto-Complete, Auto-Import, Semantic Syntax Highlighting, Inlay Hints, etc.), and runs in any editor that implements the Language Server Protocol.
Following the Beta release, our immediate priority is supporting early adopters. From there, we’re working towards a Stable release next year, with the gap between the Beta and Stable milestones largely focusing on: (1) stability and bug fixes, (2) completing the long tail of features in the Python typing specification, and (3) first-class support for popular third-party libraries like Pydantic and Django.
On a longer time horizon, though, ty will power semantic capabilities across the Astral toolchain: dead code elimination, unused dependency detection, SemVer-compatible upgrade enforcement, CVE reachability analysis, type-aware linting, and more (including some that are too ambitious to say out loud just yet).
We want to make Python the most productive programming ecosystem on Earth. Just as with Ruff and uv, our commitment from here is that ty will get significantly better every week by working closely with our users. Thank you for building with us.
ty is the most sophisticated product we’ve built, and its design and implementation have surfaced some of the hardest technical problems we’ve seen at Astral. Working on ty requires a deep understanding of type theory, Python runtime semantics, and how the Python ecosystem actually uses Python.
I’d like to thank all those who contributed directly to the development of ty, including: Douglas Creager, Alex Waygood, David Peter, Micha Reiser, Andrew Gallant, Aria Desires, Carl Meyer, Zanie Blue, Ibraheem Ahmed, Dhruv Manilawala, Jack O’Connor, Zsolt Dollenstein, Shunsuke Shibayama, Matthew Mckee, Brent Westbrook, UnboundVariable, Shaygan Hooshyari, Justin Chapman, InSync, Bhuminjay Soni, Abhijeet Prasad Bodas, Rasmus Nygren, lipefree, Eric Mark Martin, Tomer Bin, Luca Chiodini, Brandt Bucher, Dylan Wilson, Eric Jolibois, Felix Scherz, Leandro Braga, Renkai Ge, Sumana Harihareswara, Takayuki Maeda, Max Mynter, med1844, William Woodruff, Chandra Kiran G, DetachHead, Emil Sadek, Jo, Joren Hammudoglu, Mahmoud Saada, Manuel Mendez, Mark Z. Ding, Simon Lamon, Suneet Tipirneni, Francesco Giacometti, Adam Aaronson, Alperen Keleş, charliecloudberry, Dan Parizher, Daniel Hollas, David Sherret, Dmitry, Eric Botti, Erudit Morina, François-Guillaume Fernandez, Fabrizio Damicelli, Guillaume-Fgt, Hugo van Kemenade, Josiah Kane, Loïc Riegel, Ramil Aleskerov, Samuel Rigaud, Soof Golan, Usul-Dev, decorator-factory, omahs, wangxiaolei, cake-monotone, slyces, Chris Krycho, Mike Perlov, Raphael Gaschignard, Connor Skees, Aditya Pillai, Lexxxzy, haarisr, Joey Bar, Andrii Turov, Kalmaegi, Trevor Manz, Teodoro Freund, Hugo Polloli, Nathaniel Roman, Victor Hugo Gomes, Nuri Jung, Ivan Yakushev, Hamir Mahal, Denys Zhak, Daniel Kongsgaard, Emily B. Zhang, Ben Bar-Or, Aleksei Latyshev, Aditya Pratap Singh, wooly18, Samodya Abeysiriwardane, and Pepe Navarro.
We’d also like to thank the Salsa team (especially Niko Matsakis, David Barsky, and Lukas Wirth) for their support and collaboration; the Elixir team (especially José Valim, Giuseppe Castagna, and Guillaume Duboc), whose work strongly influenced our approach to gradual types and intersections; and a few members of the broader Python typing community: Eric Traut, Jelle Zijlstra, Jia Chen, Sam Goldman, Shantanu Jain, and Steven Troxler.
...
Read the original on astral.sh »