10 interesting stories served every morning and every evening.
Sharyn Alfonsi’s “Inside CECOT” for 60 Minutes, which was censored by Bari Weiss, as it appeared on Canada’s Global TV app.
...
Read the original on archive.org »
...
Read the original on skyview.social »
...
Read the original on www.jmail.world »
...
Read the original on github.com »
uv installs packages faster than pip by an order of magnitude. The usual explanation is “it’s written in Rust.” That’s true, but it doesn’t explain much. Plenty of tools are written in Rust without being notably fast. The interesting question is what design decisions made the difference.
Charlie Marsh’s Jane Street talk and a Xebia engineering deep-dive cover the technical details well. The interesting parts are the design decisions: standards that enable fast paths, things uv drops that pip supports, and optimizations that don’t require Rust at all.
pip’s slowness isn’t a failure of implementation. For years, Python packaging required executing code to find out what a package needed.
The problem was setup.py. You couldn’t know a package’s dependencies without running its setup script. But you couldn’t run its setup script without installing its build dependencies. PEP 518 in 2016 called this out explicitly: “You can’t execute a setup.py file without knowing its dependencies, but currently there is no standard way to know what those dependencies are in an automated fashion without executing the setup.py file.”
This chicken-and-egg problem forced pip to download packages, execute untrusted code, fail, install missing build tools, and try again. Every install was potentially a cascade of subprocess spawns and arbitrary code execution. Installing a source distribution was essentially curl | bash with extra steps.
The fix came in stages:
* PEP 518 (2016) created pyproject.toml, giving packages a place to declare build dependencies without code execution. The TOML format was borrowed from Rust’s Cargo, which makes a Rust tool returning to fix Python packaging feel less like coincidence.
* PEP 517 (2017) separated build frontends from backends, so pip didn’t need to understand setuptools internals.
* PEP 621 (2020) standardized the [project] table, so dependencies could be read by parsing TOML rather than running Python.
* PEP 658 (2022) put package metadata directly in the Simple Repository API, so resolvers could fetch dependency information without downloading wheels at all.
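Taken together, these standards mean a resolver can learn everything it needs from static files. A minimal pyproject.toml illustrating the idea (the package name and pins here are made up):

```toml
# Build dependencies (PEP 518): discoverable without executing any code.
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

# Project metadata (PEP 621): runtime dependencies readable by parsing TOML.
[project]
name = "example-package"
version = "1.0.0"
requires-python = ">=3.9"
dependencies = [
    "requests>=2.31",
    "click>=8.0",
]
```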
PEP 658 went live on PyPI in May 2023. uv launched in February 2024. uv could be fast because the ecosystem finally had the infrastructure to support it. A tool like uv couldn’t have shipped in 2020. The standards weren’t there yet.
Other ecosystems figured this out earlier. Cargo has had static metadata from the start. npm’s package.json is declarative. Python’s packaging standards finally bring it to parity.
Speed comes from elimination. Every code path you don’t have is a code path you don’t wait for.
uv’s compatibility documentation is a list of things it doesn’t do:
No .egg support. Eggs were the pre-wheel binary format. pip still handles them; uv doesn’t even try. The format has been obsolete for over a decade.
No pip.conf. uv ignores pip’s configuration files entirely. No parsing, no environment variable lookups, no inheritance from system-wide and per-user locations.
No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install. You can opt in if you want it.
Virtual environments required. pip lets you install into system Python by default. uv inverts this, refusing to touch system Python without explicit flags. This removes a whole category of permission checks and safety code.
Stricter spec enforcement. pip accepts malformed packages that technically violate packaging specs. uv rejects them. Less tolerance means less fallback logic.
Ignoring requires-python upper bounds. When a package declares a constraint like requires-python: >=3.9,<4, uv checks only the lower bound and ignores the upper. This reduces resolver backtracking dramatically, since upper bounds are almost always wrong: packages declare <4 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive.
First-index wins by default. When multiple package indexes are configured, pip checks all of them. uv picks from the first index that has the package, stopping there. This prevents dependency confusion attacks and avoids extra network requests.
Each of these is a code path pip has to execute and uv doesn’t.
Some of uv’s speed comes from Rust. But not as much as you’d think. Several key optimizations could be implemented in pip today:
HTTP range requests for metadata. Wheel files are zip archives, and zip archives put their file listing at the end. uv tries PEP 658 metadata first, falls back to HTTP range requests for the zip central directory, then full wheel download, then building from source. Each step is slower and riskier. The design makes the fast path cover 99% of cases. None of this requires Rust.
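The zip trick is easy to demonstrate. A minimal sketch in Python (standing in for uv’s Rust implementation; the archive contents are made up): because zip files keep their central directory at the end, the last few kilobytes of a wheel are enough to list its files, which is exactly what a `Range: bytes=-N` request exploits:

```python
import io
import zipfile

# Build an in-memory zip to stand in for a wheel hosted on an index.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("pkg/__init__.py", "x" * 100_000)  # bulky payload
    zf.writestr("pkg-1.0.dist-info/METADATA", "Name: pkg\nVersion: 1.0\n")
data = buf.getvalue()

# Simulate "Range: bytes=-8192": fetch only the tail of the archive.
tail = data[-8192:]

# The tail contains the central directory and end-of-central-directory
# record, so the file listing is readable without the rest of the
# archive (file *contents* are not).
names = zipfile.ZipFile(io.BytesIO(tail)).namelist()
```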
Parallel downloads. pip downloads packages one at a time. uv downloads many at once. Any language can do this.
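A sketch of the idea, with a stub standing in for the network call (the package names and timings are made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(name):
    """Stand-in for an HTTP GET of a wheel; sleeps to simulate latency."""
    time.sleep(0.05)
    return name, b"wheel-bytes"

packages = ["flask", "jinja2", "werkzeug", "click", "itsdangerous"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(fetch, packages))
elapsed = time.perf_counter() - start

# Sequential fetching would take ~0.25 s of waiting; concurrent fetching
# takes ~0.05 s, because the waits overlap.
```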
Global cache with hardlinks. pip copies packages into each virtual environment. uv keeps one copy globally and uses hardlinks (or copy-on-write on filesystems that support it). Installing the same package into ten venvs takes the same disk space as one. Any language with filesystem access can do this.
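The mechanism is a plain filesystem call. A sketch (the directory names are hypothetical, not uv’s actual layout):

```python
import os
import tempfile

# One global cache directory and two "virtual environments".
cache = tempfile.mkdtemp(prefix="uv-cache-")
venv_a = tempfile.mkdtemp(prefix="venv-a-")
venv_b = tempfile.mkdtemp(prefix="venv-b-")

# A cached file, e.g. a module extracted from a wheel.
src = os.path.join(cache, "module.py")
with open(src, "w") as f:
    f.write("VALUE = 42\n")

# Hardlink it into each environment: no bytes are copied, and all
# three paths resolve to the same inode on disk.
os.link(src, os.path.join(venv_a, "module.py"))
os.link(src, os.path.join(venv_b, "module.py"))

same_inode = (
    os.stat(src).st_ino
    == os.stat(os.path.join(venv_a, "module.py")).st_ino
    == os.stat(os.path.join(venv_b, "module.py")).st_ino
)
```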
Python-free resolution. pip needs Python running to do anything, and invokes build backends as subprocesses to get metadata from legacy packages. uv parses TOML and wheel metadata natively, only spawning Python when it hits a setup.py-only package that has no other option.
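Wheel METADATA uses RFC 822-style headers, so dependency information can be read with a plain text parser, no interpreter subprocess required. A sketch using made-up metadata for a hypothetical package:

```python
import email

# Contents of a hypothetical wheel's METADATA file.
raw = """\
Metadata-Version: 2.1
Name: example
Version: 1.0.0
Requires-Python: >=3.9
Requires-Dist: requests>=2.31
Requires-Dist: click; extra == "cli"
"""

# The format is RFC 822 headers, so the stdlib email parser handles it.
msg = email.message_from_string(raw)
deps = msg.get_all("Requires-Dist")
```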
PubGrub resolver. uv uses the PubGrub algorithm, originally from Dart’s pub package manager. Both pip and PubGrub use backtracking, but PubGrub applies conflict-driven clause learning from SAT solvers: when it hits a dead end, it analyzes why and skips similar dead ends later. This makes it faster on complex dependency graphs and better at explaining failures. pip could adopt PubGrub without rewriting in Rust.
Zero-copy deserialization. uv uses rkyv to deserialize cached data without copying it. The data format is the in-memory format. Libraries like FlatBuffers achieve this in other languages, but rkyv integrates tightly with Rust’s type system.
Thread-level parallelism. Python’s GIL forces parallel work into separate processes, with IPC overhead and data copying. Rust can parallelize across threads natively, sharing memory without serialization boundaries. This matters most for resolution, where the solver explores many version combinations.
No interpreter startup. Every time pip spawns a subprocess, it pays Python’s startup cost. uv is a single static binary with no runtime to initialize.
Compact version representation. uv packs versions into u64 integers where possible, making comparison and hashing fast. Over 90% of versions fit in one u64. This is micro-optimization that compounds across millions of comparisons.
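A sketch of the packing idea in Python (the field layout here is illustrative, not uv’s actual encoding):

```python
# Pack a major.minor.patch release into one 64-bit integer so that
# comparison and hashing become single machine-word operations.
def pack(major: int, minor: int, patch: int) -> int:
    # Versions with large components (or epochs, pre-releases, etc.)
    # would fall back to a slower general representation.
    assert max(major, minor, patch) < (1 << 16)
    return (major << 32) | (minor << 16) | patch

# Integer ordering matches tuple ordering for versions that fit.
assert pack(3, 12, 1) > pack(3, 9, 18)
assert pack(1, 0, 0) < pack(1, 0, 1) < pack(2, 0, 0)
```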
These are real advantages. But they’re smaller than the architectural wins from dropping legacy support and exploiting modern standards.
uv is fast because of what it doesn’t do, not because of what language it’s written in. The standards work of PEP 518, 517, 621, and 658 made fast package management possible. Dropping eggs, pip.conf, and permissive parsing made it achievable. Rust makes it a bit faster still.
pip could implement parallel downloads, global caching, and metadata-only resolution tomorrow. It doesn’t, largely because backwards compatibility with fifteen years of edge cases takes precedence. But it means pip will always be slower than a tool that starts fresh with modern assumptions.
Other package managers could learn from this: static metadata, no code execution to discover dependencies, and the ability to resolve everything upfront before downloading. Cargo and npm have operated this way for years. If your ecosystem requires running arbitrary code to find out what a package needs, you’ve already lost.
...
Read the original on nesbitt.io »
Christmas is often regarded as a time for goodwill, but one young UK couple’s act of kindness 50 years ago changed their lives forever.
On 23 December 1975, Rob Parsons and his wife Dianne were preparing for Christmas at their Cardiff home when they heard a knock at the door.
On their doorstep stood a man with a bin bag containing his possessions in his right hand and a frozen chicken in his left.
Rob studied the man’s face and vaguely remembered him as Ronnie Lockwood, someone he would occasionally see at Sunday school as a boy, and whom he had been told to be kind to because he was “a bit different”.
“I said ‘Ronnie, what’s with the chicken?’ He said ‘somebody gave it to me for Christmas’. And then I said two words that changed all of our lives.
“And I’m not exactly sure why I said them. I said come in.”
...
Read the original on www.bbc.co.uk »
People examining documents released by the Department of Justice in the Jeffrey Epstein case discovered that some of the file redaction can be undone with Photoshop techniques, or by simply highlighting text to paste into a word processing file.
Un-redacted text from these documents began circulating through social media on Monday evening. An exhibit in a civil case in the Virgin Islands against Darren K Indyke and Richard D Kahn, two executors of Epstein’s estate, contains redacted allegations explaining how Epstein and his associates had facilitated the sexual abuse of children. The exhibit was the second amended complaint in the state case against Indyke and Kahn.
In section 85, the redacted portion states: “Between September 2015 and June 2019, Indyke signed (FAC) for over $400,000 made payable to young female models and actresses, including a former Russian model who received over $380,000 through monthly payments of $8,333 made over a period of more than three and a half years until the middle of 2019.”
Prosecutors in the Virgin Islands settled their civil sex-trafficking case against Epstein’s estate, Indyke and Kahn in 2022 for $105m, plus one half of the proceeds from the sale of Little St James, the island on which Epstein resided and on which many of his crimes occurred. The justice department press release announcing the settlement did not include an admission of liability.
Indyke, an attorney who represented Epstein for decades, has not been criminally indicted by federal authorities. He was hired by the Parlatore Law Group in 2022, before the justice department settled the Epstein case. That firm represents the defense secretary, Pete Hegseth, and previously represented Donald Trump in his defense against charges stemming from the discovery of classified government documents stored at Trump’s Florida estate. Calls and email seeking comment from Indyke and the Parlatore Law Group have not yet been returned.
Trump has repeatedly denied any knowledge of or involvement in Epstein’s criminal activities and any wrongdoing.
Other sections further allege how Epstein’s enterprise concealed crimes.
“Defendants also attempted to conceal their criminal sex trafficking and abuse conduct by paying large sums of money to participant-witnesses, including by paying for their attorneys’ fees and case costs in litigation related to this conduct,” reads one redacted passage.
“Epstein also threatened harm to victims and helped release damaging stories about them to damage their credibility when they tried to go public with their stories of being trafficked and sexually abused. Epstein also instructed one or more Epstein Enterprise participant-witnesses to destroy evidence relevant to ongoing court proceedings involving Defendants’ criminal sex trafficking and abuse conduct.”
Redactions of sections 184 through 192 of the document describe property taxes paid by companies incorporated by Epstein on properties that were not on the balance sheet for those firms.
“For instance, Cypress’s Balance Sheet as of December 31, 2018 did not reflect any assets other than cash of $18,824. Further, Cypress reported only $301 in expenses for the year ended December 31, 2018, despite it paying $106,394.60 in Santa Fe property taxes on November 6, 2018,” reads one redacted passage.
“Similarly, in 2017, Cypress reported as its only asset cash in the amount of $29,736 and expenses of $150, despite it paying $55,770.41 and $113,679.56 in Santa Fe property taxes during 2017.”
The Epstein Files Transparency Act signed into law last month permits the Department of Justice “to withhold certain information such as the personal information of victims and materials that would jeopardize an active federal investigation”.
It was unclear how the property material complies with the redaction standard under the law. An inquiry to the Department of Justice has not yet been answered.
...
Read the original on www.theguardian.com »
I’ve been reading Lord of the Rings for two months and I’m just at the end of the first part. It’s not because I’m not enjoying it. It’s one of the most enjoyable reading experiences I can remember.
From the beginning, I’ve read the whole thing aloud. I’ve found reading aloud helpful for staying engaged — limiting myself to mouth-speed rather than eye-speed means I won’t rush, miss important details, and then lose interest, which has always been a problem for me.
At first I was anxious to read a 1,500-page book this way, because it would take so long. But, as someone pointed out to me, if I’m enjoying it, why would I want to be done with it sooner?
So I tried slowing down even more, and discovered something. I slowed to a pace that felt almost absurd, treating each sentence as though it might be a particularly important one. I gave each one maybe triple the usual time and attention, ignoring the fact that there are hundreds of pages to go.
This leisurely pace made Middle-Earth blossom before my eyes. When I paused after each comma, and let each sentence ring for a small moment after the period, the events of the story reached me with more weight and strength. That extra time gave space for Tolkien’s images and moods to propagate in my mind, which they did automatically.
Some part of me still wanted to rush and get on with it, to make good time, to gloss over the songs and lore to get to Moria and Mount Doom and the other marquee moments of the story. But the more I ignored that impulse, the better the experience got.
By offering the book about triple the usual amount of attentiveness, I was getting about triple the storyness (i.e. meaning, engagement, literary pleasure). Whatever the thing is that I’m seeking when I pick up a novel in the first place, there’s much more of it available at this pace.
This effect reminded me of a paradox around eating I recognized long ago. When you slow down your eating speed, say to half or a third your default speed, you get much more enjoyment out of a smaller amount of food. The extra attention given to each bite allows more of the “good stuff,” whatever that is exactly, to reach you.
What’s paradoxical is that it’s precisely the seeking of that “good stuff” that normally drives me to eat so quickly, and miss most of what I’m seeking. When you try to barrel ahead to access the good stuff quicker, you get less of it in the end. Slow down and much more of it is released.
And it’s released automatically, in both reading and eating. You don’t have to search it out. The good stuff (the meaning in the text, the pleasure in the eating) just rises up to meet you in that extra time you give it. Slowing down, and offering more time to the act of consumption, immediately increases reading comprehension (and eating comprehension).
Both are analogous to slowing down while you vacuum a carpet. If you pass the vacuum head too quickly, you miss half the dirt. Slow down, and you can hear how much more grit is sent skittering up the tube. The suction and bristles are working, but they need more time to do their work fully, to draw up the deeper-lying stuff.
It seems that my default consumption speeds for reading and eating (and maybe everything else) reduce the rewards of those things significantly, undermining the point of doing either.
Part of it is my own impatience. But I also suspect that modern living, with its infinite supply of consumables, tends to push our rate-of-intake dials too high. I’m not going to run out of books, or snacks, or opportunities to learn something. There’s always more, so not every crust of bread or printed page needs to be appreciated fully.
Internally though, the mind is juggling like Lucy and Ethel on the conveyor belt at the chocolate factory. Our receptors for meaning and appreciation, like the vacuum head, need more time to do their full work, to make all the connections they’re designed to make.
It might sound like I’m just offering clichés — less is more, stop and smell the roses, take your time — and I guess I am. But clichés suffer the same issue: they are often profound insights, consumed and passed on too rapidly for their real meaning to register anymore. You really should stop and smell roses, as you know if you’re in the habit of doing that.
At least see what happens when you reduce your consumption speed — of anything, but especially books, information, and food — by a half, or two thirds. Notice that (1) something in you really wants to plow through at the highest viable setting, and (2) how much more of the reward is released when you slow down anyway.
As far as I can tell, almost everything becomes more satisfying when you give it more time and intention, even things like checking the mailbox or writing a shopping list.
Slowing down your rate of consumption will inevitably change what you want to consume. Reading throwaway news articles or AI slop with great care and attention is only going to show you how empty of value it is. Reading dense writing in inky old books, crafted for your mind by great masters, becomes easier without the rushed pace, and the meaning just blooms out of it.
Same with food. Try to savor a cheap, waxy “chocolate” bar, or a bag of store-brand cheese puffs, and you discover a harsh taste that you don’t want to look at too closely. Enjoy a homemade pastry with great attention, and discover there’s even more in it than you realized.
Mass production is good in so many ways, but the faster we tend to consume its fruits, the more we end up seeking things for their glossy, candied surfaces. The more we go for these surface-level rewards, the more the culture focuses on offering only that part — such as TikTok videos, processed food, CGI-forward movies, and public discourse in the form of unexamined talking points.
Who knows how far we’ve drifted from the best modes of consuming the things we value. Once something becomes a norm, it seems like an appropriate standard, no matter how much has been lost. Apparently, reading silently and alone was unusual until as late as the 18th century. Certainly sit-down meals and cooking at home were.
I don’t mean to sound like a scold. Let’s say none of this is morally good or bad. It’s just that in so much of what we do, we could be getting much more of the part of it that we really seek — but it’s only available at slower speeds.
If you’re curious, try consuming things more slowly, so slowly it seems silly to others — say a third your habitual speed — and see what rises up to meet you.
Recently I opened a discussion forum for Raptitude readers who want to give something up for the month of December (alcohol, social media, snacks, etc).
It’s been a real success, and many people want to do something similar in January. If you want to quit something, or just give it up for a month, you’re invited to join.
Follow this link at the end of this post to get an invite.
...
Read the original on www.raptitude.com »
Flock Exposed Its AI-Powered Cameras to the Internet. We Tracked Ourselves
Flock left at least 60 of its people-tracking Condor PTZ cameras live streaming and exposed to the open internet.
I am standing on the corner of Harris Road and Young Street outside of the Crossroads Business Park in Bakersfield, California, looking up at a Flock surveillance camera bolted high above a traffic signal. On my phone, I am watching myself in real time as the camera records and livestreams me—without any password or login—to the open internet. I wander into the intersection, stare at the camera and wave. On the livestream, I can see myself clearly. Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.
Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days’ worth of archived video, change settings, view log files, and run diagnostics. Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. According to marketing material on Flock’s website, Condor cameras can be set to automatically zoom in on people’s faces as they walk through a parking lot, down a public street, or play on a playground, or they can be controlled manually.
We watched Condor cameras zoom in on a woman walking her dog on a bike path in suburban Atlanta; follow a man walking through a Macy’s parking lot in Bakersfield; surveil children swinging on a swingset at a playground; and film high-res video of people sitting at a stoplight in traffic. In one case, we were able to watch a man rollerblade down Brookhaven, Georgia’s Peachtree Creek Greenway bike path. The Flock camera zoomed in on him and tracked him as he rolled past. Minutes later, he showed up on another exposed camera livestream further down the bike path. The camera’s resolution was good enough that we were able to see that, when he stopped beneath one of the cameras, he was watching rollerblading videos on his phone.
The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon “GainSec” Gaines, who recently found numerous vulnerabilities in several other models of Flock’s automated license plate reader (ALPR) cameras. They shared the details of what they found with me, and I verified many of the details seen in the exposed portals by driving to Bakersfield to walk in front of two cameras there while I watched myself on the livestream. I also pulled Flock’s contracts with cities for Condor cameras, pulled details from company presentations about the technology, and geolocated a handful of the cameras to cities and towns across the United States. Jordan also filmed himself in front of several of the cameras on the Peachtree Creek Greenway bike path.
Jordan said he and Gaines discovered many of the exposed cameras with Shodan, an internet-of-things search engine that researchers regularly use to identify improperly secured devices. After finding links to the feed, “immediately, we were just without any username, without any password, we were just seeing everything from playgrounds to parking lots with people Christmas shopping and unloading their stuff into cars,” Jordan told me in an interview. “I think it was like the first time that I actually got like immediately scared … I think the one that affected me most was a playground. You could see unattended kids, and that’s something I want people to know about so they can understand how dangerous this is.” In a YouTube video about his research, Jordan said he was able to use footage pulled from the exposed feed to identify specific people using open-source investigation tools, in order to show how trivially an exposure like this could be abused.
iCloud, Mega, and as a torrent. Archivists have uploaded the 60 Minutes episode Bari Weiss spiked.
...
Read the original on www.404media.co »
We are pleased to announce the release of Ruby 4.0.0. Ruby 4.0 introduces “Ruby Box” and “ZJIT”, and adds many improvements.
Ruby Box is a new (experimental) feature that provides isolation of definitions. It is enabled when the environment variable RUBY_BOX=1 is set. The class is Ruby::Box.
Definitions loaded in a box are isolated to that box. Ruby Box can isolate monkey patches, changes to global and class variables, class and module definitions, and loaded native and Ruby libraries from other boxes.
* Run test cases in a box to protect other tests when a test case monkey-patches something
* Run web app boxes in parallel to execute blue-green deployment on an app server within a single Ruby process
* Run web app boxes in parallel to evaluate dependency updates for a certain period of time, checking response diffs using Ruby code
* Serve as the foundational (low-level) API for implementing some kind of “package” (high-level) API (not yet designed)
For the detail of “Ruby Box”, see Ruby::Box. [Feature #21311] [Misc #21385]
ZJIT is a new just-in-time (JIT) compiler, developed as the next generation of YJIT. You need Rust 1.85.0 or newer to build Ruby with ZJIT support, and ZJIT is enabled when --zjit is specified.
We’re building a new compiler for Ruby because we want to both raise the performance ceiling (bigger compilation unit size and SSA IR) and encourage more outside contribution (by becoming a more traditional method compiler). See our blog post for more details.
ZJIT is faster than the interpreter, but not yet as fast as YJIT. We encourage you to experiment with ZJIT, but maybe hold off on deploying it in production for now. Stay tuned for Ruby 4.1 ZJIT.
Ractor, Ruby’s parallel execution mechanism, has received several improvements. A new class, Ractor::Port, was introduced to address issues related to message sending and receiving (see our blog post). Additionally, Ractor.shareable_proc makes it easier to share Proc objects between Ractors.
On the performance side, many internal data structures have been improved to significantly reduce contention on a global lock, unlocking better parallelism. Ractors also now share less internal data, resulting in less CPU cache contention when running in parallel.
Ractor was first introduced in Ruby 3.0 as an experimental feature. We aim to remove its “experimental” status next year.
*nil no longer calls nil.to_a, similar to how **nil does not call nil.to_hash. [Feature #21047]
Logical binary operators (||, &&, and, and or) at the beginning of a line now continue the previous line, like a fluent dot. For example, `x = a` followed on the next line by `|| b` is now parsed the same as `x = a || b` written on a single line.
We only list stdlib changes that are notable feature changes.
Other changes are listed in the following sections. Where a gem has GitHub releases, we also list its release history since the version bundled with Ruby 3.4.0.
The following bundled gems are promoted from default gems.
The following default gem is added.
The following default gems are updated.
The following bundled gems are updated.
Ruby 4.0 bundles RubyGems and Bundler version 4. See the following links for details.
Dropped support for MSVC versions older than 14.0 (_MSC_VER 1900).
This means Visual Studio 2015 or later is now required.
The following methods were removed from Ractor due to the addition of Ractor::Port:
Process::Status#& and Process::Status#>> have been removed. They were deprecated in Ruby 3.3. [Bug #19868]
rb_path_check has been removed. This function was used for $SAFE path checking, which was removed in Ruby 2.7, and was already deprecated. [Feature #20971]
A backtrace for an ArgumentError of “wrong number of arguments” now includes the receiver’s class or module name (e.g., in 'Foo#bar' instead of in 'bar'). [Bug #21698]
Backtraces no longer display internal frames. Methods defined in internal Ruby files now appear as if they were called from the user’s Ruby source file, consistent with how C-implemented methods are displayed. [Bug #20968]
$ ruby -e '[1].fetch_values(42)'
<internal:array>:211:in 'Array#fetch': index 42 outside of array bounds: -1...1 (IndexError)
        from <internal:array>:211:in 'block in Array#fetch_values'
        from <internal:array>:211:in 'Array#fetch_values'
        from -e:1:in '<main>'
$ ruby -e '[1].fetch_values(42)'
-e:1:in 'Array#fetch_values': index 42 outside of array bounds: -1...1 (IndexError)
        from -e:1:in '<main>'
The CGI library has been removed from the default gems. Only cgi/escape is now provided, which supplies the following methods:
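The method list itself did not survive extraction. As a sketch, the HTML-escaping helpers from cgi/escape (also available on current Rubies) look like this:

```ruby
require "cgi/escape"  # the escaping subset that remains bundled with Ruby

p CGI.escapeHTML("<b>&</b>")                    # => "&lt;b&gt;&amp;&lt;/b&gt;"
p CGI.unescapeHTML("&lt;b&gt;&amp;&lt;/b&gt;")  # => "<b>&</b>"
```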
With the move of Set from the stdlib to a core class, set/sorted_set.rb has been removed, and SortedSet is no longer an autoloaded constant. Please install the sorted_set gem and require 'sorted_set' to use SortedSet. [Feature #21287]
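Since Set is now a core class, basic usage needs no require on Ruby 4.0; the sketch below keeps the require so it also runs on older Rubies:

```ruby
require "set"  # no-op on Ruby 4.0, needed on older versions

s = Set[1, 2, 2, 3]  # duplicates collapse on construction
p s.size             # => 3
p s.include?(2)      # => true

union = (s | Set[4]).to_a.sort
p union              # => [1, 2, 3, 4]
```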
The default behavior of automatically setting the Content-Type header
to application/x-www-form-urlencoded for requests with a body
(e.g., POST, PUT) when the header was not explicitly set has been
removed. If your application relied on this automatic default, your
requests will now be sent without a Content-Type header, potentially
breaking compatibility with certain servers.
[GH-net-http #205]
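A minimal sketch, using a hypothetical endpoint, of setting the header explicitly under the new behavior:

```ruby
require "net/http"
require "uri"

uri = URI("https://example.com/submit")  # hypothetical endpoint
req = Net::HTTP::Post.new(uri)

# Ruby no longer defaults this header for requests with a body,
# so set it explicitly if your server requires it:
req["Content-Type"] = "application/x-www-form-urlencoded"
req.body = URI.encode_www_form(name: "alice")

p req["Content-Type"]  # => "application/x-www-form-urlencoded"
p req.body             # => "name=alice"
```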
rb_thread_fd_close is deprecated and now a no-op. If you need to expose
file descriptors from C extensions to Ruby code, create an IO instance
using RUBY_IO_MODE_EXTERNAL and use rb_io_close(io) to close it (this
also interrupts and waits for all pending operations on the IO
instance). Directly closing file descriptors does not interrupt pending
operations, and may lead to undefined behaviour. In other words, if two
IO objects share the same file descriptor, closing one does not affect
the other. [Feature #18455]
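The Ruby-level counterpart of this guidance can be sketched as follows: closing an IO interrupts a read blocked in another thread, rather than leaving the pending operation undefined (this behavior is observable on current Rubies as well):

```ruby
r, w = IO.pipe

reader = Thread.new do
  r.read(1)  # blocks until data arrives or the IO is closed
rescue IOError => e
  e.class
end

Thread.pass until reader.status == "sleep"  # wait for the read to block
r.close  # interrupts and wakes the pending read

p reader.value  # => IOError
```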
rb_thread_call_with_gvl now works with or without the GVL.
This allows gems to avoid checking ruby_thread_has_gvl_p.
Please still be diligent about the GVL. [Feature #20750]
A C API for Set has been added. The following methods are supported: [Feature #21459]
* Class#new (e.g., Object.new) is faster in all cases, but especially when passing keyword arguments. This has also been integrated into YJIT and ZJIT. [Feature #21254]
* GC heaps of different size pools now grow independently, reducing memory usage when only some pools contain long-lived objects
* GC sweeping is faster on pages of large objects
* “Generic ivar” objects (String, Array, TypedData, etc.) now use a new internal “fields” object for faster instance variable access
* The GC avoids maintaining an internal id2ref table until it is first used, making object_id allocation and GC sweeping faster
* object_id and hash are faster on Class and Module objects
* Random, Enumerator::Product, Enumerator::Chain, Addrinfo,
StringScanner, and some internal objects are now write-barrier protected,
which reduces GC overhead.
A lot of work has gone into making Ractors more stable, performant, and usable. These improvements bring the Ractor implementation closer to leaving its experimental status.
* Performance improvements
  * Frozen strings and the symbol table internally use a lock-free hash set [Feature #21268]
  * Class (and generic ivar) instance variable access is faster and avoids locking
  * CPU cache contention is avoided in object allocation by using a per-ractor counter
  * CPU cache contention is avoided in xmalloc/xfree by using a thread-local counter
...
Read the original on www.ruby-lang.org »