10 interesting stories served every morning and every evening.
(Photography)
Here’s a photo of a Christmas tree, as my camera’s sensor sees it:
This is because while the camera’s analog-to-digital converter (ADC) can theoretically output values from 0 to 16382, the data doesn’t cover that whole range:
The real range of ADC values is ~2110 to ~13600. Let’s set those values as the black and white points of the image:
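In code, that adjustment is a simple rescale. A minimal sketch with NumPy, where the random array stands in for real sensor data (the variable names and exact levels are illustrative):

    import numpy as np

    # Map the sensor's real range onto [0, 1]: everything at or below the
    # black level becomes 0, everything at or above the white level becomes 1.
    black, white = 2110.0, 13600.0
    raw = np.random.uniform(2000, 14000, (4, 4)).astype(np.float32)  # stand-in
    linear = np.clip((raw - black) / (white - black), 0.0, 1.0)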
Much better, but it’s still more monochromatic than I remember the tree being. Camera sensors aren’t actually able to see color: they only measure how much light hit each pixel.
In a color camera, the sensor is covered by a grid of alternating color filters:
Let’s color each pixel the same as the filter it’s looking through:
This version is more colorful, but each pixel only has one third of its RGB color. To fix this, I just averaged each pixel’s values with those of its neighbors:
Applying this process to the whole photo gives the lights some color:
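A minimal sketch of that naive demosaicing, assuming an RGGB Bayer pattern (the actual pattern and edge handling vary by camera): scatter each sensor value into its filter’s channel, then fill in each pixel’s missing channels by averaging whatever samples exist in its 3x3 neighborhood.

    import numpy as np

    def box_sum(img: np.ndarray) -> np.ndarray:
        # Sum over each pixel's 3x3 neighborhood (edges padded).
        h, w = img.shape[:2]
        p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
        return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))

    def naive_demosaic(mosaic: np.ndarray) -> np.ndarray:
        """`mosaic` is a 2-D array of linear sensor values under an RGGB
        filter grid. Each output pixel averages the samples of each color
        that exist in its 3x3 neighborhood -- the naive approach above."""
        h, w = mosaic.shape
        rgb = np.zeros((h, w, 3), dtype=np.float32)
        mask = np.zeros_like(rgb)
        for (dy, dx), ch in {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}.items():
            rgb[dy::2, dx::2, ch] = mosaic[dy::2, dx::2]
            mask[dy::2, dx::2, ch] = 1.0
        return box_sum(rgb) / box_sum(mask)  # per-channel neighborhood average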
However, the image is still very dark. This is because monitors don’t have as much dynamic range as the human eye or a camera sensor: even if you are using an OLED, the screen still has some ambient light reflecting off of it, limiting how black it can get.
There’s also another, sneakier factor causing this:
Our perception of brightness is non-linear.
If brightness values are quantized linearly, most of the ADC bins will be wasted on nearly identical shades of white while every other tone is crammed into the bottom. Because this is an inefficient use of memory, most color spaces assign extra bins to darker colors:
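A quick way to see the effect, as a sketch: count how many 8-bit codes land in the darkest 20% of linear brightness under each scheme (the 2.2 exponent is the common gamma approximation):

    import numpy as np

    codes = np.arange(256) / 255.0
    # Linear encoding: each code is the brightness itself.
    print((codes < 0.2).sum())          # 51 codes for the darkest 20%
    # Gamma encoding: code c stores brightness (c/255)**2.2.
    print((codes ** 2.2 < 0.2).sum())   # ~123 codes for the same range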
As a result, if the linear data is displayed directly, it will appear much darker than it should.
Both problems can be solved by applying a non-linear curve to each color channel to brighten up the dark areas… but this doesn’t quite work out:
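To a first approximation, that curve is a power law. A sketch, assuming `linear` holds the demosaiced image from above scaled to [0, 1] (real pipelines use the piecewise sRGB transfer function rather than a bare gamma):

    import numpy as np

    # A gamma of 1/2.2 lifts the shadows dramatically while leaving
    # black at black and white at white.
    curved = np.power(np.clip(linear, 0.0, 1.0), 1 / 2.2)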
Some of this green cast is caused by the camera sensor being intrinsically more sensitive to green light, but some of it is my fault: There are twice as many green pixels in the filter matrix. When combined with my rather naive demosaicing, this resulted in the green channel being boosted even higher.
In either case, it can be fixed with proper white balance: equalize the channels by multiplying each one by a constant.
However, because the image is now non-linear, I have to go back a step to do this. Here’s the dark image from before with all the values temporarily scaled up so I can see the problem:
… here’s that image with the green taken down to match the other channels:
… and after re-applying the curve:
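Putting that together, as a sketch: scale the channels while the data is still linear, then re-apply the curve. These gains are invented for illustration; real ones come from measuring something neutral in the scene.

    import numpy as np

    gains = np.array([2.0, 1.0, 1.7], dtype=np.float32)  # R, G, B (made up)
    balanced = np.clip(linear * gains, 0.0, 1.0)         # still linear data
    final = np.power(balanced, 1 / 2.2)                  # re-apply the curve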
This is really just the bare minimum: I haven’t done any color calibration, the white balance isn’t perfect, the black points are too high, there’s lots of noise that needs to be cleaned up…
Additionally, applying the curve to each color channel accidentally desaturated the highlights. This effect looks rather good — and is what we’ve come to expect from film — but it has de-yellowed the star. It’s possible to separate out the luminance and curve it while preserving color. On its own, this would make the LED Christmas lights into an oversaturated mess, but combining both methods can produce nice results.
For comparison, here’s the image my camera produced from the same data:
Far from being an “unedited” photo, a huge amount of math has gone into making an image that nicely represents what the subject looks like in person.
There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” than the original: they are different renditions of the same data.
In the end, replicating human perception is hard, and it’s made harder when constrained to the limitations of display technology or printed images. There’s nothing wrong with tweaking the image when the automated algorithms make the wrong call.
...
Read the original on maurycyz.com »
Sharyn Alfonsi’s “Inside CECOT” for 60 Minutes, which was censored by Bari Weiss, as it appeared on Canada’s Global TV app.
...
Read the original on archive.org »
...
Read the original on skyview.social »
...
Read the original on github.com »
uv installs packages faster than pip by an order of magnitude. The usual explanation is “it’s written in Rust.” That’s true, but it doesn’t explain much. Plenty of tools are written in Rust without being notably fast. The interesting question is what design decisions made the difference.
Charlie Marsh’s Jane Street talk and a Xebia engineering deep-dive cover the technical details well. The parts worth dwelling on are the design decisions: standards that enable fast paths, things uv drops that pip supports, and optimizations that don’t require Rust at all.
pip’s slowness isn’t a failure of implementation. For years, Python packaging required executing code to find out what a package needed.
The problem was setup.py. You couldn’t know a package’s dependencies without running its setup script. But you couldn’t run its setup script without installing its build dependencies. PEP 518 in 2016 called this out explicitly: “You can’t execute a setup.py file without knowing its dependencies, but currently there is no standard way to know what those dependencies are in an automated fashion without executing the setup.py file.”
This chicken-and-egg problem forced pip to download packages, execute untrusted code, fail, install missing build tools, and try again. Every install was potentially a cascade of subprocess spawns and arbitrary code execution. Installing a source distribution was essentially curl | bash with extra steps.
The fix came in stages:
* PEP 518 (2016) created pyproject.toml, giving packages a place to declare build dependencies without code execution. The TOML format was chosen partly because Rust’s Cargo already used it, which makes a Rust tool returning to fix Python packaging feel less like coincidence.
* PEP 517 (2017) separated build frontends from backends, so pip didn’t need to understand setuptools internals.
* PEP 621 (2020) standardized the [project] table, so dependencies could be read by parsing TOML rather than running Python.
* PEP 658 (2022) put package metadata directly in the Simple Repository API, so resolvers could fetch dependency information without downloading wheels at all.
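Here’s roughly what that last step enables, sketched in Python: resolving a package’s dependencies from the index alone, with no wheel download. The JSON field names follow PEP 691/PEP 714 as I understand them; treat the details as illustrative rather than definitive.

    import requests

    # Ask PyPI's simple index for JSON (PEP 691) instead of HTML.
    resp = requests.get(
        "https://pypi.org/simple/rich/",
        headers={"Accept": "application/vnd.pypi.simple.v1+json"},
    )
    wheel = next(f for f in resp.json()["files"] if f["filename"].endswith(".whl"))

    # PEP 658/714: if static metadata was uploaded, it lives at <wheel URL>.metadata.
    if wheel.get("core-metadata") or wheel.get("dist-info-metadata"):
        metadata = requests.get(wheel["url"] + ".metadata").text
        print([line for line in metadata.splitlines()
               if line.startswith("Requires-Dist:")])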
PEP 658 went live on PyPI in May 2023. uv launched in February 2024. uv could be fast because the ecosystem finally had the infrastructure to support it. A tool like uv couldn’t have shipped in 2020. The standards weren’t there yet.
Other ecosystems figured this out earlier. Cargo has had static metadata from the start. npm’s package.json is declarative. Python’s packaging standards finally bring it to parity.
Speed comes from elimination. Every code path you don’t have is a code path you don’t wait for.
uv’s compatibility documentation is a list of things it doesn’t do:
No .egg support. Eggs were the pre-wheel binary format. pip still handles them; uv doesn’t even try. The format has been obsolete for over a decade.
No pip.conf. uv ignores pip’s configuration files entirely. No parsing, no environment variable lookups, no inheritance from system-wide and per-user locations.
No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install. You can opt in if you want it.
Virtual environments required. pip lets you install into system Python by default. uv inverts this, refusing to touch system Python without explicit flags. This removes a whole category of permission checks and safety code.
Stricter spec enforcement. pip accepts malformed packages that technically violate packaging specs. uv rejects them. Less tolerance means less fallback logic.
Ignoring requires-python upper bounds. When a package declares an upper bound on Python (say, requires-python = ">=3.9,<4"), uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically, since upper bounds are almost always wrong: packages declare <4 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive.
First-index wins by default. When multiple package indexes are configured, pip checks all of them. uv picks from the first index that has the package, stopping there. This prevents dependency confusion attacks and avoids extra network requests.
Each of these is a code path pip has to execute and uv doesn’t.
Some of uv’s speed comes from Rust. But not as much as you’d think. Several key optimizations could be implemented in pip today:
HTTP range requests for metadata. Wheel files are zip archives, and zip archives put their file listing at the end. uv tries PEP 658 metadata first, falls back to HTTP range requests for the zip central directory, then full wheel download, then building from source. Each step is slower and riskier. The design makes the fast path cover 99% of cases. None of this requires Rust.
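For illustration, here’s a simplified sketch of that range-request fallback in Python: two small requests instead of one big download. It assumes the server honors range requests (PyPI’s CDN does), ignores zip64 archives and trailing comments, and over-fetches a little to keep the parsing short; a real implementation, like pip’s lazy-wheel code, is considerably more careful.

    import struct
    import zlib

    import requests

    def remote_wheel_metadata(url: str) -> str:
        """Read .dist-info/METADATA from a remote wheel via range requests."""
        # 1. One request for the tail of the file. Zip archives keep their
        #    table of contents (the "central directory") at the end.
        resp = requests.get(url, headers={"Range": "bytes=-65536"})
        tail = resp.content
        total = int(resp.headers["Content-Range"].rsplit("/", 1)[1])
        tail_start = total - len(tail)

        # 2. The end-of-central-directory record locates the directory.
        eocd = tail.rindex(b"PK\x05\x06")
        cd_size, cd_off = struct.unpack_from("<II", tail, eocd + 12)
        cd = tail[cd_off - tail_start : cd_off - tail_start + cd_size]

        # 3. Walk the directory entries until we find the METADATA member.
        pos = 0
        while pos < len(cd):
            (method,) = struct.unpack_from("<H", cd, pos + 10)
            (csize,) = struct.unpack_from("<I", cd, pos + 20)
            nlen, elen, clen = struct.unpack_from("<HHH", cd, pos + 28)
            (off,) = struct.unpack_from("<I", cd, pos + 42)
            name = cd[pos + 46 : pos + 46 + nlen]
            if name.endswith(b".dist-info/METADATA"):
                # 4. Second request: just that member's bytes (padded,
                #    since the local header's extra field can differ).
                end = off + 30 + nlen + elen + csize + 1024
                blob = requests.get(
                    url, headers={"Range": f"bytes={off}-{end}"}
                ).content
                lnlen, lelen = struct.unpack_from("<HH", blob, 26)
                start = 30 + lnlen + lelen
                data = blob[start : start + csize]
                # Method 8 is raw deflate; method 0 is stored as-is.
                return (zlib.decompress(data, -15) if method == 8 else data).decode()
            pos += 46 + nlen + elen + clen
        raise LookupError("no METADATA in wheel")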
Parallel downloads. pip downloads packages one at a time. uv downloads many at once. Any language can do this.
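A sketch of the idea in Python, since it needs nothing Rust-specific; downloads are I/O-bound, so a plain thread pool overlaps them fine (the URLs are placeholders):

    from concurrent.futures import ThreadPoolExecutor

    import requests

    def fetch(url: str) -> bytes:
        return requests.get(url).content

    urls = ["https://example.invalid/a.whl",  # placeholders for resolver output
            "https://example.invalid/b.whl"]
    with ThreadPoolExecutor(max_workers=8) as pool:
        wheels = list(pool.map(fetch, urls))  # all downloads in flight at once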
Global cache with hardlinks. pip copies packages into each virtual environment. uv keeps one copy globally and uses hardlinks (or copy-on-write on filesystems that support it). Installing the same package into ten venvs takes the same disk space as one. Any language with filesystem access can do this.
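Again, language-agnostic; a sketch of the linking strategy, assuming a flat file cache (uv’s real cache layout is more structured, and the copy-on-write variant needs platform-specific calls the standard library doesn’t expose):

    import os
    import shutil

    def materialize(cached: str, dest: str) -> None:
        """Put a cached file into a venv without duplicating its bytes."""
        try:
            os.link(cached, dest)  # hardlink: instant, zero extra disk
        except OSError:
            # Different filesystem or no hardlink support: plain copy.
            shutil.copy2(cached, dest)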
Python-free resolution. pip needs Python running to do anything, and invokes build backends as subprocesses to get metadata from legacy packages. uv parses TOML and wheel metadata natively, only spawning Python when it hits a setup.py-only package that has no other option.
PubGrub resolver. uv uses the PubGrub algorithm, originally from Dart’s pub package manager. Both pip and PubGrub use backtracking, but PubGrub applies conflict-driven clause learning from SAT solvers: when it hits a dead end, it analyzes why and skips similar dead ends later. This makes it faster on complex dependency graphs and better at explaining failures. pip could adopt PubGrub without rewriting in Rust.
Zero-copy deserialization. uv uses rkyv to deserialize cached data without copying it. The data format is the in-memory format. Libraries like FlatBuffers achieve this in other languages, but rkyv integrates tightly with Rust’s type system.
Thread-level parallelism. Python’s GIL forces parallel work into separate processes, with IPC overhead and data copying. Rust can parallelize across threads natively, sharing memory without serialization boundaries. This matters most for resolution, where the solver explores many version combinations.
No interpreter startup. Every time pip spawns a subprocess, it pays Python’s startup cost. uv is a single static binary with no runtime to initialize.
Compact version representation. uv packs versions into u64 integers where possible, making comparison and hashing fast. Over 90% of versions fit in one u64. This is a micro-optimization that compounds across millions of comparisons.
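A toy sketch of the trick (uv’s real encoding also handles pre-, post-, and dev-release segments): pack a few small release numbers into one integer so that comparing versions becomes a single integer compare.

    def pack(version: str) -> int | None:
        """Pack up to four release segments of <= 16 bits each into one
        64-bit-sized int; return None to signal 'use the slow path'."""
        parts = [int(p) for p in version.split(".")]
        if len(parts) > 4 or any(p > 0xFFFF for p in parts):
            return None
        parts += [0] * (4 - len(parts))
        packed = 0
        for p in parts:
            packed = (packed << 16) | p
        return packed

    assert pack("1.2.3") < pack("1.10.0")  # numeric, unlike string comparison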
These are real advantages. But they’re smaller than the architectural wins from dropping legacy support and exploiting modern standards.
uv is fast because of what it doesn’t do, not because of what language it’s written in. The standards work of PEP 518, 517, 621, and 658 made fast package management possible. Dropping eggs, pip.conf, and permissive parsing made it achievable. Rust makes it a bit faster still.
pip could implement parallel downloads, global caching, and metadata-only resolution tomorrow. It doesn’t, largely because backwards compatibility with fifteen years of edge cases takes precedence. But it means pip will always be slower than a tool that starts fresh with modern assumptions.
Other package managers could learn from this: static metadata, no code execution to discover dependencies, and the ability to resolve everything upfront before downloading. Cargo and npm have operated this way for years. If your ecosystem requires running arbitrary code to find out what a package needs, you’ve already lost.
...
Read the original on nesbitt.io »
Christmas is often regarded as a time for goodwill, but one young UK couple’s act of kindness 50 years ago changed their lives forever.
On 23 December 1975, Rob Parsons and his wife Dianne were preparing for Christmas at their Cardiff home when they heard a knock at the door.
On their doorstep stood a man with a bin bag containing his possessions in his right hand and a frozen chicken in his left.
Rob studied the man’s face and vaguely remembered him as Ronnie Lockwood, someone he would occasionally see at Sunday school as a boy and whom he had been told to be kind to because he was a “bit different”.
“I said ‘Ronnie, what’s with the chicken?’ He said ‘somebody gave it to me for Christmas’. And then I said two words that changed all of our lives.
“And I’m not exactly sure why I said them. I said come in.”
...
Read the original on www.bbc.co.uk »
👋 Hello! If you print this page, you’ll get a nifty calendar that displays all of the year’s dates on a single page. It will automatically fit on a single sheet of paper of any size. For best results, adjust your print settings to landscape orientation and disable the header and footer.
Take in the year all at once. Fold it up and carry it with you. Jot down your notes on it. Plan things out and observe the passage of time. Above all else, be kind to others.
Looking for 2026? Here you go!
...
Read the original on neatnik.net »
People examining documents released by the Department of Justice in the Jeffrey Epstein case discovered that some of the file redaction can be undone with Photoshop techniques, or by simply highlighting text to paste into a word processing file.
Un-redacted text from these documents began circulating through social media on Monday evening. An exhibit in a civil case in the Virgin Islands against Darren K Indyke and Richard D Kahn, two executors of Epstein’s estate, contains redacted allegations explaining how Epstein and his associates had facilitated the sexual abuse of children. The exhibit was the second amended complaint in the state case against Indyke and Kahn.
In section 85, the redacted portion states: “Between September 2015 and June 2019, Indyke signed (FAC) for over $400,000 made payable to young female models and actresses, including a former Russian model who received over $380,000 through monthly payments of $8,333 made over a period of more than three and a half years until the middle of 2019.”
Prosecutors in the Virgin Islands settled their civil sex-trafficking case against Epstein’s estate, Indyke and Kahn in 2022 for $105m, plus one half of the proceeds from the sale of Little St James, the island on which Epstein resided and where many of his crimes occurred. The justice department press release announcing the settlement did not include an admission of liability.
Indyke, an attorney who represented Epstein for decades, has not been criminally indicted by federal authorities. He was hired by the Parlatore Law Group in 2022, before the justice department settled the Epstein case. That firm represents the defense secretary, Pete Hegseth, and previously represented Donald Trump in his defense against charges stemming from the discovery of classified government documents stored at Trump’s Florida estate. Calls and email seeking comment from Indyke and the Parlatore Law Group have not yet been returned.
Trump has repeatedly denied any knowledge of or involvement in Epstein’s criminal activities and any wrongdoing.
Other sections further allege how Epstein’s enterprise concealed crimes.
“Defendants also attempted to conceal their criminal sex trafficking and abuse, conduct by paying large sums of money to participant-witnesses, including by paying for their attorneys’ fees and case costs in litigation related to this conduct,” reads one redacted passage.
“Epstein also threatened harm to victims and helped release damaging stories about them to damage their credibility when they tried to go public with their stories of being trafficked and sexually abused. Epstein also instructed one or more Epstein Enterprise participant-witnesses to destroy evidence relevant to ongoing court proceedings involving Defendants’ criminal sex trafficking and abuse conduct.”
The redacted portions of sections 184 through 192 of the document describe property taxes that companies incorporated by Epstein paid on properties that did not appear on those firms’ balance sheets.
“For instance, Cypress’s Balance Sheet as of December 31, 2018 did not reflect any assets other than cash of $18,824. Further, Cypress reported only $301 in expenses for the year ended December 31, 2018, despite it paying $106,394.60 in Santa Fe property taxes on November 6, 2018,” reads one redacted passage.
“Similarly, in 2017, Cypress reported as its only asset cash in the amount of $29,736 and expenses of $150, despite it paying $55,770.41 and $113,679.56 in Santa Fe property taxes during 2017.”
The Epstein Files Transparency Act signed into law last month permits the Department of Justice “to withhold certain information such as the personal information of victims and materials that would jeopardize an active federal investigation”.
It was unclear how the property-tax material complies with the redaction standard under the law. An inquiry to the Department of Justice has not yet been answered.
...
Read the original on www.theguardian.com »
I’ve been reading Lord of the Rings for two months and I’m just at the end of the first part. It’s not because I’m not enjoying it. It’s one of the most enjoyable reading experiences I can remember.
From the beginning, I’ve read the whole thing aloud. I’ve found reading aloud helpful for staying engaged — limiting myself to mouth-speed rather than eye-speed means I won’t rush, miss important details, and then lose interest, which has always been a problem for me.
At first I was anxious about reading a 1,500-page book this way, because it would take so long. But, as someone pointed out to me, if I’m enjoying it, why would I want to be done with it sooner?
So I tried slowing down even more, and discovered something. I slowed to a pace that felt almost absurd, treating each sentence as though it might be a particularly important one. I gave each one maybe triple the usual time and attention, ignoring the fact that there are hundreds of pages to go.
This leisurely pace made Middle-Earth blossom before my eyes. When I paused after each comma, and let each sentence ring for a small moment after the period, the events of the story reached me with more weight and strength. That extra time gave space for Tolkien’s images and moods to propagate in my mind, which they did automatically.
Some part of me still wanted to rush and get on with it, to make good time, to gloss over the songs and lore to get to Moria and Mount Doom and the other marquee moments of the story. But the more I ignored that impulse, the better the experience got.
By offering the book about triple the usual amount of attentiveness, I was getting about triple the storyness (i.e. meaning, engagement, literary pleasure). Whatever the thing is that I’m seeking when I pick up a novel in the first place, there’s much more of it available at this pace.
This effect reminded me of a paradox around eating I recognized long ago. When you slow down your eating speed, say to half or a third your default speed, you get much more enjoyment out of a smaller amount of food. The extra attention given to each bite allows more of the “good stuff,” whatever that is exactly, to reach you.
What’s paradoxical is that it’s precisely the seeking of that “good stuff” that normally drives me to eat so quickly, and miss most of what I’m seeking. When you try to barrel ahead to access the good stuff quicker, you get less of it in the end. Slow down and much more of it is released.
And it’s released automatically, in both reading and eating. You don’t have to search it out. The good stuff (the meaning in the text, the pleasure in the eating) just rises up to meet you in that extra time you give it. Slowing down, and offering more time to the act of consumption, immediately increases reading comprehension (and eating comprehension).
Both are analogous to slowing down while you vacuum a carpet. If you pass the vacuum head too quickly, you miss half the dirt. Slow down, and you can hear how much more grit is sent skittering up the tube. The suction and bristles are working, but they need more time to do their work fully, to draw up the deeper-lying stuff.
It seems that my default consumption speeds for reading and eating (and maybe everything else) reduce the rewards of those things significantly, undermining the point of doing either.
Part of it is my own impatience. But I also suspect that modern living, with its infinite supply of consumables, tends to push our rate-of-intake dials too high. I’m not going to run out of books, or snacks, or opportunities to learn something. There’s always more, so not every crust of bread or printed page needs to be appreciated fully.
Internally though, the mind is juggling like Lucy and Ethel on the conveyor belt at the chocolate factory. Our receptors for meaning and appreciation, like the vacuum head, need more time to do their full work, to make all the connections they’re designed to make.
It might sound like I’m just offering clichés — less is more, stop and smell the roses, take your time — and I guess I am. But clichés suffer the same issue: they are often profound insights, consumed and passed on too rapidly for their real meaning to register anymore. You really should stop and smell roses, as you know if you’re in the habit of doing that.
At least see what happens when you reduce your consumption speed — of anything, but especially books, information, and food — by a half, or two thirds. Notice that (1) something in you really wants to plow through at the highest viable setting, and (2) how much more of the reward is released when you slow down anyway.
As far as I can tell, almost everything becomes more satisfying when you give it more time and intention, even things like checking the mailbox or writing a shopping list.
Slowing down your rate of consumption will inevitably change what you want to consume. Reading throwaway news articles or AI slop with great care and attention is only going to show you how empty of value it is. Reading dense writing in inky old books, crafted for your mind by great masters, becomes easier without the rushed pace, and the meaning just blooms out of it.
Same with food. Try to savor a cheap, waxy “chocolate” bar, or a bag of store-brand cheese puffs, and you discover a harsh taste that you don’t want to look at too closely. Enjoy a homemade pastry with great attention, and discover there’s even more in it than you realized.
Mass production is good in so many ways, but the faster we tend to consume its fruits, the more we end up seeking things for their glossy, candied surfaces. The more we go for these surface-level rewards, the more the culture focuses on offering only that part — such as TikTok videos, processed food, CGI-forward movies, and public discourse in the form of unexamined talking points.
Who knows how far we’ve drifted from the best modes of consuming the things we value. Once something becomes a norm, it seems like an appropriate standard, no matter how much has been lost. Apparently, reading silently and alone was unusual until as late as the 18th century. Certainly sit-down meals and cooking at home were.
I don’t mean to sound like a scold. Let’s say none of this is morally good or bad. It’s just that in so much of what we do, we could be getting much more of the part of it that we really seek — but it’s only available at slower speeds.
If you’re curious, try consuming things more slowly, so slowly it seems silly to others — say a third your habitual speed — and see what rises up to meet you.
Recently I opened a discussion forum for Raptitude readers who want to give something up for the month of December (alcohol, social media, snacks, etc).
It’s been a real success, and many people want to do something similar in January. If you want to quit something, or just give it up for a month, you’re invited to join.
Follow this link at the end of this post to get an invite.
...
Read the original on www.raptitude.com »
...
Read the original on substack.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.