10 interesting stories served every morning and every evening.
(Photography)
Here’s a photo of a Christmas tree, as my camera’s sensor sees it:
It’s not even black-and-white, it’s gray-and-gray. This is because while the ADC’s output can theoretically go from 0 to 16382, the actual data doesn’t cover that whole range:
The real range of ADC values is ~2110 to ~13,600. Let’s set those values as the black and white points in the image:
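As a rough sketch of that remapping (illustrative only; the black/white points and the array handling are assumptions, not the author's actual code):

```swift
// Illustrative sketch: remap raw ADC values so the measured black level maps to 0
// and the measured white level maps to 1, clamping anything outside that range.
// The default black/white values are just the approximate figures quoted above.
func normalize(_ adc: [Double], black: Double = 2_110, white: Double = 13_600) -> [Double] {
    adc.map { value in
        min(max((value - black) / (white - black), 0), 1)
    }
}
```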
Much better, but it’s still more monochromatic than I remember the tree being. Camera sensors aren’t actually able to see color: They only measure how much light hit each pixel.
In a color camera, the sensor is covered by a grid of alternating color filters:
Let’s color each pixel the same as the filter it’s looking through:
This version is more colorful, but each pixel only has one third of its RGB color. To fix this, I just averaged the values of each pixel with those of its neighbors:
Applying this process to the whole photo gives the lights some color:
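A minimal sketch of that kind of naive demosaicing, assuming an RGGB Bayer layout and a raw image already normalized to 0…1 (an illustration, not the author's code):

```swift
// Illustrative sketch of naive demosaicing: every output pixel gets each colour
// channel by averaging the raw values in its 3x3 neighbourhood that sit behind
// a matching colour filter. Assumes an RGGB Bayer layout and values in 0...1.
func naiveDemosaic(_ raw: [[Double]]) -> [[(r: Double, g: Double, b: Double)]] {
    let h = raw.count
    let w = raw.first?.count ?? 0

    // Which colour filter (0 = R, 1 = G, 2 = B) covers pixel (y, x) in an RGGB layout?
    func bayerColor(_ y: Int, _ x: Int) -> Int {
        switch (y % 2, x % 2) {
        case (0, 0): return 0   // red
        case (1, 1): return 2   // blue
        default:     return 1   // green (appears twice per 2x2 block)
        }
    }

    var out = Array(repeating: Array(repeating: (r: 0.0, g: 0.0, b: 0.0), count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            var sum = [0.0, 0.0, 0.0]
            var count = [0, 0, 0]
            for dy in -1...1 {
                for dx in -1...1 {
                    let ny = y + dy, nx = x + dx
                    guard ny >= 0, ny < h, nx >= 0, nx < w else { continue }
                    let c = bayerColor(ny, nx)
                    sum[c] += raw[ny][nx]
                    count[c] += 1
                }
            }
            out[y][x] = (r: sum[0] / Double(max(count[0], 1)),
                         g: sum[1] / Double(max(count[1], 1)),
                         b: sum[2] / Double(max(count[2], 1)))
        }
    }
    return out
}
```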
However, the image is still very dark. This is because monitors don’t have as much dynamic range as the human eye, or a camera sensor: Even if you are using an OLED, the screen still has some ambient light reflecting off of it and limiting how black it can get.
There’s also another, sneakier factor causing this:
Our perception of brightness is non-linear.
If brightness values are quantized linearly, most of the ADC bins will be wasted on nearly identical shades of white while every other tone is crammed into the bottom of the range. Because this is an inefficient use of memory, most color spaces assign extra bins to darker colors:
As a result, if the linear data is displayed directly, it will appear much darker than it should.
Both problems can be solved by applying a non-linear curve to each color channel to brighten up the dark areas… but this doesn’t quite work out:
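For illustration, the per-channel curve described here is essentially a power-law function; the sketch below uses an arbitrary gamma value, not the author's actual transfer function:

```swift
import Foundation

// Illustrative sketch: brighten the shadows by applying a power-law ("gamma") curve
// to each colour channel of a linear image with values in 0...1.
// The exponent is an example value only, not the curve used in the article.
func applyGamma(_ channel: [Double], gamma: Double = 2.2) -> [Double] {
    channel.map { pow(max($0, 0), 1.0 / gamma) }
}
```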
Some of this green cast is caused by the camera sensor being intrinsically more sensitive to green light, but some of it is my fault: There are twice as many green pixels in the filter matrix. When combined with my rather naive demosaicing, this resulted in the green channel being boosted even higher.
In either case, it can be fixed with proper white balance: Equalize the channels by multiplying each one by a constant.
However, because the image is now non-linear, I have to go back a step to do this. Here’s the dark image from before with all the values temporarily scaled up so I can see the problem:
… here’s that image with the green taken down to match the other channels:
… and after re-applying the curve:
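Putting those steps together, a simplified version of that workflow might look like the sketch below. The per-channel gains are made-up example values; the real ones depend on the sensor and the light:

```swift
import Foundation

// Illustrative sketch of the white-balance step: scale each channel by a constant
// while the data is still linear, then re-apply the brightening curve afterwards.
// The gains below are placeholder values, not measured ones.
struct LinearPixel { var r, g, b: Double }

func whiteBalanceThenCurve(_ pixels: [LinearPixel],
                           gains: (r: Double, g: Double, b: Double) = (1.9, 1.0, 1.6),
                           gamma: Double = 2.2) -> [LinearPixel] {
    // The same brightening curve as before, re-applied after balancing.
    func curve(_ v: Double) -> Double { pow(min(max(v, 0), 1), 1.0 / gamma) }

    return pixels.map { p in
        // Equalise the channels with per-channel constants while the data is still linear...
        let r = p.r * gains.r, g = p.g * gains.g, b = p.b * gains.b
        // ...then apply the non-linear curve.
        return LinearPixel(r: curve(r), g: curve(g), b: curve(b))
    }
}
```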
This is really just the bare minimum: I haven’t done any color calibration, the white balance isn’t perfect, there’s lots of noise that needs to be cleaned up…
Additionally, applying the curve to each color channel accidentally desaturated the highlights. This effect looks rather good — and is what we’ve come to expect from film — but it has de-yellowed the star. It’s possible to separate out the luminance and curve it while preserving color. On its own, this would turn the LED Christmas lights into an oversaturated mess, but combining both methods can produce nice results.
For comparison, here’s the image my camera produced from the same data:
This is far from an “unedited” photo: a huge amount of math has gone into making an image that nicely represents what the subject looks like in person.
There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t already done under the hood. The edited image isn’t any “faker” than the original: they are different renditions of the same data.
In the end, replicating human perception is hard, and it’s made harder when constrained to the limitations of display technology or printed images. There’s nothing wrong with tweaking the image when the automated algorithms make the wrong call.
...
Read the original on maurycyz.com »
...
Read the original on substack.com »
If someone had told me 12 months ago what was going to happen this past year, I wouldn’t have believed them. Skipping swiftly past all the political, economic and social turmoil, I come to the interface changes brought in by macOS Tahoe with Liquid Glass. After three months of strong feedback during beta-testing, I was disappointed when Tahoe was released on 15 September to see how little had been addressed. When 26.1 followed on 3 November it had only regressed, and 26.2 has done nothing. Here I summarise my opinions on where Tahoe’s overhaul has gone wrong.
Almost all the content displayed in windows is best suited to rectangular views. Images, video, webpages and other text crave areas bounded by right angles. Gentle rounding on the corners, as in Sequoia, is fine, but the significantly increased radius enforced in Tahoe is a misfit. This either leads to cropping of contents, or reduction in size of the view and wasted space.
Cropping is misleading, as seen in this enlarged view of a thumbnail image in the Finder’s Gallery view, compared to the larger version shown below. The thumbnail misrepresents what’s in the original.
Among Apple’s claims for this new look is greater consistency. But two windows in the same app, both created using SwiftUI, can’t even share a common radius, as shown below in Providable running in macOS 26.2.
Tahoe has also increased the size of its controls, without using that to improve their clarity. The best way to see that is in my Mallyshag demo app.
This looks good in Sequoia above, but becomes a mess in Tahoe (below) because of its changed control dimensions.
Those three buttons are significantly wider, so now overlap one another and are wider than the text box below. The user sees no benefit to this, though, as the text within the controls is identical.
App icons need to be both distinguishable and readily recalled. The first ensures that we can tell one from another, and relies on all the visual cues we can muster, including colours, form and content. Tahoe enforces a rule that everything in the icon must be fitted inside its uniform square with rounded corners, so restricting cues to colours and contents. As a result, the icons of many bundled and other Apple apps have become harder to distinguish in a crowded Dock. Some, including Apple’s Developer app and the App Store, are indistinguishable, while others have degenerated into vague blotches.
Above are most of the apps bundled in Sequoia, and below are those in Tahoe.
In real life, whiteouts are dangerous because they’re so disorienting. There’s no horizon, no features in the landscape, and no clues to navigation. We see and work best in visual environments that are rich in colour and tonal contrasts. Tahoe has continued a trend for Light Mode to be bleached-out white, and Dark Mode to be a moonless night. Seeing where controls, views and contents start and end is difficult, and leaves them suspended in the whiteout.
In light mode, with default transparency, tool icons and text are clearly distinguished tonally, as are some controls including buttons and checkboxes. However, text entry fields are indistinguishable from the background, and there’s a general lack of demarcation, particularly between controls and the list view below.
Bleeding is a technique used in watercolours to merge layers of colour diffusely, and it’s the best description of some of the results of transparency in Liquid Glass. My examples speak for themselves, and are drawn first from Apple’s own design for System Settings.
Transparency of the Search box at the top of the sidebar on the left renders it incomprehensible when it’s underlaid by scrolled navigational content.
Although the view title Keyboard remains readable, bleed-through of underlying colours is confusing, distracting and aesthetically upsetting.
My next examples show the same window in Providable with a selected list row being scrolled up behind what used to be a window title bar.
With the window in focus, the selection colour overwhelms the traffic light controls and window title, which should read Drop Files. This also draws attention to the limited width necessary to accommodate rectangular content in a window with excessively rounded corners.
Out of focus the selected row is less overwhelming, but traffic lights and title have dissipated in grey blur.
I’m sure that, in the right place and time, transparency effects of Liquid Glass can be visually pleasing. Not only is this the wrong time and place, but those with visual impairment can no longer remove or even reduce these effects, as the Reduce Transparency control in Accessibility settings no longer reduces transparency in any useful way. That was one of the regressions in 26.1 that hasn’t been addressed in 26.2.
In summary, Tahoe’s redesigned interface:
* Results in app icons being more uniform, thus less distinguishable and memorable.
* Fails to distinguish tools, controls and other interface elements using differences in tone, so making them harder to use.
* Makes a mess where transparent layers are superimposed, and won’t reduce transparency when that’s needed to render its interface more accessible.
Maybe this is because I’m getting older, but that gives me the benefit of having experienced Apple’s older interfaces, with their exceptional quality and functionality.
That was little more than a decade ago, in 2014. Not that I want to turn the clock back, but it would be really helpful if I could read clearly what’s on my display once again.
...
Read the original on eclecticlight.co »
This is the story about how I built MacThrottle.
I’ve been very happy with my M2 MacBook Air for the past few years. However, when using an external display, especially a very demanding one like a 4K 120Hz display, I’ve noticed it started struggling more. Since it lacks fans, you can’t hear it struggling, but you can feel it as everything becomes very slow or unresponsive: that’s when thermal throttling kicks in.
I know it’s thermal throttling because I can see in iStat Menus that my CPU usage is 100% while the power usage in watts goes down.
It’s even more obvious with MX Power Gadget: You can see the power usage and frequency of the performance cores dropping while usage stays at 100%:
I’ve also hit thermal throttling with my work MacBook Pro. It’s the 14″ M4 Max variant, which is the worst variant because the thermal envelope of the 14″ is too small for the maximum output of the M4 Max. On my previous 14″ M1 Pro MacBook Pro, I never even heard the fans in 3 years…
That being said, I still love Apple Silicon for the performance and power usage, it’s still a dramatic improvement over the Intel days. 🫶
Anyway, I wanted to know: is there a way to tell whether the Apple Silicon SoC is thermal throttling that isn’t based on heuristics like those in my screenshots?
This was a wilder ride than I expected. It’s possible to know programmatically if the Mac is throttled, because macOS exposes this in various but inconsistent ways.
The approach that Apple recommends is to use ProcessInfo.thermalState from Foundation:
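For reference, a minimal sketch of reading that value (a sketch, not necessarily the original snippet):

```swift
import Foundation

// Foundation's thermal state has four levels: nominal, fair, serious, critical.
let state = ProcessInfo.processInfo.thermalState
switch state {
case .nominal:  print("nominal")
case .fair:     print("fair")
case .serious:  print("serious")
case .critical: print("critical")
@unknown default: print("unknown")
}

// You can also observe changes instead of polling:
NotificationCenter.default.addObserver(
    forName: ProcessInfo.thermalStateDidChangeNotification,
    object: nil,
    queue: .main
) { _ in
    print("Thermal state is now \(ProcessInfo.processInfo.thermalState.rawValue)")
}
```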
Sounds good, right? However, I knew that another tool could provide this information, though it needed root: powermetrics.
Both report the pressure level to be “nominal”, they must be the same…right?
After running a few stress tests (stress-ng --cpu 0 -t 600), I started to see the two values diverge!
For some reason, the granularity is different between ProcessInfo.thermalState and powermetrics. They have a different number of possible states and they don’t line up.
Here is my empirical experience:
I never managed to hit these states, so I don’t know if they match, but they’re technically defined:
In practice, when my Mac starts getting hot, from the powermetrics perspective it goes into moderate, and when it starts throttling, it goes into heavy. The problem is that with ProcessInfo, both are covered by the fair state, so it’s not really useful to know when the Mac is actually throttling. ☹️
I thought maybe this was an iOS vs macOS thing? But Apple references it in the macOS docs as well. Maybe it was more consistent on Intel Macs?
I stumbled upon this article from Dave MacLachlan, a Googler working on Apple stuff, from 2020. I learned that there are other CLI tools to get thermal data, but they don’t seem to work on my Apple Silicon MacBook:
But the most interesting thing I learned is that the data powermetrics shows is actually coming from thermald. And thermald writes the current thermal pressure to the Darwin notification system (notifyd)!
The various levels are defined in OSThermalNotification.h according to the article. Indeed:
The funny thing is that OSThermalNotification.h is barely referenced anywhere, there are only three pages of Google results. It seems to be used in Bazel for example. That post was a big help.
What’s great about this approach is that it doesn’t require root! I can subscribe to the notification system for the com.apple.system.thermalpressurelevel event to get the (good) thermal state!
Here is a snippet to get it in Swift:
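A rough sketch of such a subscription (not necessarily the exact snippet; the comments about the raw level values are assumptions drawn from OSThermalNotification.h):

```swift
import Foundation
import notify   // the Darwin notification API (notify.h)

// Sketch: subscribe to thermald's thermal pressure level without root.
var token: Int32 = 0
let event = "com.apple.system.thermalpressurelevel"

notify_register_dispatch(event, &token, DispatchQueue.main) { t in
    var level: UInt64 = 0
    notify_get_state(t, &level)
    // The numeric levels are defined in OSThermalNotification.h
    // (0 is nominal; larger values mean more thermal pressure).
    print("Thermal pressure level changed: \(level)")
}

RunLoop.main.run()   // keep a command-line test alive to receive callbacks
```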
Now that I had a useful value to work with, it was time to build the app.
Armed with Opus 4.5, I set out to build a little menu bar app where I could see, at a glance, if my Apple Silicon die was trying to save itself from crossing 110°C. I called it MacThrottle.
I built a simple SwiftUI app for the menu bar that shows me the status in a superbly original thermometer icon. The thermometer is filled depending on the thermal state, and its color changes from green to red. I have like 20 menu bar icons and they’re all monochromatic, so the color in the thermometer is very subtle to keep things consistent.
The app is a simple SwiftUI app. Apple provides a scene called MenuBarExtra to render a menu bar control. It was simpler than I expected! To make it a pure menu bar app with no dock icon, you just need to set LSUIElement to true in Info.plist.
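A bare-bones version of that setup might look like this sketch; the names and displayed text are placeholders, not MacThrottle’s actual code:

```swift
import SwiftUI
import AppKit

// Minimal sketch of a menu-bar-only SwiftUI app using the MenuBarExtra scene.
// With LSUIElement set to true in Info.plist, no Dock icon is shown.
@main
struct ThrottleMenuApp: App {
    @State private var thermalState = "nominal"   // placeholder; fed by the notify subscription

    var body: some Scene {
        MenuBarExtra("MacThrottle", systemImage: "thermometer") {
            Text("Thermal state: \(thermalState)")
            Divider()
            Button("Quit") { NSApplication.shared.terminate(nil) }
        }
    }
}
```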
An early version of MacThrottle, just reporting the thermal state
I explained the various approaches to getting the thermal pressure level in the previous section. But it was only later, while building the app, that I discovered thermald publishes the thermal state to notifyd. So at first, I thought I had to use powermetrics to get useful thermal state changes. Since that unfortunately requires root access, the app needed root access too.
To reduce the scope of what runs as root, I did not run the app itself as root. Instead, the app does not work by default, but gives you the option to install a helper. It does this by running an AppleScript with administrator privileges, which prompts the user for access.
The helper is just a bash script run as a launchd daemon:
The bash script writes the thermal state to a file every few seconds and the app reads it every few seconds!
Once I discovered I could use the notification system without elevated privileges, I replaced the helper with code in the app that reads the value from the notification system directly. Much simpler 🎉
I wanted to show the temperature and fan speed (when supported) in a little graph in the menu bar app. This would allow me to correlate the thermal state with increased temperature, for example.
Again, there are multiple APIs to read the temperature. First, I started using an undocumented API from IOKit, but I realised I was getting ~80ºC max, while iStat Menus or MX Power Gadget would show >100ºC.
Stats, the open source alternative to iStat Menus, helped me use the SMC instead and get the correct values. But the SMC is a much more unstable API because each SoC has different keys to access the temperature data:
Though the M3 keys seem to work on my M4 Max work MacBook Pro…
I ended up using SMC first to get the accurate temperature and fall back to IOKit if SMC doesn’t work.
For the graph, I wanted a compact visualization that would show me the thermal history at a glance.
The graph packs three layers of information:
* Colored background segments for each thermal state (green for nominal, yellow for moderate, orange for heavy, red for critical)
* A solid line for CPU temperature with a dynamic Y-axis that adjusts to actual values
* A dashed cyan line for fan speed percentage (on Macs that have fans)
I didn’t want to spend too much time making a super fancy graph system. Since it polls every two seconds, the graph gets very busy after a while. So I decided to keep it down to 10 minutes, since the thermal state history is mostly interesting short-term.
When the system was under load, I noticed that hovering over the graph was not very smooth on my 120Hz display. I found out I can add .drawingGroup to my SwiftUI canvas to use GPU rendering! Indeed, I added it, and it was smooth again.
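In case it helps, the modifier is applied like this (a sketch, with the actual drawing elided):

```swift
import SwiftUI

// Sketch: render a busy Canvas as a single Metal-backed layer so hover updates stay smooth.
struct ThermalGraph: View {
    var body: some View {
        Canvas { context, size in
            // ... draw the coloured state segments, temperature line and fan line here ...
        }
        .drawingGroup()   // composite the whole canvas on the GPU
    }
}
```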
Graph with pressure state, temperature and fans. Hovering shows a tooltip with past values.
I also added notifications so I get alerted when the state changes, in case I miss the menu bar icon. It can alert on specific state transitions, and optionally on recovery. This is useful to know when it’s time to kill a VS Code instance or a Docker container!
To be fair, it can get a bit noisy on a struggling MacBook Air…
It’s true that I usually already notice when the Mac is getting slow, but sometimes the Mac gets slow when it’s swapping heavily. At least now I know when it’s just too hot.
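Local notifications like these can be posted with the UserNotifications framework; a rough sketch follows (the wording and trigger conditions are placeholders, not the app's actual logic):

```swift
import UserNotifications

// Sketch: ask for permission, then post a local notification when the thermal state changes.
func notifyThermalChange(to level: String) {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .sound]) { granted, _ in
        guard granted else { return }
        let content = UNMutableNotificationContent()
        content.title = "Thermal state changed"
        content.body = "Your Mac is now in the \(level) state."
        let request = UNNotificationRequest(identifier: UUID().uuidString,
                                            content: content,
                                            trigger: nil)   // deliver immediately
        center.add(request)
    }
}
```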
Of course, I want the app to start automatically now, since it works so well!
I expected that I would need to write a .plist again, but no: it’s extremely easy to prompt the user to add a “login item”, as macOS calls it, using SMAppService.
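The registration boils down to something like this (a sketch; error handling is up to the app):

```swift
import ServiceManagement

// Sketch: register the running app itself as a login item (macOS 13+).
do {
    try SMAppService.mainApp.register()
} catch {
    print("Could not register login item: \(error)")
}
```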
Since I don’t have an Apple Developer account, I can’t notarize the app, so installing it from the releases is going to require a few extra clicks in Privacy and Security.
And for Macs that disallow it entirely, building from source with Xcode is the only way. I added instructions in the README.
Hope this is useful to someone else!
...
Read the original on stanislas.blog »
A tada list, or to-done list, is where you write out what you accomplished each day. It’s supposed to make you focus on things you’ve completed instead of focusing on how much you still need to do. Here is what my tada lists look like:
I have a page for every month. Every day, I write out what I did. At the end of the month, I make a drawing in the header to show what I did that month. Here are a few of the drawings:
In January, I started a Substack, made paintings for friends, and wrote up two Substack posts on security.
In February, I took a CSS course and created a component library for myself.
In March, I read a few books, worked on a writing app, took a trip to New York, and drafted several posts on linear algebra for this Substack.
(If you’re wondering where these posts are, there’s a lag time between draft and publish, where I send the posts out for technical review and do a couple of rounds of rewrites).
I don’t really spend much time celebrating my accomplishments. Once I accomplish something, I have a small hit of, “Yay, I did it,” before moving on to, “So, what else am I going to do?” For example, when I finished my book (a three-year-long effort), I had a couple of weeks of, “Yay, I wrote a book,” before this became part of my normal life, and it turned into, “Yes, I wrote a book, but what else have I done since then?”
I thought the tada list would help reinforce “I did something!” but it also turned into “I was able to do this thing, because I did this other thing earlier.” I’ll explain with an example.
For years I have been wanting to create a set of cards with paintings of Minnesota, for family and friends. The problem: I didn’t have many paintings of Minnesota, and didn’t like the ones I had.
So I spent 2024 learning a lot about watercolor pigments, and color mixing hundreds of greens, to figure out which greens I wanted to use in my landscapes:
Then I spent the early part of 2025 doing a bunch of value studies, because my watercolors always looked faded:
(Value studies are where you try to make your paintings look good using black and white only, so you’re forced to work using value instead of color. It’s an old exercise to improve your art).
Then in the summer, I did about 50 plein air paintings of Minnesota landscapes:
(Plein air = painting on location. Please admire the wide variety of greens I mixed for these paintings).
Look at how much better these are:
Out of those 50, I picked my top four and had cards made. Thanks to the “tada” list, it wasn’t just “I made some cards”, it was:
* Remember how I spent a year learning about pigments and mixing all those greens?
* And did all those value studies?
* And spent most of my summer painting outside?
The payoff for all that work was these lovely cards.
For a while now, I have wanted a Mustache-like templating language, but with static typing. Last year, I created a parser combinator library called `tarsec` for TypeScript, and this year, I used it to write a Mustache-like template language with static typing called `typestache` for myself.
I’ve since used both `tarsec` and `typestache` in personal projects, like this one that adds file-based routing to express and autogenerates a client for the frontend.
Part of the reason I like learning stuff is it lets me do things I couldn’t do before. I think acknowledging that you CAN do something new is an important part of the learning process, but I usually skip it. The tada list helps.
Maybe the most obvious con: a tada list forces you to have an accomplishment each day so you can write it down, and this added stress to my day.
Also, a year is a long time to keep it going, and I ran out of steam by the end. You can see that my handwriting gets worse as time goes on
and for the last couple of months, I stopped doing the pictures.
It’s fun to see things on the list that I had forgotten about. For example, I started this massive watercolor painting of the Holiday Inn in Pacifica in February, and had completely forgotten about it.
Will I do this next year? Maybe. I need to weigh the accomplishment part against the work it takes to keep it going. It’s neat to have this artifact to look back on either way.
A few more of the several color studies I did:
...
Read the original on www.ducktyped.org »
In March 2026, I will be Mockito maintainer for 10 years (nearly a third of my whole life). Looking ahead, I decided that a decade milestone is a good moment to pass on maintainership to other folks. In the coming months until March, I will spend time ensuring a smooth transition in maintainership.
In this issue I list several considerations why I made the decision. Communication and discussion of plans for future maintainership will be somewhere else, most likely in a separate GitHub issue. Stay tuned for that.
As you might know, Mockito 5 shipped a breaking change where its main artifact is now an agent. That’s because, starting with JVM 22, the previous so-called “dynamic attachment of agents” is put behind a flag. This change makes sense from a security point of view and I support it.
However, the way this was put forward to Mockito maintainers was energy-draining, to say the least. Mockito is probably the biggest user of such an agent and is often looked at for inspiration by other projects. As such, Mockito often pioneers support for new JVM features, built on the solid foundation of ByteBuddy. The module system was one such feature: it took months of hard work by Rafael to figure out, including providing feedback to the JVM maintainers.
Unfortunately such a collaborative way of working was not the case when discussing agents. To me, it felt like the feature was presented as a done deal because of security. While dynamic attachment is problematic in many ways, no alternative solutions were proposed. That’s okay, as Mockito pioneers on these solutions, yet in this case I felt we were left alone.
My personal take is that folks involved with the change severely underestimated the societal impact that it had. The fact that proper build support is non-existent to this day shows that agents are not a priority. That’s okay if it isn’t a priority, but when it was communicated with Mockito I perceived it as “Mockito is holding the JVM ecosystem back by using dynamic attachment, please switch immediately and figure it out on your own”.
Here, the fact that I (and others) are volunteers doing their best for the project is important for understanding the societal impact. When you put individuals who do this work in their own time, out of goodwill, under pressure, things crumble. It’s commonly joked about, with XKCD comics about how the whole open source world relies on a couple of individuals. That couldn’t be more true in this situation, where the collaborative system collapses when too much pressure is put on individual folks.
This saga planted the seed to reconsider my position as maintainer.
It’s undeniable that Kotlin as a language has grown in popularity in recent years. While Mockito maintains several flavors for JVM languages, these packages typically include sugar that makes integration nicer. In all cases, mockito-core remains the place where functionality is implemented.
Unfortunately, this model doesn’t nicely apply to Kotlin. Where almost all JVM languages work similarly under the hood, Kotlin often does things differently. This means that in several places in mockito-core, there are separate flows dedicated to Kotlin. Most often that’s a direct result of Kotlin doing (in my opinion) shenanigans on the JVM that the JVM never intended to support, yet was able to.
Even within Kotlin itself, features don’t work consistently; suspend functions are the most well-known example. As a result, Mockito code becomes more spaghetti-like, its API is sometimes fully duplicated just to support a core Kotlin language feature, and the whole becomes less maintainable.
While I fully understand the reasons that developers enjoy the feature richness of Kotlin as a programming language, its underlying implementation has significant downsides for projects like Mockito. Quite frankly, it’s not fun to deal with.
To me, a future where Kotlin becomes more predominant is not a future that makes me hopeful I can keep on dedicating energy to Mockito.
I have always been a fan of open source work and have contributed to hundreds of projects in all these years. Mockito is my most important project, but I have also consistently worked on others. In recent months, I have rediscovered the joy of programming by working on Servo. It’s a web engine written in Rust.
When choosing how to spend my two hours of evening time in a given week, I have rarely preferred Mockito over the last year. In the past, Mockito was my go-to and I enjoyed it a lot. Nowadays, Servo and related projects provide significantly more enjoyment.
Justifying why I needed to work on Mockito becomes difficult when (because of the above reasons) it feels like a chore. Volunteering work shouldn’t feel like a chore, at least not for a long time.
As you have read, these three factors combined led me to the decision. The first point explains why I started to doubt my position, the second why I am not hopeful for things to change for the better, and the third how I found enjoyment in a different way.
While these points had impact on me as maintainer, my hypothesis is that it doesn’t apply to others in the same way. I know others are eager to work on Kotlin support for example. That’s why I concluded that a decade is enough time to have helped Mockito forward. Now it’s time for somebody else to take over, as I believe that’s in the best interest of Mockito as a project. Because ultimately that’s why I chose to become maintainer in the first place: I believed that with my work, I could improve Mockito for millions of software engineers.
For those wondering: yes I wholeheartedly advise everyone to take on a volunteering task such as maintaining an open source project. It was an honour and privilege to do so and I thank those that I enjoyed working with.
...
Read the original on github.com »
Learn computer graphics from scratch and for free.
The usual beloved lessons.
A blog, some private courses, and an upcoming book project!
We’ve been missing a space where we can talk about topics related to 3D programming—and also broader themes like AI and education that connect to the work we do at Scratchapixel.
This is a new project we’ve started. For now, we’re focusing on offering a course specifically targeted at learning the Vulkan API. Interested? Read more…
Take Scratchapixel to the Beach
We’re working on a book so you can keep a physical reference for computer graphics—very useful if you’re stuck on a desert island with no internet. Read more…
These lessons are structured to introduce 3D rendering concepts in a beginner-friendly order. Unlike most resources, we start with hands-on results before diving into theory.
3D Computer Graphics Primer: Ray-Tracing as an Example
What Do I Need to Get Started?
This section is dedicated to explaining the mathematical theories and tools used in creating images and simulations with a computer. It’s not intended as a starting point, but rather as a reference to be consulted when these topics are mentioned in lessons from other sections.
A collection of lessons on specific topics that don’t necessarily fit into any broad category but are nonetheless interesting and cool.
Saving and reading images to and from disk, image file formats, color spaces, color management, and basic image processing.
Simulating the Colors of the Sky
Various techniques useful for developing 3D tools and interacting with 3D content in general.
...
Read the original on www.scratchapixel.com »
On Wednesday 31 May, it was reported that Alex Mahon, CEO of Channel 4, could receive record annual pay of £1.4m. This article was originally published on 26 April 2021 and asks: as executive pay continues to rise, does a company need a CEO at all?
Over the next two weeks, the boards of BAE Systems, AstraZeneca, Glencore, Flutter Entertainment and the London Stock Exchange all face the possibility of shareholder revolts over executive pay at their forthcoming annual general meetings (AGMs). As the AGM season begins, there is a particular focus on pay.
Executive pay is often the most contentious item at an AGM, but this year is clearly exceptional. The people running companies that have been severely impacted by Covid-19 can’t be blamed for the devastation of their revenues by the pandemic, but they also can’t take credit for the government stimulus that has kept them afloat. Last week, for example, nearly 40 per cent of shareholders in the estate agents Foxtons voted against its chief executive officer, Nicholas Budden, receiving a bonus of just under £1m; Foxtons has received about £7m in direct government assistance and is benefiting from the government’s continued inflation of the housing market. The person who has done most to ensure Foxtons’ ongoing good fortune is not Nicholas Budden but Rishi Sunak.
Under the Enterprise and Regulatory Reform Act, executive pay is voted on at least every three years, and this process forces shareholders and the public to confront how much the people at the top take home. Tim Steiner, the highest-paid CEO in the FTSE 100, was paid £58.7m in 2019 for running Ocado, which is 2,605 times the median income of his employees for that year, while the average FTSE100 CEO makes more than £15,000 a day.
As the High Pay Centre’s annual assessment of CEO pay points out, a top-heavy wage bill extends beyond the CEO, and could be unsustainable for any company this year. “When one considers high earners beyond the CEO,” says the report, “there is actually quite significant potential for companies to safeguard jobs and incomes by asking higher-paid staff to make sacrifices.”
In the longer term, as companies commit to greater automation of many roles, it’s pertinent to ask whether a company needs a CEO at all.
A few weeks ago Christine Carrillo, an American tech CEO, raised this question herself when she tweeted a spectacularly tone-deaf appreciation of her executive assistant, whose work allows Carrillo to “write [and] surf every day” as well as “cook dinner and read every night”. In Carrillo’s unusually frank description of the work her EA does — most of her emails, most of the work on fundraising, playbooks, operations, recruitment, research, updating investors, invoicing “and so much more” — she guessed that this unnamed worker “saves me 60% of time”.
Predictably, a horde arrived to point out that if someone else is doing 60 per cent of Carrillo’s job, they should be paid 50 per cent more than her. But as Carrillo — with a frankly breathtaking lack of self-awareness — informed another commenter, her EA is based in the Philippines. The main (and often the only) reason to outsource a role is to pay less for it.
[See also: The scourge of greedflation]
If most of a CEO’s job can be outsourced, this suggests it could also be automated. But while companies are racing to automate entry- and mid-level roles, senior executives and decision makers show much less interest in automating themselves.
There’s a good argument for automating from the top rather than from the bottom. As we know from the annotated copy of Thinking, Fast and Slow that sits (I assume) on every CEO’s Isamu Noguchi nightstand, human decision-making is the product of irrational biases and assumptions. This is one of the reasons strategy is so difficult, and roles that involve strategic decision-making are so well paid. But the difficulty of making genuinely rational strategic decisions, and the cost of the people who do so, are also good reasons to hand this work over to software.
Automating jobs can be risky, especially in public-facing roles. After Microsoft sacked a large team of journalists in 2020 in order to replace them with AI, it almost immediately had to contend with the PR disaster of the software’s failure to distinguish between two women of colour. Amazon had to abandon its AI recruitment tool after it learned to discriminate against women. And when GPT-3, one of the most advanced AI language models, was used as a medical chatbot in 2020, it responded to a (simulated) patient presenting with suicidal ideation by telling them to kill themselves.
What links these examples is that they were all attempts to automate the kind of work that happens without being scrutinised by lots of other people in a company. Top-level strategic decisions are different. They are usually debated before they’re put into practice — unless, and this is just another reason to automate them, employees feel they can’t speak up for fear of incurring the CEO’s displeasure.
Where automated management — or “decision intelligence”, as Google and IBM call it — has been deployed, it’s produced impressive results. Hong Kong’s mass transit system put software in charge of scheduling its maintenance in 2004, and enjoys a reputation as one of the world’s most punctual and best-run metros.
Clearly, chief execs didn’t get where they are today by volunteering to clear out their corner offices and hand over their caviar spittoons to robots. But management is a very large variable cost that only seems to increase — Persimmon’s bonus scheme paid out half a billion pounds to 150 execs in a single year — while technology moves in the other direction, becoming cheaper and more reliable over time.
It is often asked whether CEO pay is fair or ethical. But company owners and investors should be asking if their top management could be done well by a machine — and if so, why is it so expensive?
[See also: The milkman on a mission]
...
Read the original on www.newstatesman.com »
Kapwing’s new research shows that 21-33% of YouTube’s feed may consist of AI slop or brainrot videos. But which countries and channels are achieving the greatest reach — and how much money might they make? We analyzed social data to find out.
As the debate over the creative and ethical value of using AI to generate video rages on, users are getting interesting results out of the machine, and artist-led AI content is gaining respect in some areas. Top film schools now offer courses on the use and ethics of AI in film production, and the world’s best-known brands are utilizing AI in their creative process — albeit with mixed results.
Sadly, others are gaming the novelty of AI’s prompt-and-go content, using these engines to churn out vast quantities of AI “slop” — the “spam” of the video-first age.
Wiktionary defines a slopper as “Someone who is overreliant on generative AI tools such as ChatGPT; a producer of AI slop.” Along with the proliferation of “brainrot” videos online, sloppers are making it tough for principled and talented creators to get their videos seen.
The main point of AI slop and brainrot videos is to grab your attention, and this type of content seems harder and harder to avoid. But exactly how prevalent is it in the grand scheme of things?
Kapwing analyzed the view and subscriber counts of trending AI slop and brainrot YouTube channels to find out which ones are competing most fiercely with handmade content around the world and how much revenue the leading sloppers are making.
We identified the top 100 trending YouTube channels in every country and noted the AI slop channels. Next, we used socialblade.com to retrieve the number of views, subscribers, and estimated yearly revenue for these channels and aggregated these figures for each country to deduce their popularity. We also created a new YouTube account to record the number of AI slop and brainrot videos among the first 500 YouTube Shorts we cycled through to get an idea of the new-user experience.
* Spain’s trending AI slop channels have a combined 20.22 million subscribers — the most of any country.
* In South Korea, the trending AI slop channels have amassed 8.45 billion views.
* The AI slop channel with the most views is India’s Bandar Apna Dost (2.07 billion views). The channel has estimated annual earnings of $4,251,500.
* U.S.-based slop channel Cuentos Facinantes [sic] has the most subscribers of any slop channel globally (5.95 million).
* Brainrot videos account for around 33% of the first 500 YouTube shorts on a new user’s feed.
First, we analyzed the 100 top trending video channels in every country to see how prevalent AI slop has become locally.
We found that Spain has 20.22 million AI slop channel subscribers among its trending channels. This is despite Spain having fewer AI slop channels (eight) among its top 100 channels than countries including Pakistan (20), Egypt (14), South Korea (11) and the U.S. (nine). The U.S. has the third-most slop subscribers (14.47 million) — 28.4% fewer than Spain but 13.18% more than fourth-placed Brazil (12.56 million).
Spain’s AI slop subscriber base is boosted significantly by one channel, Imperio de jesus, which had 5.87 million subscribers at the time of analysis, making it the world’s second biggest AI slop channel (see A Spanish-Language U.S. AI Slop Channel is the World’s Most-Subscribed below).
Promising to strengthen “faith in Jesus through fun interactive quizzes,” the channel’s videos put the Son of God in a range of either/or scenarios where he must give the correct answer to get the better of Satan, the Grinch and others. Two other Spanish channels with over 3.5 million subscribers each focus on comedy/brainrot shorts.
While Spain’s eight trending AI slop channels may have the most subscribers, South Korea’s 11 trenders have the most views: some 8.45 billion in total. This is nearly 1.6 times as many as second-placed Pakistan (5.34B), 2.5 times as many as the third-placed U.S. (3.39B) and 3.4 times as many as Spain (2.52B).
South Korean AI slop channel Three Minutes Wisdom alone accounts for nearly a quarter of the country’s massive view count, with 2.02 billion views. Three Minutes Wisdom has the second-highest view count of any trending slop channel globally, and we estimate the channel’s annual ad income to be around US$4,036,500.
The channel’s 140 videos typically feature photorealistic(ish) footage of wild animals being defeated by cute pets, and the URL in the bio appears to be an affiliate link to Coupang, South Korea’s largest online retailer.
Next, we identified the specific channels with the most subscribers and views globally. U.S.-based Cuentos Facinantes [sic] has 5.95 million subscribers, making it the trending AI slop channel with the biggest following. This is only 1.4% more than Imperio de jesus (5.87M) but over 50% more than the eighth-, ninth- and tenth-most subscribed slop channels.
Cuentos Facinantes [sic] (Fascinating Tales) has attracted some 1.28 billion views, serving up low-quality Dragon Ball-themed videos. The channel was established in 2020, but the earliest video currently hosted is from as recently as Jan. 8, 2025.
Five of the other ten trending AI slop channels with the most views are based in South Korea, with others in Egypt, Brazil and Pakistan. But the channel with the most views of all is in India. Bandar Apna Dost features over 500 videos, mainly “featuring a realistic monkey in hilarious, dramatic, and heart-touching human-style situations,” many of which are variations on identical set-ups. The channel also has around 100,000 followers on Instagram; on Facebook, the videos are attributed to a ‘digital creator’ named Surajit Karmakar.
If they’re monetizing their views, channels like Bandar Apna Dost may be making millions of dollars per year. But YouTube faces a dilemma over AI content.
On the one hand, YouTube CEO Neal Mohan cites generative AI as the biggest game-changer for YouTube since the original “revelation” that ordinary folk wanted to watch each other’s videos, saying that generative AI can do for video what the synthesizer did for music.
On the other hand, the company worries that its advertisers will feel devalued by having their ads attached to slop.
The AI slop channels with the highest potential earnings mostly line up with the top ten for views. This is because Social Blade estimates channel income based on annual views, and most of these channels’ videos have been published over the last few months. Using an average rate of revenue per 1,000 views, Bandar Apna Dost has an estimated annual revenue of $4.25 million.
“The genius is going to lie whether you did it in a way that was profoundly original or creative,” Mohan told Wired. “Just because the content is 75 percent AI generated doesn’t make it any better or worse than a video that’s 5 percent AI generated. What’s important is that it was done by a human being.”
Whether those flooding the platform with auto-generated content to make a buck care about being known as creative geniuses is another matter.
Finally, we simulated the experience of an untainted YouTube shorts algorithm by establishing a new YouTube account and noting the occurrence of AI slop or brainrot videos among the first 500 videos in the feed. While we were spared either of these for the first 16 videos in the feed, in total, 104 (21%) of the first 500 videos were AI-generated, and 165 (33%) of those 500 videos were brainrot.
Whether this prevalence of slop and brainrot on our test feed represents the engineering of YouTube’s algorithm or the sheer proliferation of such videos that are being uploaded is a mystery that only Google can answer. But the Guardian’s analysis of YouTube’s figures for July revealed that nearly “one in 10 of the fastest growing YouTube channels globally are showing AI-generated content only.”
And brainrot, like AI slop, is a mixed blessing for YouTube as a company: it may lack the soul or professionalism with which YouTube’s advertisers wish to be associated, but brainrot is moreish by design.
Brainrot’s natural home is the feed, whether viewers are compelled to keep watching to “numb” themselves from the trials of the world around them, or to stay up to date with the potentially infinite “lore” of emergent brainrot subgenres, which incorporate recurring characters and themes.
The term “AI slop” has been variously pinned to “unreviewed” content, to AI-generated media that may have been reviewed but with minimal quality standards (like Coke’s Christmas ads), and to all AI-generated content. As Rob Horning points out, the idea that only some AI media is slop propagates the idea that the rest is legitimate and the technology’s proliferation is inevitable.
Part of the threat of AI slop and some forms of brainrot is in how they have been normalized and may come across as harmless fun. But slop and brainrot prey on the laziest areas of our mental faculties. Researchers have shown how the “illusory truth effect” makes people more likely to believe in claims or imagery the more often they encounter it. AI tools make it easy for bad-faith actors to construct a fake enemy or situation that supports their underlying political beliefs or goals. Seeing is believing, studies have shown, even when the viewer has been explicitly told that a video is fake.
Meanwhile, “information of any kind, in enough quantities, becomes noise,” writes researcher and artist Eryk Salvaggio. The prevalence of AI slop is “a symptom of information exhaustion, and an increased human dependency on algorithmic filters to sort the world on our behalf.” And, as Doug Shapiro notes, as this noise drowns out the signal on the web, including social networks, the value of trust will rise — and so will corporate and political efforts to fabricate and manipulate trust.
And this is why, rather than attending film school to study AI techniques, it may be more valuable for creators and consumers alike — especially those still in school — to double down on Media Studies.
We manually researched the top 100 trending YouTube channels in every country (on playboard.co) to isolate the AI slop channels.
We then used socialblade.com to retrieve the number of views, subscribers and estimated yearly revenue (using the midpoint values) for these channels.
We aggregated these figures for each country’s AI slop channels to get an idea of their popularity.
In addition, we created a new YouTube account and recorded the number of AI slop and brainrot videos among the first 500 YouTube Shorts we cycled through to get an idea of the new-user experience.
Data is correct as of October 2025.
...
Read the original on www.kapwing.com »
Software-defined radio (SDR): As a concept, it refers to using software to perform signal processing tasks that were traditionally performed by hardware, specific to radio/RF applications. This software can be run on a general-purpose computer (CPU), FPGA, or even GPU, and it can be used for real-time applications or offline processing of recorded signals. Analogous terms include “software radio” and “RF digital signal processing”.
As a thing (e.g., “an SDR”) it typically refers to a device that you can plug an antenna into and receive RF signals, with the digitized RF samples being sent to a computer for processing or recording (e.g., over USB, Ethernet, PCI). Many SDRs also have transmit capabilities, allowing the computer to send samples to the SDR which then transmits the signal at a specified RF frequency. Some embedded-style SDRs include an onboard computer.
Digital signal processing (DSP): The digital processing of signals; in our case, RF signals.
This textbook acts as a hands-on introduction to the areas of DSP, SDR, and wireless communications. It is designed for someone who is:
* Interested in using SDRs to do cool stuff
* Relatively new to DSP, wireless communications, and SDR
* Better at understanding equations after learning the concepts
* Looking for concise explanations, not a 1,000-page textbook
An example is a Computer Science student interested in a job involving wireless communications after graduation, although it can be used by anyone itching to learn about SDR who has programming experience. As such, it covers the necessary theory to understand DSP techniques without the intense math that is usually included in DSP courses. Instead of burying ourselves in equations, an abundance of images and animations are used to help convey the concepts, such as the Fourier series complex plane animation below. I believe that equations are best understood after learning the concepts through visuals and practical exercises. The heavy use of animations is why PySDR will never have a hard copy version being sold on Amazon.
This textbook is meant to introduce concepts quickly and smoothly, enabling the reader to perform DSP and use SDRs intelligently. It’s not meant to be a reference textbook for all DSP/SDR topics; there are plenty of great textbooks already out there, such as Analog Devices’ SDR textbook and dspguide.com. You can always use Google to recall trig identities or the Shannon limit. Think of this textbook like a gateway into the world of DSP and SDR: it’s lighter and less of a time and monetary commitment, when compared to more traditional courses and textbooks.
To cover foundational DSP theory, an entire semester of “Signals and Systems”, a typical course within electrical engineering, is condensed into a few chapters. Once the DSP fundamentals are covered, we launch into SDRs, although DSP and wireless communications concepts continue to come up throughout the textbook.
Code examples are provided in Python. They utilize NumPy, which is Python’s standard library for arrays and high-level math. The examples also rely upon Matplotlib, which is a Python plotting library that provides an easy way to visualize signals, arrays, and complex numbers. Note that while Python is “slower” than C++ in general, most math functions within Python/NumPy are implemented in C/C++ and heavily optimized. Likewise, the SDR API we use is simply a set of Python bindings for C/C++ functions/classes. Those who have little Python experience yet a solid foundation in MATLAB, Ruby, or Perl will likely be fine after familiarizing themselves with Python’s syntax.
...
Read the original on pysdr.org »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.