10 interesting stories served every morning and every evening.




1 913 shares, 126 trendiness

What an unprocessed photo looks like

(Photography)

Here's a photo of a Christmas tree, as my camera's sensor sees it:

It's not even black-and-white, it's gray-and-gray. This is because while the ADC's output can theoretically go from 0 to 16382, the actual data doesn't cover that whole range:

The real range of ADC values is ~2110 to ~13600. Let's set those values as the white and black points of the image:
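
That remapping is a plain linear stretch. A minimal NumPy sketch of the idea (the `raw` array and the exact levels here are stand-ins, not the author's actual data):

```python
import numpy as np

# Stand-in for the camera's raw ADC counts (real data would come from the raw file).
raw = np.random.randint(2110, 13600, size=(4, 6)).astype(np.float64)

black, white = 2110.0, 13600.0          # measured black and white points
img = (raw - black) / (white - black)   # stretch to the 0.0-1.0 range
img = np.clip(img, 0.0, 1.0)            # clamp stray values outside it
```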

Much better, but it's still more monochromatic than I remember the tree being. Camera sensors aren't actually able to see color: they only measure how much light hit each pixel.

In a color camera, the sensor is covered by a grid of alternating color filters:

Let's color each pixel the same as the filter it's looking through:

This version is more colorful, but each pixel only has one third of its RGB color. To fix this, I just averaged the values of each pixel with its neighbors:
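
A sketch of that naive neighbor-averaging demosaic in NumPy. The RGGB Bayer layout and the function names are my assumptions for illustration (the article only says green sites are twice as common), and a real converter would do something far more careful:

```python
import numpy as np

def box3(a):
    """3x3 box sum with edge padding, pure NumPy."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[y:y + h, x:x + w] for y in range(3) for x in range(3))

def naive_demosaic(raw):
    """Fill each pixel's missing channels by averaging the 3x3 neighbors
    that sit under the right filter color (assumed RGGB layout)."""
    h, w = raw.shape
    masks = {c: np.zeros((h, w), bool) for c in "RGB"}
    masks["R"][0::2, 0::2] = True
    masks["G"][0::2, 1::2] = True
    masks["G"][1::2, 0::2] = True
    masks["B"][1::2, 1::2] = True

    out = np.zeros((h, w, 3))
    for i, c in enumerate("RGB"):
        vals = np.where(masks[c], raw, 0.0)
        counts = box3(masks[c].astype(float))        # same-color cells nearby
        out[..., i] = box3(vals) / np.maximum(counts, 1.0)
    return out
```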

Applying this process to the whole photo gives the lights some color:

However, the image is still very dark. This is because monitors don't have as much dynamic range as the human eye, or a camera sensor: even if you are using an OLED, the screen still has some ambient light reflecting off of it and limiting how black it can get.

There's also another, sneakier factor causing this:

Our perception of brightness is non-linear.

If brightness values are quantized linearly, most of the ADC bins will be wasted on nearly identical shades of white while every other tone is crammed into the bottom. Because this is an inefficient use of memory, most color spaces assign extra bins to darker colors:

As a result, if the linear data is displayed directly, it will appear much darker than it should.

Both problems can be solved by applying a non-linear curve to each color channel to brighten up the dark areas… but this doesn't quite work out:
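
The curve itself can be as simple as a gamma function; a tiny sketch, with gamma 2.2 as an illustrative stand-in for a proper transfer function like sRGB's:

```python
import numpy as np

img = np.linspace(0.0, 1.0, 5)   # stand-in for linear pixel values
bright = img ** (1 / 2.2)        # lift the shadows; 0 and 1 stay fixed
print(bright)                    # ≈ [0, 0.53, 0.73, 0.88, 1]
```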

Some of this green cast is caused by the camera sensor being intrinsically more sensitive to green light, but some of it is my fault: there are twice as many green pixels in the filter matrix. When combined with my rather naive demosaicing, this resulted in the green channel being boosted even higher.

In either case, it can be fixed with proper white balance: equalize the channels by multiplying each one by a constant.
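
A sketch of that correction, applied to the linear data; the gain values are made up for illustration (in practice they come from a neutral reference patch or the camera's metadata):

```python
import numpy as np

rgb_linear = np.random.rand(4, 6, 3)    # stand-in for the demosaiced image
gains = np.array([1.9, 1.0, 1.6])       # R, G, B multipliers (illustrative)
balanced = np.clip(rgb_linear * gains, 0.0, 1.0)
```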

However, because the image is now non-linear, I have to go back a step to do this. Here's the dark image from before with all the values temporarily scaled up so I can see the problem:

… here's that image with the green taken down to match the other channels:

… and after re-applying the curve:

This is really just the bare minimum: I haven't done any color calibration, the white balance isn't perfect, there's lots of noise that needs to be cleaned up…

Additionally, applying the curve to each color channel accidentally desaturated the highlights. This effect looks rather good — and is what we've come to expect from film — but it has de-yellowed the star. It's possible to separate the luminance and curve it while preserving color. On its own, this would make the LED Christmas lights into an oversaturated mess, but combining both methods can produce nice results.

For comparison, here's the image my camera produced from the same data:

Far from being an "unedited" photo: there's a huge amount of math that's gone into making an image that nicely represents what the subject looks like in person.

There's nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn't already done under the hood. The edited image isn't "faker" than the original: they are different renditions of the same data.

In the end, replicating human perception is hard, and it's made harder when constrained to the limitations of display technology or printed images. There's nothing wrong with tweaking the image when the automated algorithms make the wrong call.

...

Read the original on maurycyz.com »

2 734 shares, 37 trendiness

Life in a Secret Chinese Nuclear City That Was Never on the Map


...

Read the original on substack.com »

3 451 shares, 28 trendiness

Look back in disbelief

If someone had told me 12 months ago what was going to happen this past year, I wouldn't have believed them. Skipping swiftly past all the political, economic and social turmoil, I come to the interface changes introduced in macOS Tahoe with Liquid Glass. After three months of strong feedback during beta-testing, I was disappointed when Tahoe was released on 15 September to see how little had been addressed. When 26.1 followed on 3 November it had only regressed, and 26.2 has done nothing. Here I summarise my opinions on where Tahoe's overhaul has gone wrong.

Almost all the content displayed in windows is best suited to rectangular views. Images, video, webpages and other text crave areas bounded by right angles. Gentle rounding on the corners, as in Sequoia, is fine, but the significantly increased radius enforced in Tahoe is a misfit. It leads either to cropping of contents, or to reduction in size of the view and wasted space.

Cropping is misleading, as seen in this enlarged view of a thumbnail image in the Finder's Gallery view, compared to the larger version shown below. The thumbnail misrepresents what's in the original.

Among Apple's claims for this new look is greater consistency. But two windows in the same app, both created using SwiftUI, can't even share a common radius, as shown below in Providable running in macOS 26.2.

Tahoe has also increased the size of its controls, without using that to improve their clarity. The best way to see that is in my Mallyshag demo app.

This looks good in Sequoia (above), but becomes a mess in Tahoe (below) because of its changed control dimensions.

Those three buttons are significantly wider, so they now overlap one another and are wider than the text box below. The user sees no benefit from this, though, as the text within the controls is identical.

App icons need to be both distinguishable and readily recalled. The first ensures that we can tell one from another, and relies on all the visual cues we can muster, including colours, form and content. Tahoe enforces a rule that everything in the icon must fit inside its uniform square with rounded corners, restricting cues to colours and contents. As a result, the icons of many bundled and other Apple apps have become harder to distinguish in a crowded Dock. Some, including Apple's Developer app and the App Store, are indistinguishable, while others have degenerated into vague blotches.

Above are most of the apps bundled in Sequoia, and below are those in Tahoe.

In real life, whiteouts are dangerous because they're so disorienting. There's no horizon, no features in the landscape, and no clues for navigation. We see and work best in visual environments that are rich in colour and tonal contrasts. Tahoe has continued a trend for Light Mode to be bleached-out white, and Dark Mode to be a moonless night. Seeing where controls, views and contents start and end is difficult, and leaves them suspended in the whiteout.

In Light Mode, with default transparency, tool icons and text are clearly distinguished tonally, as are some controls including buttons and checkboxes. However, text entry fields are indistinguishable from the background, and there's a general lack of demarcation, particularly between controls and the list view below.

This technique is used in watercolours to merge layers of colour diffusely, and it's the best description of some of the results of transparency in Liquid Glass. My examples speak for themselves, and are drawn first from Apple's own design for System Settings.

Transparency of the Search box at the top of the sidebar on the left renders it incomprehensible when it's underlaid by scrolled navigational content.

Although the view title Keyboard remains readable, bleed-through of underlying colours is confusing, distracting and aesthetically upsetting.

My next examples show the same window in Providable with a selected list row being scrolled up behind what used to be a window title bar.

With the window in focus, the selection colour overwhelms the traffic-light controls and window title, which should read Drop Files. This also draws attention to the limited width necessary to accommodate rectangular content in a window with excessively rounded corners.

Out of focus, the selected row is less overwhelming, but the traffic lights and title have dissipated into grey blur.

I'm sure that, in the right place and time, the transparency effects of Liquid Glass can be visually pleasing. Not only is this the wrong time and place, but those with visual impairment can no longer remove or even reduce these effects, as the Reduce Transparency control in Accessibility settings no longer reduces transparency in any useful way. That was one of the regressions in 26.1 that hasn't been addressed in 26.2.

In summary, Tahoe's redesign:

* Results in app icons being more uniform, thus less distinguishable and memorable.

* Fails to distinguish tools, controls and other interface elements using differences in tone, making them harder to use.

* Makes a mess where transparent layers are superimposed, and won't reduce transparency when that's needed to render its interface more accessible.

Maybe this is because I'm getting older, but that gives me the benefit of having experienced Apple's older interfaces, with their exceptional quality and functionality.

That was little more than a decade ago, in 2014. Not that I want to turn the clock back, but it would be really helpful if I could read clearly what's on my display once again.

...

Read the original on eclecticlight.co »

4 258 shares, 15 trendiness

Building a macOS app to know when my Mac is thermal throttling

This is the story of how I built MacThrottle.

I've been very happy with my M2 MacBook Air for the past few years. However, when using an external display, especially a very demanding one like a 4K 120Hz display, I've noticed it started struggling more. Since it lacks fans, you can't hear it struggling, but you can feel it as everything becomes very slow or unresponsive: that's when thermal throttling kicks in.

I know it's thermal throttling because I can see in iStat Menus that my CPU usage is 100% while the power usage in watts goes down.

It's even more obvious with MX Power Gadget: you can see the power usage and frequency of the performance cores dropping while usage stays at 100%:

I've also hit thermal throttling with my work MacBook Pro. It's the 14″ M4 Max variant, which is the worst variant because the thermal envelope of the 14″ is too small for the maximum output of the M4 Max. On my previous 14″ M1 Pro MacBook Pro, I never even heard the fans in 3 years…

That being said, I still love Apple Silicon for the performance and power usage; it's still a dramatic improvement over the Intel days. 🫶

Anyway, I wanted to know: is there a way to tell if the Apple Silicon SoC is thermal throttling that is not based on heuristics like in my screenshot?

This was a wilder ride than I expected. It's possible to know programmatically if the Mac is throttled, because macOS exposes this in various but inconsistent ways.

The approach that Apple recommends is to use ProcessInfo.thermalState from Foundation:
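
The original snippet isn't reproduced in this excerpt; a minimal equivalent of reading that property looks like this:

```swift
import Foundation

// ProcessInfo exposes four documented states.
switch ProcessInfo.processInfo.thermalState {
case .nominal:  print("nominal")
case .fair:     print("fair")
case .serious:  print("serious")
case .critical: print("critical")
@unknown default: print("unknown")
}
```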

Sounds good, right? However, I knew that another tool could provide this information, though it needed root: powermetrics.

Both report the pressure level to be "nominal", so they must be the same… right?

After running a few stress tests (stress-ng --cpu 0 -t 600), I started to see the two values diverge!

For some reason, the granularity is different between ProcessInfo.thermalState and powermetrics. They have a different number of possible states, and they don't line up.

Here is my empirical experience:

I never managed to hit these states, so I don't know if they match, but they're technically defined:

In practice, when my Mac starts getting hot, from the powermetrics perspective it goes into "moderate", and when it starts throttling, it goes into "heavy". The problem is that with ProcessInfo, both are covered by the "fair" state, so it's not really useful for knowing when the Mac is actually throttling. ☹️

I thought maybe this was an iOS vs macOS thing? But Apple references it in the macOS docs as well. Maybe it was more consistent on Intel Macs?

I stumbled upon this article from Dave MacLachlan, a Googler working on Apple stuff, from 2020. I learned that there are other CLI tools to get thermal data, but they don't seem to work on my Apple Silicon MacBook:

But the most interesting thing I learned is that the data powermetrics shows actually comes from thermald. And thermald writes the current thermal pressure to the Darwin notification system (notifyd)!

The various levels are defined in OSThermalNotification.h, according to the article. Indeed:
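
For reference, the macOS flavor of that header in Apple's open-source Libc defines the levels roughly like this (paraphrased from memory, so verify against the header itself):

```c
/* OSThermalNotification.h (Apple Libc), macOS variant -- paraphrased. */
typedef enum {
    kOSThermalPressureLevelNominal  = 0,
    kOSThermalPressureLevelModerate = 1,
    kOSThermalPressureLevelHeavy    = 2,
    kOSThermalPressureLevelTrapping = 3,
    kOSThermalPressureLevelSleeping = 4
} OSThermalPressureLevel;
```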

The funny thing is that OSThermalNotification.h is barely referenced anywhere; there are only three pages of Google results. It seems to be used in Bazel, for example. That post was a big help.

What's great about this approach is that it doesn't require root! I can subscribe to the notification system for the com.apple.system.thermalpressurelevel event to get the (good) thermal state!

Here is a snippet to get it in Swift:
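
The article's exact code isn't in this excerpt; a sketch of such a subscription using the notify(3) API could look like this:

```swift
import Foundation
import notify  // Darwin notification client API (notify.h)

var token: Int32 = 0
// The handler fires whenever thermald changes the pressure level; no root needed.
let status = notify_register_dispatch(
    "com.apple.system.thermalpressurelevel",
    &token,
    DispatchQueue.main
) { token in
    var level: UInt64 = 0
    notify_get_state(token, &level)
    // Values follow OSThermalPressureLevel: 0 = nominal … 4 = sleeping.
    print("Thermal pressure level: \(level)")
}
assert(status == 0 /* NOTIFY_STATUS_OK */)
```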

Now that I had a useful value to work with, it was time to build the app.

Armed with Opus 4.5, I set out to build a little menu bar app where I could see, at a glance, if my Apple Silicon die was trying to save itself from crossing 110°C. I called it MacThrottle.

I built a simple SwiftUI app for the menu bar that shows me the status in a superbly original thermometer icon. The thermometer is filled depending on the thermal state, and its color changes from green to red. I have like 20 menu bar icons and they're all monochromatic, so the color in the thermometer is very subtle to keep things consistent.

Apple provides a SwiftUI scene called MenuBarExtra to render a menu bar control. It was simpler than I expected! To make it a pure menu bar app with no Dock icon, you just need to set LSUIElement to true in Info.plist.
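
A minimal skeleton of that setup; the icon and menu contents are placeholders, not the real app's:

```swift
import SwiftUI
import AppKit

@main
struct MacThrottleApp: App {
    var body: some Scene {
        // MenuBarExtra (macOS 13+) renders the app as a menu bar item.
        // Combined with LSUIElement = true, there is no Dock icon.
        MenuBarExtra("MacThrottle", systemImage: "thermometer") {
            Text("Thermal pressure: nominal")  // placeholder status
            Divider()
            Button("Quit") { NSApplication.shared.terminate(nil) }
        }
    }
}
```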

An early version of MacThrottle, just reporting the thermal state.

I explained the various approaches to getting the thermal pressure level in the previous section. But I only discovered that thermald publishes the thermal state to notifyd later, while building the app. So at first, I thought I had to use powermetrics to get useful thermal state changes. Since that unfortunately requires root access, the app needed root access too.

To reduce the scope of what runs as root, I did not run the app itself as root. Instead, the app does not work by default, but it gives you the option to install a helper. It does this through an AppleScript with administrator privileges to prompt for access.

The helper is just a bash script run as a launchd daemon:
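
The script itself isn't shown in this excerpt; a hypothetical reconstruction of the idea, where the output path and polling interval are my assumptions:

```bash
#!/bin/bash
# Runs as root under launchd: poll powermetrics and write the current
# pressure level somewhere the unprivileged app can read it.
OUT="/tmp/macthrottle-thermal-state"   # assumed path

while true; do
  /usr/bin/powermetrics --samplers thermal -n 1 2>/dev/null \
    | awk -F': ' '/pressure level/ {print $2}' > "$OUT"
  sleep 5
done
```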

The bash script writes the thermal state to a file every few seconds, and the app reads it every few seconds!

Once I discovered I could use the notification system without elevated privileges, I replaced the helper with code in the app that reads the value from the notification system directly. Much simpler 🎉

I wanted to show the temperature and fan speed (when supported) in a little graph in the menu bar app. This would allow me to correlate the thermal state with increased temperature, for example.

Again, there are multiple APIs to read the temperature. First, I started using an undocumented API from IOKit, but I realised I was getting ~80ºC max, while iStat Menus or MX Power Gadget would show >100ºC.

Stats, the open source alternative to iStat Menus, helped me use the SMC instead and get the correct values. But the SMC is a much more unstable API because each SoC has different keys to access the temperature data:

Though the M3 keys seem to work on my M4 Max work MacBook Pro…

I ended up using the SMC first to get the accurate temperature, falling back to IOKit if the SMC doesn't work.

For the graph, I wanted a compact visualization that would show me the thermal history at a glance.

The graph packs three layers of information:

* Colored background segments for each thermal state (green for nominal, yellow for moderate, orange for heavy, red for critical)

* A solid line for CPU temperature, with a dynamic Y-axis that adjusts to actual values

* A dashed cyan line for fan speed percentage (on Macs that have fans)

I didn't want to spend too much time making a super fancy graph system. Since it polls every two seconds, the graph gets very busy after a while. So I decided to keep it down to 10 minutes, since the thermal state history is mostly interesting short-term.

When the system was under load, I noticed that hovering over the graph was not very smooth on my 120Hz display. I found out I can add .drawingGroup to my SwiftUI canvas to use GPU rendering! Indeed, I added it, and it was smooth again.
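
For illustration, the modifier sits directly on the drawing view; this skeleton is a stand-in for the app's actual graph code:

```swift
import SwiftUI

struct ThermalGraph: View {
    var body: some View {
        Canvas { context, size in
            // ...chart drawing elided...
        }
        .drawingGroup()  // rasterize offscreen with Metal instead of
                         // re-compositing every layer on each frame
    }
}
```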

Graph with pressure state, temperature and fans. A tooltip on hover to get past values is supported.

I also added notifications so I get alerted when the state changes, in case I miss the menu bar icon. It can alert on specific state transitions, and optionally on recovery. This is useful to know when it's time to kill a VS Code instance or a Docker container!

To be fair, it can get a bit noisy on a struggling MacBook Air…

It's true that I usually already notice when the Mac is getting slow, but sometimes the Mac gets slow when it's swapping heavily. At least now I know when it's just too hot.

Of course, I want the app to start automatically now, since it works so well!

I expected that I would need to write a .plist again, but no: it's extremely easy to prompt the user to add a "login item", as macOS calls it, using SMAppService.
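
A sketch of that call (macOS 13+); the error handling here is illustrative:

```swift
import ServiceManagement

// Registers the app itself as a login item and triggers the system
// consent UI; no hand-written LaunchAgents plist required.
do {
    try SMAppService.mainApp.register()
} catch {
    print("Could not register login item: \(error)")
}
```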

Since I don't have an Apple Developer account, I can't notarize the app, so installing it from the releases is going to require a few extra clicks in Privacy and Security.

And for Macs that disallow it entirely, building from source with Xcode is the only way. I added instructions in the README.

Hope this is useful to someone else!

...

Read the original on stanislas.blog »

5 250 shares, 14 trendiness

One year of keeping a tada list

A tada list, or to-done list, is where you write out what you accomplished each day. It's supposed to make you focus on things you've completed instead of focusing on how much you still need to do. Here is what my tada lists look like:

I have a page for every month. Every day, I write out what I did. At the end of the month, I make a drawing in the header to show what I did that month. Here are a few of the drawings:

In January, I started a Substack, made paintings for friends, and wrote up two Substack posts on security.

In February, I took a CSS course and created a component library for myself.

In March, I read a few books, worked on a writing app, took a trip to New York, and drafted several posts on linear algebra for this Substack.

(If you're wondering where these posts are, there's a lag time between draft and publish, where I send the posts out for technical review and do a couple of rounds of rewrites.)

I don't really spend much time celebrating my accomplishments. Once I accomplish something, I have a small hit of "Yay, I did it," before moving on to "So, what else am I going to do?" For example, when I finished my book (a three-year-long effort), I had a couple of weeks of "Yay, I wrote a book," before this became part of my normal life, and it turned into "Yes, I wrote a book, but what else have I done since then?"

I thought the tada list would help reinforce "I did something!" but it also turned into "I was able to do this thing, because I did this other thing earlier". I'll explain with an example.

For years I have been wanting to create a set of cards with paintings of Minnesota for family and friends. The problem: I didn't have many paintings of Minnesota, and didn't like the ones I had.

So I spent 2024 learning a lot about watercolor pigments, and color-mixing hundreds of greens, to figure out which greens I wanted to use in my landscapes:

Then I spent the early part of 2025 doing a bunch of value studies, because my watercolors always looked faded:

(Value studies are where you try to make your paintings look good using black and white only, so you're forced to work using value instead of color. It's an old exercise to improve your art.)

Then in the summer, I did about 50 plein air paintings of Minnesota landscapes:

(Plein air = painting on location. Please admire the wide variety of greens I mixed for these paintings.)

Look at how much better these are:

Out of those 50, I picked my top four and had cards made. Thanks to the "tada" list, it wasn't just "I made some cards", it was:

* And spent most of my summer painting outside?

The payoff for all that work was these lovely cards.

For a while now, I have wanted a mustache-like templating language, but with static typing. Last year, I created a parser combinator library called `tarsec` for TypeScript, and this year, I used it to write a mustache-like template language called `typestache` for myself that had static typing.

I've since used both `tarsec` and `typestache` in personal projects, like this one that adds file-based routing to express and autogenerates a client for the frontend.

Part of the reason I like learning stuff is it lets me do things I couldn't do before. I think acknowledging that you CAN do something new is an important part of the learning process, but I usually skip it. The tada list helps.

Maybe the most obvious con: a tada list forces you to have an accomplishment each day so you can write it down, and this added stress to my day.

Also, a year is a long time to keep it going, and I ran out of steam by the end. You can see that my handwriting gets worse as time goes on,

and for the last couple of months, I stopped doing the pictures.

It's fun to see things on the list that I had forgotten about. For example, I had started this massive watercolor painting of the Holiday Inn in Pacifica in February, and I completely forgot about it.

Will I do this next year? Maybe. I need to weigh the accomplishment part against the work it takes to keep it going. It's neat to have this artifact to look back on either way.

A few more of the several color studies I did:

...

Read the original on www.ducktyped.org »

6 241 shares, 17 trendiness

Stepping down as maintainer after 10 years · Issue #3777 · mockito/mockito

In March 2026, I will have been a Mockito maintainer for 10 years (nearly a third of my whole life). Looking ahead, I decided that a decade milestone is a good moment to pass on maintainership to other folks. In the coming months until March, I will spend time ensuring a smooth transition in maintainership.

In this issue I list several considerations behind why I made the decision. Communication and discussion of plans for future maintainership will happen somewhere else, most likely in a separate GitHub issue. Stay tuned for that.

As you might know, Mockito 5 shipped a breaking change where its main artifact is now an agent. That's because starting with JVM 22, the previous so-called "dynamic attachment of agents" is put behind a flag. This change makes sense from a security point of view and I support it.

However, the way this was put forward to Mockito maintainers was energy-draining, to say the least. Mockito is probably the biggest user of such an agent and is often looked at for inspiration by other projects. As such, Mockito often pioneers support for JVM features, built on a solid foundation with ByteBuddy. Modules were one such feature that took months of hard work by Rafael to figure out, including providing feedback to JVM maintainers.

Unfortunately, such a collaborative way of working was not the case when discussing agents. To me, it felt like the feature was presented as a done deal because of security. While dynamic attachment is problematic in many ways, no alternative solutions were proposed. That's okay, as Mockito pioneers these solutions, yet in this case I felt we were left alone.

My personal take is that the folks involved with the change severely underestimated the societal impact it had. The fact that proper build support is non-existent to this day shows that agents are not a priority. That's okay if it isn't a priority, but when it was communicated with Mockito I perceived it as "Mockito is holding the JVM ecosystem back by using dynamic attachment, please switch immediately and figure it out on your own".

Here, the fact that I (and others) are volunteers doing their best for the project is important to understanding the societal impact. When you put individuals under pressure who do this work in their own time out of goodwill, things crumble. It's commonly joked about with XKCDs that the whole open source world relies on a couple of individuals. That couldn't be more true in this situation, where the collaborative system collapses when too much pressure is put on individual folks.

This saga planted the seed to reconsider my position as maintainer.

It's undeniable that Kotlin as a language has grown in popularity in recent years. While Mockito maintains several flavors for JVM languages, these packages typically include sugar that makes integration nicer. In all cases, mockito-core remains the place where functionality is implemented.

Unfortunately, this model doesn't apply nicely to Kotlin. Where almost all JVM languages work similarly under the hood, Kotlin often does things differently. This means that in several places in mockito-core, there are separate flows dedicated to Kotlin. Most often that's a direct result of Kotlin doing (in my opinion) shenanigans on the JVM that the JVM never intended to support, yet was able to.

Even within Kotlin itself, features don't work consistently. Suspend functions are the most well-known example. As such, Mockito code becomes more spaghetti-like, its API sometimes fully duplicated just to support a core Kotlin language feature, and overall less maintainable.

While I fully understand the reasons that developers enjoy the feature richness of Kotlin as a programming language, its underlying implementation has significant downsides for projects like Mockito. Quite frankly, it's not fun to deal with.

To me, a future where Kotlin becomes more predominant is not a future that makes me hopeful I can keep dedicating energy to Mockito.

I have always been a fan of open source work and have contributed to hundreds of projects in all these years. Mockito is my most important project, but I have also consistently worked on others. In recent months, I have rediscovered the joy of programming by working on Servo. It's a web engine written in Rust.

When I needed to choose how to spend my 2 hours of evening time in a given week, I rarely preferred Mockito in the last year. In the past, Mockito was my go-to and I enjoyed it a lot. Nowadays, Servo and related projects provide significantly more enjoyment.

Justifying why I needed to work on Mockito becomes difficult when (because of the above reasons) it feels like a chore. Volunteering work shouldn't feel like a chore, at least not for a long time.

As you have read, these three factors combined led me to the decision. The first point explains why I started to doubt my position, the second why I am not hopeful for things to change in a good way, and the third how I found enjoyment in a different way.

While these points had an impact on me as maintainer, my hypothesis is that they don't apply to others in the same way. I know others are eager to work on Kotlin support, for example. That's why I concluded that a decade is enough time to have helped Mockito forward. Now it's time for somebody else to take over, as I believe that's in the best interest of Mockito as a project. Because ultimately that's why I chose to become maintainer in the first place: I believed that with my work, I could improve Mockito for millions of software engineers.

For those wondering: yes, I wholeheartedly advise everyone to take on a volunteering task such as maintaining an open source project. It was an honour and a privilege to do so, and I thank those that I enjoyed working with.

...

Read the original on github.com »

7 225 shares, 16 trendiness

Scratchapixel

Learn computer graphics from scratch and for free.

The usual beloved lessons.

A blog, some private courses,

and an upcoming book project!

We've been missing a space where we can talk about topics related to 3D programming—and also broader themes like AI and education that connect to the work we do at Scratchapixel.

This is a new project we've started. For now, we're focusing on offering a course specifically targeted at learning the Vulkan API. Interested? Read more…

Take Scratchapixel to the Beach

We're working on a book so you can keep a physical reference for computer graphics—very useful if you're stuck on a desert island with no internet. Read more…

These lessons are structured to introduce 3D rendering concepts in a beginner-friendly order. Unlike most resources, we start with hands-on results before diving into theory.

3D Computer Graphics Primer: Ray-Tracing as an Example

What Do I Need to Get Started?

This section is dedicated to explaining the mathematical theories and tools used in creating images and simulations with a computer. It's not intended as a starting point, but rather as a reference to be consulted when these topics are mentioned in lessons from other sections.

A collection of lessons on specific topics that don't necessarily fit into any broad category but are nonetheless interesting and cool.

Saving and reading images to and from disk, image file formats, color spaces, color management, and basic image processing.

Simulating the Colors of the Sky

Various techniques useful for developing 3D tools and interacting with 3D content in general.

...

Read the original on www.scratchapixel.com »

8 198 shares, 21 trendiness

CEOs are hugely expensive. Why not automate them?

On Wednesday 31 May, it was reported that Alex Mahon, CEO of Channel 4, could receive record annual pay of £1.4m. This article was originally published on 26 April 2021 and asks: as executive pay continues to rise, does a company need a CEO at all?

Over the next two weeks, the boards of BAE Systems, AstraZeneca, Glencore, Flutter Entertainment and the London Stock Exchange all face the possibility of shareholder revolts over executive pay at their forthcoming annual general meetings (AGMs). As the AGM season begins, there is a particular focus on pay.

Executive pay is often the most contentious item at an AGM, but this year is clearly exceptional. The people running companies that have been severely impacted by Covid-19 can't be blamed for the devastation of their revenues by the pandemic, but they also can't take credit for the government stimulus that has kept them afloat. Last week, for example, nearly 40 per cent of shareholders in the estate agents Foxtons voted against its chief executive officer, Nicholas Budden, receiving a bonus of just under £1m; Foxtons has received about £7m in direct government assistance and is benefiting from the government's continued inflation of the housing market. The person who has done most to ensure Foxtons' ongoing good fortune is not Nicholas Budden but Rishi Sunak.

Under the Enterprise and Regulatory Reform Act, executive pay is voted on at least every three years, and this process forces shareholders and the public to confront how much the people at the top take home. Tim Steiner, the highest-paid CEO in the FTSE 100, was paid £58.7m in 2019 for running Ocado, which is 2,605 times the median income of his employees for that year, while the average FTSE 100 CEO makes more than £15,000 a day.

As the High Pay Centre's annual assessment of CEO pay points out, a top-heavy wage bill extends beyond the CEO, and could be unsustainable for any company this year. "When one considers high earners beyond the CEO," says the report, "there is actually quite significant potential for companies to safeguard jobs and incomes by asking higher-paid staff to make sacrifices".

In the longer term, as companies commit to greater automation of many roles, it's pertinent to ask whether a company needs a CEO at all.

A few weeks ago Christine Carrillo, an American tech CEO, raised this question herself when she tweeted a spectacularly tone-deaf appreciation of her executive assistant, whose work allows Carrillo to "write [and] surf every day" as well as "cook dinner and read every night". In Carrillo's unusually frank description of the work her EA does — "most of her emails, most of the work on fundraising, playbooks, operations, recruitment, research, updating investors, invoicing and so much more" — she guessed that this unnamed worker "saves me 60% of time".

Predictably, a horde arrived to point out that if someone else is doing 60 per cent of Carrillo's job, they should be paid 50 per cent more than her. But as Carrillo — with a frankly breathtaking lack of self-awareness — informed another commenter, her EA is based in the Philippines. The main (and often the only) reason to outsource a role is to pay less for it.

[See also: The scourge of greedflation]

If most of a CEO's job can be outsourced, this suggests it could also be automated. But while companies are racing to automate entry- and mid-level roles, senior executives and decision makers show much less interest in automating themselves.

There's a good argument for automating from the top rather than from the bottom. As we know from the annotated copy of Thinking, Fast and Slow that sits (I assume) on every CEO's Isamu Noguchi nightstand, human decision-making is the product of irrational biases and assumptions. This is one of the reasons strategy is so difficult, and roles that involve strategic decision-making are so well paid. But the difficulty of making genuinely rational strategic decisions, and the cost of the people who do so, are also good reasons to hand this work over to software.

Automating jobs can be risky, especially in public-facing roles. After Microsoft sacked a large team of journalists in 2020 in order to replace them with AI, it almost immediately had to contend with the PR disaster of the software's failure to distinguish between two women of colour. Amazon had to abandon its AI recruitment tool after it learned to discriminate against women. And when GPT-3, one of the most advanced AI language models, was used as a medical chatbot in 2020, it responded to a (simulated) patient presenting with suicidal ideation by telling them to kill themselves.

What links these examples is that they were all attempts to automate the kind of work that happens without being scrutinised by lots of other people in a company. Top-level strategic decisions are different. They are usually debated before they're put into practice — unless, and this is just another reason to automate them, employees feel they can't speak up for fear of incurring the CEO's displeasure.

Where automated management — or "decision intelligence", as Google and IBM call it — has been deployed, it has produced impressive results. Hong Kong's mass transit system put software in charge of scheduling its maintenance in 2004, and enjoys a reputation as one of the world's most punctual and best-run metros.

Clearly, chief execs didn't get where they are today by volunteering to clear out their corner offices and hand over their caviar spittoons to robots. But management is a very large variable cost that only seems to increase — Persimmon's bonus scheme paid out half a billion pounds to 150 execs in a single year — while technology moves in the other direction, becoming cheaper and more reliable over time.

It is often asked whether CEO pay is fair or ethical. But company owners and investors should be asking whether their top management could be done well by a machine — and if so, why it is so expensive.

[See also: The milkman on a mission]

...

Read the original on www.newstatesman.com »

9 157 shares, 6 trendiness

The Global Rise of Low-Quality AI Videos

Kapwing's new research shows that 21-33% of YouTube's feed may consist of AI slop or brainrot videos. But which countries and channels are achieving the greatest reach — and how much money might they make? We analyzed social data to find out.

As the debate over the creative and ethical value of using AI to generate video rages on, users are getting interesting results out of the machine, and artist-led AI content is gaining respect in some areas. Top film schools now offer courses on the use and ethics of AI in film production, and the world's best-known brands are utilizing AI in their creative process — albeit with mixed results.

Sadly, others are gaming the novelty of AI's prompt-and-go content, using these engines to churn out vast quantities of "AI slop" — the "spam" of the video-first age.

Wiktionary defines a "slopper" as "Someone who is overreliant on generative AI tools such as ChatGPT; a producer of AI slop." Along with the proliferation of "brainrot" videos online, sloppers are making it tough for principled and talented creators to get their videos seen.

The main point of AI slop and brainrot videos is to grab your attention, and this type of content seems harder and harder to avoid. But exactly how prevalent is it in the grand scheme of things?

Kapwing analyzed the view and subscriber counts of trending AI slop and brainrot YouTube channels to find out which ones are competing most fiercely with handmade content around the world and how much revenue the leading sloppers are making.

We identified the top 100 trending YouTube channels in every country and noted the AI slop channels. Next, we used socialblade.com to retrieve the number of views, subscribers, and estimated yearly revenue for these channels and aggregated these figures for each country to deduce their popularity. We also created a new YouTube account to record the number of AI slop and brainrot videos among the first 500 YouTube Shorts we cycled through, to get an idea of the new-user experience.

* Spain's trending AI slop channels have a combined 20.22 million subscribers — the most of any country.

* In South Korea, the trending AI slop channels have amassed 8.45 billion views.

* The AI slop channel with the most views is India's Bandar Apna Dost (2.07 billion views). The channel has estimated annual earnings of $4,251,500.

* U.S.-based slop channel Cuentos Facinantes [sic] has the most subscribers of any slop channel globally (5.95 million).

* Brainrot videos account for around 33% of the first 500 YouTube Shorts on a new user's feed.

First, we analyzed the 100 top trending video channels in every country to see how prevalent AI slop has become locally.

We found that Spain has 20.22 million AI slop channel subscribers among its trending channels. This is despite Spain having fewer AI slop channels (eight) among its top 100 channels than countries including Pakistan (20), Egypt (14), South Korea (11) and the U.S. (nine). The U.S. has the third-most slop subscribers (14.47 million) — 28.4% fewer than Spain but 13.18% more than fourth-placed Brazil (12.56 million).

Spain's AI slop subscriber base is boosted significantly by one channel, Imperio de jesus, which had 5.87 million subscribers at the time of analysis, making it the world's second-biggest AI slop channel (see "A Spanish-Language U.S. AI Slop Channel is the World's Most-Subscribed" below).

Promising to "strengthen faith in Jesus through fun interactive quizzes," the channel's videos put the Son of God in a range of either/or scenarios where he must give the correct answer to get the better of Satan, the Grinch and others. Two other Spanish channels with over 3.5 million subscribers each focus on comedy/brainrot shorts.

While Spain's eight trending AI slop channels may have the most subscribers, South Korea's 11 trenders have the most views: some 8.45 billion in total. This is nearly 1.6 times as many as second-placed Pakistan (5.34B), 2.5 times as many as the third-placed U.S. (3.39B) and 3.4 times as many as Spain (2.52B).

South Korean AI slop channel Three Minutes Wisdom alone accounts for nearly a quarter of the country's massive view count, with 2.02 billion views. Three Minutes Wisdom has the second-highest view count of any trending slop channel globally, and we estimate the channel's annual ad income to be around US$4,036,500.

The channel's 140 videos typically feature photorealistic(ish) footage of wild animals being defeated by cute pets, and the URL in the bio appears to be an affiliate link to Coupang, South Korea's largest online retailer.

Next, we identified the specific channels with the most subscribers and views globally. U.S.-based Cuentos Facinantes [sic] has 5.95 million subscribers, making it the trending AI slop channel with the biggest following. This is only 1.4% more than Imperio de jesus (5.87M) but over 50% more than the eighth-, ninth- and tenth-most subscribed slop channels.

Cuentos Facinantes [sic] (Fascinating Tales) has attracted some 1.28 billion views, serving up low-quality Dragon Ball-themed videos. The channel was established in 2020, but the earliest video currently hosted is from as recently as Jan. 8, 2025.

Five of the other ten trending AI slop channels with the most views are based in South Korea, with others in Egypt, Brazil and Pakistan. But the channel with the most views of all is in India. Bandar Apna Dost features over 500 videos, mainly featuring "a realistic monkey in hilarious, dramatic, and heart-touching human-style situations," many of which are variations on identical set-ups. The channel also has around 100,000 followers on Instagram; on Facebook, the videos are attributed to a "digital creator" named Surajit Karmakar.

If they're monetizing their views, channels like Bandar Apna Dost may be making millions of dollars per year. But YouTube faces a dilemma over AI content.

On the one hand, YouTube CEO Neal Mohan cites generative AI as the biggest game-changer for YouTube since the original "revelation" that ordinary folk wanted to watch each other's videos, saying that generative AI can do for video what the synthesizer did for music.

On the other hand, the company worries that its advertisers will feel devalued by having their ads attached to slop.

The AI slop channels with the highest potential earnings mostly line up with the top ten for views. This is because Social Blade estimates channel income based on annual views, and most of these channels' videos have been published over the last few months. Using an average rate of revenue per 1,000 views, Bandar Apna Dost has an estimated annual revenue of $4.25 million.
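
As a back-of-envelope illustration of that kind of estimate (the per-1,000-view rate here is an assumption; real rates vary widely by country and ad format):

```python
annual_views = 2.07e9  # Bandar Apna Dost's view count, per the article
rpm = 2.05             # assumed revenue per 1,000 views, in dollars
print(f"${annual_views / 1000 * rpm:,.0f}")  # about $4,243,500, near the $4.25M figure
```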

"The genius is going to lie in whether you did it in a way that was profoundly original or creative," Mohan told Wired. "Just because the content is 75 percent AI generated doesn't make it any better or worse than a video that's 5 percent AI generated. What's important is that it was done by a human being."

Whether those flooding the platform with auto-generated content to make a buck care about being known as creative geniuses is another matter.

Finally, we simulated the experience of an untainted YouTube Shorts algorithm by establishing a new YouTube account and noting the occurrence of AI slop or brainrot videos among the first 500 videos in the feed. While we were spared either of these for the first 16 videos in the feed, in total, 104 (21%) of the first 500 videos were AI-generated, and 165 (33%) of those 500 videos were brainrot.

Whether this prevalence of slop and brainrot on our test feed represents the engineering of YouTube's algorithm or the sheer proliferation of such videos being uploaded is a mystery that only Google can answer. But the Guardian's analysis of YouTube's figures for July revealed that nearly one in 10 of the fastest-growing YouTube channels globally are showing "AI-generated content only".

And brainrot, like AI slop, is a mixed blessing for YouTube as a company: it may lack the soul or professionalism with which YouTube's advertisers wish to be associated, but brainrot is moreish by design.

Brainrot's natural home is the feed, whether viewers are compelled to keep watching to "numb" themselves from the trials of the world around them, or to stay up to date with the potentially infinite "lore" of emergent brainrot subgenres, which incorporate recurring characters and themes.

The term "AI slop" has been variously pinned to "unreviewed" content, to AI-generated media that may have been reviewed but with minimal quality standards (like Coke's Christmas ads), and to all AI-generated content. As Rob Horning points out, the idea that only some AI media is slop propagates the idea that the rest is legitimate and the technology's proliferation is inevitable.

Part of the threat of AI slop and some forms of brainrot is in how they have been normalized and may come across as harmless fun. But slop and brainrot prey on the laziest areas of our mental faculties. Researchers have shown how the "illusory truth effect" makes people more likely to believe in claims or imagery the more often they encounter them. AI tools make it easy for bad-faith actors to construct a fake enemy or situation that supports their underlying political beliefs or goals. Seeing is believing, studies have shown, even when the viewer has been explicitly told that a video is fake.

Meanwhile, "information of any kind, in enough quantities, becomes noise," writes researcher and artist Eryk Salvaggio. "The prevalence of AI slop is a symptom of information exhaustion, and an increased human dependency on algorithmic filters to sort the world on our behalf." And, as Doug Shapiro notes, as this noise drowns out the signal on the web, including social networks, the value of trust will rise — and so will corporate and political efforts to fabricate and manipulate trust.

And this is why, rather than attending film school to study AI techniques, it may be more valuable for creators and consumers alike — especially those still in school — to double down on Media Studies.

We manually researched the top 100 trending YouTube channels in every country (on playboard.co) to isolate the AI slop channels.

We then used socialblade.com to retrieve the number of views, subscribers and estimated yearly revenue (using the midpoint values) for these channels.

We aggregated these figures for each country's AI slop channels to get an idea of their popularity.

In addition, we created a new YouTube account and recorded the number of AI slop and brainrot videos among the first 500 YouTube Shorts we cycled through, to get an idea of the new-user experience.

Data is correct as of October 2025.

...

Read the original on www.kapwing.com »

10 156 shares, 16 trendiness

PySDR: A Guide to SDR and DSP using Python

As a concept, it refers to using software to perform signal processing tasks that were traditionally performed by hardware, specific to radio/RF applications. This software can be run on a general-purpose computer (CPU), FPGA, or even GPU, and it can be used for real-time applications or offline processing of recorded signals. Analogous terms include "software radio" and "RF digital signal processing".

As a thing (e.g., an SDR), it typically refers to a device that you can plug an antenna into and receive RF signals, with the digitized RF samples being sent to a computer for processing or recording (e.g., over USB, Ethernet, PCI). Many SDRs also have transmit capabilities, allowing the computer to send samples to the SDR, which then transmits the signal at a specified RF frequency. Some embedded-style SDRs include an onboard computer.

The digital processing of signals; in our case, RF signals.

This textbook acts as a hands-on introduction to the areas of DSP, SDR, and wireless communications. It is designed for someone who is:

Interested in using SDRs to do cool stuff

Relatively new to DSP, wireless communications, and SDR

Better at understanding equations after learning the concepts

Looking for concise explanations, not a 1,000-page textbook

An example is a Computer Science student interested in a job involving wireless communications after graduation, although it can be used by anyone itching to learn about SDR who has programming experience. As such, it covers the necessary theory to understand DSP techniques without the intense math that is usually included in DSP courses. Instead of burying ourselves in equations, an abundance of images and animations is used to help convey the concepts, such as the Fourier series complex plane animation below. I believe that equations are best understood after learning the concepts through visuals and practical exercises. The heavy use of animations is why PySDR will never have a hard-copy version sold on Amazon.

This textbook is meant to introduce concepts quickly and smoothly, enabling the reader to perform DSP and use SDRs intelligently. It's not meant to be a reference textbook for all DSP/SDR topics; there are plenty of great textbooks already out there, such as Analog Devices' SDR textbook and dspguide.com. You can always use Google to recall trig identities or the Shannon limit. Think of this textbook as a gateway into the world of DSP and SDR: it's lighter and less of a time and monetary commitment compared to more traditional courses and textbooks.

To cover foundational DSP theory, an entire semester of "Signals and Systems", a typical course within electrical engineering, is condensed into a few chapters. Once the DSP fundamentals are covered, we launch into SDRs, although DSP and wireless communications concepts continue to come up throughout the textbook.

Code examples are provided in Python. They utilize NumPy, which is Python's standard library for arrays and high-level math. The examples also rely upon Matplotlib, a Python plotting library that provides an easy way to visualize signals, arrays, and complex numbers. Note that while Python is "slower" than C++ in general, most math functions within Python/NumPy are implemented in C/C++ and heavily optimized. Likewise, the SDR API we use is simply a set of Python bindings for C/C++ functions/classes. Those who have little Python experience yet a solid foundation in MATLAB, Ruby, or Perl will likely be fine after familiarizing themselves with Python's syntax.
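
For flavor, a tiny example in that NumPy/Matplotlib style (written for this summary, not taken from the book): generate a 100 Hz tone and look at its spectrum.

```python
import numpy as np
import matplotlib.pyplot as plt

fs = 1000                            # sample rate in Hz
t = np.arange(1000) / fs             # one second of sample times
x = np.sin(2 * np.pi * 100 * t)      # a 100 Hz tone
X = np.fft.fftshift(np.fft.fft(x))   # spectrum, DC shifted to the center
f = np.linspace(-fs / 2, fs / 2, len(X), endpoint=False)

plt.plot(f, np.abs(X))
plt.xlabel("Frequency [Hz]")
plt.ylabel("|X(f)|")
plt.show()
```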

...

Read the original on pysdr.org »
