10 interesting stories served every morning and every evening.




1 1,419 shares, 79 trendiness

copilot edited an ad into my pr

After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.

This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon.

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

...

Read the original on notes.zachmanson.com »

2 524 shares, 56 trendiness

How to turn anything into a router

I don't like to cover "current events" very much, but the American government just revealed a truly bewildering policy effectively banning the import of new consumer router models. This is ridiculous for many reasons, but if it does indeed come to pass, it may be beneficial to learn how to "homebrew" a router.

Fortunately, you can make a router out of basically anything resembling a computer.

I've used a Linux-powered mini-PC as my own router for many years, and have posted a few times before about how to make Linux routers and firewalls in that time. It's been rock-solid stable, and the only issue I've had over the years was wearing out a $20 mSATA drive. While I typically use Debian, Alpine Linux probably works just as well, perhaps better if you're familiar with it. As long as the device runs Linux well and has a couple of USB ports, you're good to go. Mini-PCs, desktop PCs, SBCs, rackmount servers, old laptops, or purpose-built devices will all work.

To be clear, this is not meant to be a practical "solution" to the US policy; it's to show people a neat "hack" you can do to squeeze more capability out of hardware you might already own, and to demonstrate that there's nothing special about routers - they're all just computers, after all.

My personal preference is a purpose-made mini PC with a passively cooled design.

However, basically anything will work. It should have two Ethernet interfaces, but a standard USB-Ethernet dongle will also do the trick. It won't be as reliable as an onboard interface, but it will probably be good enough. For example, this janky pile of spare parts can easily push 820-850 Mbps on the wired LAN and ~300 Mbps on the wireless network:

This particular device is a dual-core Celeron 3205U running at a blistering 1.5 GHz. Even that measly chip is more than capable of routing traffic for an entire house or small business.

Going back even further, this was my setup for the first couple weeks of the fall 2016 semester:

It might be hard to tell what's going on here just by looking, so let me break it down:

* An ExpressCard-PCIe bridge in the ThinkPad's expansion bay

* A trash-picked no-name Ethernet card in the PCIe slot, missing its mounting bracket

* An ancient Cisco 2960 100 Mbit switch, purchased for $10 from my college

* A D-Link router acting as an access point (an "as-is" thrift store find with a bad WAN port)

Yes, this is indeed a router! It probably looks like a pile of junk, because it is, but it's junk that's perfectly able to perform the job I gave it!

When set up, the system will be configured like this:

Both LAN interfaces will be bridged together, meaning that devices on the wired and wireless networks will be able to communicate normally. If one LAN port isn't enough, you can plug in as many USB Ethernet dongles as you need and bridge 'em all together. It won't be quite as fast as a "real" switch, but if you're looking for performance, you might've come to the wrong place today.
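As a sketch of what attaching an extra dongle looks like at runtime (the interface names here are assumptions - the dongle is assumed to appear as eth2, and the bridge is named br0 as in the config later in the article):

```shell
# Bring the new USB dongle up and attach it to the existing bridge.
# Check your actual interface names first with: ip -br link show
sudo ip link set eth2 up
sudo ip link set eth2 master br0   # eth2 now switches traffic with the other bridge ports
```

To make the change permanent, the interface also needs to be added to the bridge_ports line in the bridge's config.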

As mentioned before, this will run Debian as the operating system, and uses very few pieces that don't come with the base install:

* Any firmware blobs not in the default install

Also, I should mention that I'll only be setting up IPv4 here. IPv6 works great for stuff like mobile devices, but I still find it too frustrating inside a LAN. Perhaps my brain is too calcified already, but I'll happily hold out on IPv4 for now.

* If you can, set the device to the lowest clock speed, but disable any power management for USB or PCI devices.

* Find the option like "Restore after AC Power Loss" and turn it ON.

* Some devices won't properly power up if there's no display connected. If your device is like this, stick a "dummy dongle" into the HDMI port.

* Lots of hardware will only work correctly with the non-free-firmware repository enabled.

Depending on your wireless hardware, you may need to install an additional firmware package.

sudo apt install firmware-iwlwifi

sudo apt install firmware-ath9k-htc

Or if you have something truly ancient like I do:

sudo apt install firmware-atheros

After the initial install is done, there are some additional utilities to install:

sudo apt install bridge-utils hostapd dnsmasq

In terms of software, that's about all that's needed. There should be about 250 packages on the system in total.

In modern Linux systems, network interface names are based on physical connection and driver type, like enp0s31f6. I find the old format, like ethX, much simpler, so each interface gets a persistent name.

For each network interface, create a file at /etc/systemd/network/10-persistent-ethX.link:

[Match]
MACAddress=AA:BB:CC:DD:00:11

[Link]
Name=ethX
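The MAC addresses to put in these files can be read off with ip -br link show. As a sketch, the file itself can be generated from the shell (the MAC and name below are the placeholder values from above - substitute your own):

```shell
#!/bin/sh
# Generate a persistent-name .link file for one interface.
# MAC and NAME are placeholders; take real values from `ip -br link show`.
MAC="AA:BB:CC:DD:00:11"
NAME="eth0"
printf '[Match]\nMACAddress=%s\n\n[Link]\nName=%s\n' "$MAC" "$NAME" \
    > "10-persistent-${NAME}.link"
# Review the result, then move it into /etc/systemd/network/ and reboot.
cat "10-persistent-${NAME}.link"
```

Repeat once per interface; the renaming takes effect on the next boot.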

This uses a USB Wi-Fi dongle to act as an access point, creating a network for other devices to join. It will not work as well as a purpose-built device, but it's better than nothing. I've had reasonably good results with this, but I also live in a very small building where I'm rarely more than 10m away from the router. If you rely heavily on your wireless network working properly, try to find a dedicated access point device. An old router, even one from over a decade ago, will probably work fine for this - just connect to its LAN port (not the WAN port!).

To set up the Wi-Fi network, create a config file at /etc/hostapd/hostapd.conf:

interface=wlan0
bridge=br0
hw_mode=g
channel=11
ieee80211d=1
country_code=US
ieee80211n=1
wmm_enabled=1
ssid=My Cool and Creative Wi-Fi Name
auth_algs=1
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=mysecurepassword

By default the hostapd service ships masked and can't be started, so we unmask it before enabling the service.

sudo systemctl unmask hostapd
sudo systemctl enable --now hostapd

The "outside" interface will be the WAN, and the "inside" will be the LAN. This all goes in /etc/network/interfaces. Note that the LAN interface does not get a default gateway.

allow-hotplug eth0
allow-hotplug eth1
auto wlan0
auto br0

iface eth0 inet dhcp

iface br0 inet static
    bridge_ports eth1 wlan0
    address 192.168.1.1/24

After this step, give the device a quick reboot. It should come back up nicely. If it doesn't, confirm that the previous steps were done correctly, and check for errors by running journalctl -e -u networking.service

If it all worked correctly, the output of this command should look like the following:

$ sudo brctl show br0
bridge name     bridge id       STP enabled     interfaces
br0             8000.xxxxx      no              eth1
                                                wlan0

Create /etc/sysctl.d/10-forward.conf and add this line to enable IP forwarding:

net.ipv4.ip_forward=1

Then apply it without a reboot:

sudo systemctl restart systemd-sysctl.service

The firewall rules and NAT configuration are both handled by the newer netfilter system in Linux, which we manage using nftables. On Debian, the ruleset lives in /etc/nftables.conf.

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state { established, related } counter accept
        ip protocol icmp counter accept
        iifname "br0" tcp dport { 22, 53 } counter accept
        iifname "br0" udp dport { 53, 67, 68 } counter accept
        counter
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname "eth0" oifname "br0" ct state { established, related } counter accept
        iifname "br0" oifname "eth0" ct state { new, established, related } counter accept
        counter
    }

    chain output {
        type filter hook output priority 0; policy accept;
        counter
    }
}

table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        oifname "eth0" counter masquerade
    }
}

This performs NAT, denies all inbound traffic from outside the network, and allows the router device to act as a DNS, DHCP, and SSH server (for management). Pretty much a bog-standard firewall config.

Enable this for the next boot:

sudo systemctl enable nftables.service
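The firewall leaves DNS (53) and DHCP (67/68) open on the LAN side, and dnsmasq, installed earlier, provides both services. The article's excerpt doesn't show that configuration, but a minimal sketch for /etc/dnsmasq.conf - every value here illustrative, matched to the 192.168.1.1/24 bridge above - might look like:

```
interface=br0                                  # serve only the LAN bridge, never the WAN
bind-interfaces
domain-needed                                  # don't forward bare hostnames upstream
bogus-priv                                     # don't forward private-range reverse lookups
dhcp-range=192.168.1.100,192.168.1.200,12h     # hand out leases in the bridge subnet
dhcp-option=option:router,192.168.1.1          # clients' default gateway
```

Restart the dnsmasq service after editing, and LAN clients should pick up addresses and resolve names through the router.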

...

Read the original on nbailey.ca »

3 381 shares, 17 trendiness

Philly courts will ban all smart eyeglasses starting next week

Philly courtrooms are remaining friendly to the Luddites. At least with eyewear.

The Philadelphia court system is implementing a ban on all forms of smart or AI-integrated eyewear, the First Judicial District of Pennsylvania announced this week.

The ban will go into effect Monday.

From then on, any eyewear with video and audio recording capability will be forbidden in all First Judicial District buildings, courthouses, or offices, even for people who have a prescription. Other devices with recording capabilities, like cell phones and laptops, continue to be allowed inside courtrooms but must be powered off and stowed away.

"Since these glasses are difficult to detect in courtrooms, it was determined they should be banned from the building," said court spokesperson Martin O'Rourke.

The ban is meant to prevent potential witness and juror intimidation from threats of recording, O'Rourke said. It is unclear whether Philadelphia courts will implement extra screening measures to determine if a person's glasses violate the rule.

If someone were caught attempting to bring smart eyewear into those spaces, they could be barred entry or removed from the building, and arrested and charged with criminal contempt, O'Rourke said. The only potential exceptions would be if a judge or court leadership had granted prior written permission to a smart glasses user.

Philadelphia is part of an early wave of court systems implementing smart eyewear bans, joining systems in Hawaii, Wisconsin, and North Carolina. While most courts already ban any kind of recording device inside the courtroom, it's not yet common to have explicit bans on smart eyewear or to completely bar it from the building.

Without direct bans in place, judges typically have latitude to rule on what devices are allowed inside their courtroom. During the recent trial in Los Angeles that found Google and Meta liable for social media causing harm, Meta CEO Mark Zuckerberg and his colleagues wore their company's smart eyewear into the courtroom. The judge in that case ordered them to remove the glasses, and threatened to hold anyone who had used them to record court proceedings in contempt of court.

Google Glass was a frequent butt of jokes after it was introduced over a decade ago, but reasonably affordable and available smart glasses have finally begun catching on within the last year.

Eyewear giants Ray-Ban and Oakley both now sell glasses integrated with Meta AI and audio and visual recording for less than $500. The new glasses were the focus of each company's recent Super Bowl ad campaigns, and the companies reportedly hawked 7 million pairs in 2025. They have a head start on Apple, which is planning to join the market with its own smart glasses in 2027.

...

Read the original on www.inquirer.com »

4 351 shares, 20 trendiness

How the AI bubble bursts

The catalysts for a crash are already laid out, and it could happen sooner than most expect. AI is here to stay. If used right, chances are it will make us all more productive. That, on the other hand, does not mean it will be a good investment.

Magnificent 7 companies are increasing capex to their biggest levels ever to differentiate their tech from each other and from the big AI labs, but the key realization is that they don't have to spend it to win. It's a defensive move for them: if they commit $50B, OpenAI and Anthropic need to go raise $100B each to stay competitive, which makes them reliant on investors' money. As the numbers get bigger, the number of funds that can write checks of the size required gets smaller. And many of them are now getting bombed in the Gulf.

This is the reason there's a push for IPOs: it's the only option left to keep the funding coming.

Taking this into account, Google is extremely well positioned to weather the storm. When they announce capex, they don't spend it overnight. They can simply deploy month by month until their competitors struggle to raise and are forced to capitulate. At that point they can just ramp down the spending and declare victory in a cornered market. They don't need capex; they just need to make it very clear to everyone that nobody can outspend them. It is hard to picture as the numbers get so big, but Alphabet (Google's parent) is ten times more valuable than the biggest military company.

This also has a great implication for the Mag 7, especially Google: their capex will be a lot smaller in practice than projected, and as investors hate to see high capex in tech, the market will probably reward that if it materializes.

Apple didn't even have to pretend: their strategy of waiting on the sidelines (while selling Mac Minis) for someone to come up with a good-enough model and just buying it when it's done seems to be working. They may not even do that; they are now hinting at charging models for being available on Siri. Amazon is hedged with an Anthropic investment, and Meta is spending like there's no tomorrow.

We're hitting the worst-case scenarios for the big AI labs: energy, their biggest expense, is at multi-year highs; capital from the Gulf is not available for obvious reasons; there are serious concerns about a rate hike; and RAM prices are crashing because new models won't need as much, but labs already bought them at sky-high prices. And that last innovation came from their biggest competitor, Google.

Anthropic is already in a push to reduce costs and increase revenue. If investor money dries up, they will be forced to cut their losses and pass the true costs to their users. The question now is whether customers will be willing to pay up. Independent reports state that Claude's metered models are priced 5x higher than what subscribers effectively pay, and nobody is sure if even the metered pricing is profitable. In investing, stories are way more exciting than reality: a company losing money but growing like crazy is an easier sell than a huge company losing money or with tight margins. Raising prices will for sure decrease demand, and that risks killing the growth story. And even if revenue keeps growing, it doesn't matter if there are no margins - growing revenue without profits just means burning cash faster, especially when competing against companies that can offer the same product as a loss leader bundled into their cloud platforms.

It's also worth mentioning that Claude's most expensive subscription plans (Max and Max 5x, priced at $100 and $200 respectively) do not allow for yearly payments, hinting prices will go up.

OpenAI is struggling to monetize. They turned to showing ads in ChatGPT, something Sam Altman once called "a last resort", while Anthropic is crushing them with the more profitable corporate customers and software engineers. Their shopping feature flopped and they shut down Sora, both supposed to be revenue drivers.

I wouldn't be surprised at all if in the next couple of quarters we see OpenAI looking for an exit. It will be interesting because the sizes are now so big that we will probably know all the details. The most likely buyer is Microsoft: they already own a lot of it, and because of that, they are the most interested in showing a win. Sam Altman managed to get Microsoft so involved in OpenAI that making sure it lands on its feet is a Microsoft problem to solve. But would shareholders vote to spend 22% of an established company's market cap to rescue a money-burning AI lab that has lost most of its differentiators?

And independent of whether Microsoft makes money or not on their OpenAI endeavor, it kills the story: they were betting the whole growth story on AI, and if that doesn't work out, then what's left to justify a high stock price? They lose a big customer for their cloud services. Even worse considering that now, using the AI they helped fund, everyone can compete with their sub-par products. GitHub is a good candidate for disruption, and that'd be just the start.

You may think that you're not affected by the big labs struggling. Hell, you may even be happy that they won't be replacing your job after all. But that is far from reality.

Investments are now so big that writing them off would certainly hurt public companies' balance sheets and their growth prospects. This will drag down the whole market, reducing valuations and slowing M&A, which further dries up VC money and slows down investments. Just like it happened in 2022.

And this has even more ramifications: pension funds around the world will take a hit. Datacenters that were built with the expectation of growth will now be under capacity, because training is the most compute-intensive part of a model, and if there's no capital to train a new one, they won't be needed. GPUs then sit idle while their value goes down as there's no demand. Some committed GPUs may never get delivered, or even manufactured. Investment drying up is a disaster for Nvidia, now the biggest company in the world.

It could happen that datacenters are not underused, but they get to charge their customers a much lower rate than they projected before building, so everyone benefits from AI but them.

Building a datacenter is supposed to be a "safe" investment in normal times, so banks give private credit and mortgages to finance them. A write-off of those assets means that banks start realizing losses, hurting their capacity to loan, and some may even be forced to liquidate, just like we saw in 2023. And all this assumes we don't get disruptions in manufacturing in Taiwan or global supply chains.

Of course, the content of this article is highly speculative; it may end up being that demand for models is just so high it offsets every other problem I lay out. But almost all innovations go through a boom and bust cycle, and I don't see a reason this is an exception.

Thanks to Javier Silveira and Augusto Gesualdi for reviewing drafts of this post.

...

Read the original on martinvol.pe »

5 332 shares, 19 trendiness

The Curious Case of Retro Demo Scene Graphics

My whole art department is run on tracing paper. Why re-invent the wheel?

The demo scene has a peculiar view on copyright. It roughly boils down to a system of effort - effort in ideas, effort in craft - where the scene polices itself and punishes sceners that steal outright from other sceners. Theft from the outside world, however, is often taken lightly - especially when it comes to graphics.

Early pixel art on the scene was almost always copied (or, more correctly, plagiarized) from other sources. In particular, fantasy- and science fiction-related art was immensely common. Fantasy artists Boris Vallejo and Frank Frazetta, as well as raunchy robot airbrusher Hajime Sorayama, were popular favourites.

Three different Amiga pixel art interpretations of Frank Frazetta's Death Dealer. All images on this page are clickable and link to non-lossy versions when available.

This pixel art wasn't about originality as much as it was about craft. Scanners and digitizers were far too expensive for a teenager, and the images produced by early consumer models were crude and lackluster. Making an image truly pop with detail and sharpness required hand-pixelling, which is a very involved process. First, there was the copying of a source outline by hand, using a mouse (or joystick, on the C64), and then came aspects such as conveying details in a limited resolution (typically around 320x256 pixels), picking a limited indexed palette (usually 16 or 32 colours), and manually adding dithering and anti-aliasing. It was painstaking work.

The TV painting tutorials by prolific landscape artist Bob Ross haven't become an online phenomenon because his hundreds of mountainscapes are era-defining sensations (though they are certainly nice to look at), but because people enjoy watching his creative process and technique, mastered to perfect effortlessness. This notion is echoed in any carefully hand-pixelled work, where the craft itself can be discerned and enjoyed on its own, even if the subject matter is yet another Frazetta copy. Teenage boys will be teenage boys, and their choice of source material all too predictable. The real value of early scene pixels came from the invested labour, not from whether they constituted a unique composition or otherwise fresh idea.

Owning Up, or Not

Some scene artists were very upfront about copying. Bisley's Horsys is clearly a Simon Bisley copy, and calling a picture Vallejo (NSFW!) is self-explanatory. In the slide show Seven Seas, artist Fairfax clearly lists sources and inspirations in the included scroll text. Others were more quiet about it, but the prevailing sentiment among scene artists at that point in time was that copying was not only allowed, but almost expected.

Pixel artist Lazur's 256-colour rendition (left) of a photo by Krzysztof Kaczorowski (right). A masterful copy showcasing the sharpness, details and vibrancy achievable with pixel techniques. Of special note is the use of dithering on the matchbox striker and the frontmost man's sweater, creating an almost tactile sense of texture.

Just like in traditional painting, some pixel artists had a natural knack for copying by freehand, whereas others resorted to more fanciful methods. Some used grids, overlaying the original image and then reproducing the same grid on screen to retain proportions. Others traced outlines onto overhead projector sheets, which - thanks to the nature of CRT monitors - were easy to stick to the computer screen and trace under. Today, the use of drawing tablets is much more likely. In the end, however, they all had to fill, shade, dither and anti-alias by hand.

Scene artists soon perfected the pixel art translation, and could accomplish astonishing results with very limited resources. Some started adding their own flair to their copies: a few details here and there, perhaps combining several sources into a new composition. This grind of copying and refining is often a great way to learn, and people in their late teens may be forgiven for wanting to emulate their idols without including the proper credits.

Some time around 1995, scanners had become both cheaper and better, and the Internet opened up a world of new image sources. Combined with cheap, powerful PCs and widespread piracy of Adobe Photoshop, this allowed for new ways of creating digital art. Clever rascals started doing pure scans and passing them off as their own work, but these were still often inferior in quality to the handmade pixel art copies. With time, however, paintovers and tweaked scans could often be passed off as craft to an unsuspecting audience. Around this time, the No Copy? web page was launched, causing disillusionment among many graphics fans who weren't familiar with how common copying in fact was.

At its core the scene is a meritocracy, even if the source of merit may sometimes seem strange to outsiders. Scanning and retouching was (and remains) considered low status and cheating, and many artists and other sceners complained (and still complain) loudly when finding someone out. Before 1995, complaints about scanning weren't usually about copied source material, but about the lack of craft: the process still mattered more than originality and imagination.

Around the turn of the millennium, this attitude started to shift. Many sceners were now well into their twenties or thirties, and with maturity came a thirst for original work - both among artists and audience. Some artists, however, had a hard time breaking free from the comfort of copying or, worse, simply converting. The practice continued, but a greater stigma was now attached to it. Hence, Vallejo was discarded in favor of material that could more safely be passed off as one's own. Today's various art sharing websites have made this easier than ever, but that also means plagiarizing other hobbyist artists, which has a different sort of tinge to it than teenagers ripping off big-name fantasy painters.

Steve Jobs once said that good artists copy and great artists steal, and attributed the quote to Picasso. As with many good quotes, it's often referred to out of context, and without much thought. The actual source seems to be T. S. Eliot, who wrote: "Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different. The good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn; the bad poet throws it into something which has no cohesion."

It's easy to misconstrue Jobs' version of the quote as a carte blanche for simply reproducing someone else's work, but what Eliot describes is how artists understand art, and how they incorporate inspiration from other works into their own: he's not suggesting that great poets copy Shakespeare verbatim and pass it off as theirs. In fairness, neither did Jobs: at their height, Apple decidedly improved what they stole - especially the GUI.

The distinction between copying and original work spans a gray area, and when pressed about copying, demo scene artists will usually mumble something about how "everyone uses references". For people not generally involved in painting, this might sound plausible enough, but references aren't the same as making copies of pre-existing art. References are an aid for visually understanding a subject and achieving realism, because nobody can perfectly draw, say, a train from memory alone.

Hergé was a stickler for realism and often did near-perfect reproductions of references in Tintin - but always in his own distinct "ligne claire" style.

Some will use existing photos, some will walk down to the local train station with a camera, others still will bring a sketchbook and make detailed pencil studies. If striving for accuracy and detail, photo references are invaluable. Sometimes an artist will work from a photo they've taken or commissioned themselves, thus being in control of the subject and composition. Anders Zorn and Pascal Dagnan-Bouveret are two of a plethora of classic painters who used photo references for some of their most recognizable works; Zorn himself was an avid photographer.

Norman Rockwell demonstrating his use of a Balopticon.

Famous Americana illustrator Norman Rockwell frequently used a Balopticon to project photos onto a canvas and trace the projection. He described this technique with no small amount of self-deprecation: "The Balopticon is an evil, inartistic, habit-forming, lazy and vicious machine. I use one often - and am thoroughly ashamed of it. I hide it whenever I hear people coming." Yet his personal style is unmistakable and the photo compositions were his own. Dutch Golden Age master Vermeer is suggested to have used a similar technique with a camera obscura.

The key difference between a reference and a copy is that in a copy, the source is a work of art by someone else, and the original artist's subject, style, intent, composition and choices are transferred onto the new work. Perfectly reproducing the Mona Lisa may take time and skill, but the reproduction is a copy, not an original work based on a reference. Trying to pass it off as your own is plagiarism, and this is what most sceners actually mean when they say "copy".

To the left is a skillful 1994 pixel rendition by Tyshdomos of the caricature to the right, by Sebastian Krüger. The original was no doubt made using at least one reference. The pixel art version, while showing much more than just a shallow understanding of the source material, is still a copy of the style, intent and choices of Krüger. Tyshdomos usually credited the original artist in his images.

As op­posed to the more tra­di­tional pla­gia­rism on the scene, pre-ex­ist­ing dig­i­tal im­ages re­quire no te­dious man­ual trans­fer us­ing a mouse. It’s sim­ply a mat­ter of scal­ing them down to a suit­able retro res­o­lu­tion and adding a sprin­kle of your own dither­ing to make it seem more hand­made. Suddenly - as with scan­ning - the grind of the copy is no longer a fac­tor, and the craft is seem­ingly re­duced to cov­er­ing up the pic­ture’s ori­gin.

In the pre­sent day, typ­i­cal retro sceners are in their for­ties and fifties and have fam­i­lies, es­tab­lished ca­reers and com­fort­able mid­dle class salaries. The scene is no longer a place for cut­throat teenage so­cial games, but an in­dul­gent hobby and time sink of choice. It’s about cre­at­ing for the sake of cre­at­ing, for the love of the craft, for the joy of the process. It’s about get­ting bet­ter at some­thing that is, ul­ti­mately, ut­terly in­con­se­quen­tial in the grand scheme of things. It’s even point­less as a mid­dle class sta­tus marker: few peo­ple brag to their neigh­bours about hav­ing coded a tex­ture-mapped cube in a pe­cu­liar graph­ics mode on a long for­got­ten home com­puter.

Most pixel artists have long since left the bla­tant pla­gia­rism be­hind and are now ac­com­plished, ma­ture cre­ators. They’re ca­pa­ble of think­ing up orig­i­nal ideas and re­al­iz­ing them in their own, unique styles. As with any hobby, there’s still sta­tus to be had among the in-group, but the strict peck­ing or­der of teenagers has been re­placed with a laid-back at­ti­tude of friend­ship, shar­ing and mu­tual ap­pre­ci­a­tion of the de­mo­mak­ing craft in gen­eral.

Despite this, there are graph­ics artists who con­tinue to pla­gia­rize, and those who’ve started to rely on gen­er­a­tive AI. Some are up­front about this, too, and clearly la­bel AI gen­er­ated im­ages as such. Others tell out­right lies or are very quiet or avoidant when dis­cussing their process. Often, there’s a bit of man­u­ally added pix­els in these pic­tures for good mea­sure, like a sprig of pars­ley on a mi­crowave meal be­ing passed off as a labour of love.

Just like with copy­ing, there’s an on­go­ing dis­cus­sion about AI on the scene, and there are as many dif­fer­ent views as there are sceners. The gen­eral con­sen­sus seems to be in the camp of hon­or­ing the craft, or at the very least prac­tic­ing trans­parency about the cre­ative process. This is re­flected in the rules of most demo par­ties, which of­ten ex­plic­itly state that the use of gen­er­a­tive AI is for­bid­den - a rule that is seem­ingly hard to en­force and fre­quently bro­ken.

Elements of Green, orig­i­nal pixel art by Prowler. In this time­lapse we can fol­low the process from a pen­cil sketch (perhaps based on photo ref­er­ences) to fin­ished piece, via both dig­i­tal paint­ing and tra­di­tional pix­elling.

Some sceners claim that the end re­sult is all that mat­ters, and that dis­cussing or even dis­clos­ing the process is point­less. Another view is that gen­er­a­tive AI is just an­other tool, like a paint pro­gram, and that its us­age is a nat­ural pro­gres­sion for a cul­ture that has al­ways been about ex­plor­ing the in­ter­sec­tion of dig­i­tal tech­nol­ogy and art.

The Joy of Not Painting?

The scene - like creative communities in general - has always been full of contradictions and paradoxes, in views as well as methods. In some cases, what could be considered plagiarism is the central point of an entire body of work: Batman Group is a demo group that almost exclusively makes Batman-themed demos, showcasing astonishing skill in raw tech as well as aesthetics and storytelling. In other cases, it may be a question of satire or utilizing a culturally powerful pastiche. One of my own favourite demos of all time, Deep - The Psilocybin Mix, makes heavy use of (very apparent) photo montages. These are things both artists and audiences have to live and deal with on a case-by-case basis.

For me per­son­ally, gen­er­a­tive AI ru­ins much of the fun. I still en­joy cre­at­ing pixel art and mak­ing lit­tle an­i­ma­tions and demos. My own cre­ative process re­mains sat­is­fy­ing as an iso­lated ac­tiv­ity. Alas, ob­vi­ous AI gen­er­ated im­agery - as well as mid­dle-aged men pla­gia­riz­ing other, some­times much younger, hob­by­ist artists - makes me feel dis­ap­pointed and empty. It’s not as much about ef­fort as it is about the loss of style and per­son­al­ity; soul, if you will. The re­sult is de­face­ment, to echo T. S. Eliot, rather than in­spired im­prove­ment. Even in more elab­o­rate AI-based works, it’s hard to tell where the prompt ends and the pix­elling be­gins.

In the com­mer­cial world of late stage cap­i­tal­ism, I’d ex­pect noth­ing less than cut­ting cor­ners. For me, the scene is about some­thing else. It’s a place of refuge from the con­stant churn of in­creased ef­fi­ciency, and an es­cape from the sick­en­ing void of the on­line at­ten­tion econ­omy. It’s where we can spend months putting yet an­other row of mov­ing pix­els on the screen to break some old record, be­cause the plat­form does­n’t change and no­body is pay­ing us to be quick about it. It’s where I in­stinc­tively want to go for things that aren’t the re­sult of a few min­utes in front of DALL-E. I can get that every­where else, at any time.

Farting around with Amigas in 2026 means ac­tively choos­ing to make things harder for the sake of mak­ing things harder. Making that choice and still out­sourc­ing the bulk of the craft and cre­ative process is like claim­ing to be a pas­sion­ate hobby cook while serv­ing pro­fes­sion­ally catered din­ners and pre­tend­ing they’re your own con­coc­tions.

There’s not much to be done about it, be­cause the scene has no gov­ern­ing body or court of ap­peals - and I dearly hope it stays that way. I just can’t wrap my head around the point of us­ing AI in this set­ting: It feels an­ti­thet­i­cal to a cul­ture that so adamantly cel­e­brates cre­ativ­ity, tech­ni­cal lim­i­ta­tions, ex­tremely spe­cial­ized skills, and anti-com­mer­cial shar­ing of art and soft­ware.

What’s in­ter­est­ing is that those most re­liant on AI and pla­gia­rism seem to feel the same way. Otherwise, they would­n’t be so se­cre­tive about it.

...

Read the original on www.datagubbe.se »

6 298 shares, 11 trendiness

New Apple Silicon M4 & M5 HiDPI Limitation on 4K External Displays


Starting with the M4 and including the new M5 generations of Apple Silicon, macOS no longer offers or allows full-resolution HiDPI 4K modes for external displays.

The max­i­mum HiDPI mode avail­able on a 3840x2160 panel is now just 3360x1890 - M2/M3 ma­chines did not have this lim­i­ta­tion.

With this re­gres­sion Apple is leav­ing users to choose be­tween:

Full screen real estate at 4K (3840x2160) with blurry text due to HiDPI being disabled.

Reduced screen real estate at 3.3K (3360x1890) with sharp text (HiDPI) but significantly less usable working space, and macOS's UI looking ridiculously oversized.

The DCP (Display Coprocessor) re­ports iden­ti­cal ca­pa­bil­i­ties on both M2 Max and M5 Max for the same dis­play. The M5 Max hard­ware sup­ports 8K (7680x4320) at 60Hz per Apple’s own specs. However, the M4/M5 gen­er­a­tion ap­pears to have in­tro­duced a new per-sub-pipe frame­buffer bud­get sys­tem (IOMFBMaxSrcPixels) that caps the sin­gle-stream scaler path (sub-pipe 0) at 6720 pix­els wide - ex­actly the back­ing store width for 3360x1890 HiDPI. The M2 Max used a com­pletely dif­fer­ent ar­chi­tec­ture with a flat per-con­troller bud­get of 7680 pix­els wide, which is why it worked.

Both machines report identical DCP parameters for the LG display.

What: Wrote a dis­play over­ride plist to /Library/Displays/Contents/Resources/Overrides/DisplayVendorID-1e6d/DisplayProductID-7750 con­tain­ing scale-res­o­lu­tions en­tries for 7680x4320 HiDPI.

Result: No ef­fect on M5 Max. The iden­ti­cal plist pro­duces 3840x2160 HiDPI on M2 Max. WindowServer on M5 Max re­fuses to enu­mer­ate the mode re­gard­less of plist con­tent.

The over­ride plist that works on M2 Max:
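As a sketch of what such an override typically contains - assuming the commonly documented scale-resolutions format, where each entry is a data blob packing the backing-store width and height as big-endian 32-bit integers (the exact plist used in this experiment may differ) - generated with Python's plistlib:

```python
import plistlib
import struct

def scale_res_entry(width: int, height: int) -> bytes:
    # Each scale-resolutions entry is a <data> blob: the backing-store
    # width and height packed as big-endian 32-bit integers.
    return struct.pack(">II", width, height)

# Hypothetical override requesting a 7680x4320 backing store
# (i.e. 3840x2160 HiDPI). Vendor/product IDs live in the file path
# (DisplayVendorID-1e6d/DisplayProductID-7750), not the plist body.
override = {
    "scale-resolutions": [scale_res_entry(7680, 4320)],
}
print(plistlib.dumps(override).decode())
```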

What: Wrote a patched EDID into the over­ride plist’s IODisplayEDID key with:

Result: No effect with these values. However, waydabber (the incredibly helpful BetterDisplay developer) has confirmed that software EDID overrides can work on M4 - he got an 8K framebuffer on a 4K TV by adding a valid 8K timing and defining it as the native resolution. The catch: even with a correct override, a 4K panel can't actually accept an 8K signal, so this confirms the mechanism (scaled modes derive from the system's idea of native resolution) without providing a practical fix.

What: Created a patched EDID with VIC 199 (7680x4320@60Hz) added to the CEA Video Data Block, keep­ing the pre­ferred de­tailed tim­ing at 3840x2160. Successfully flashed to the LG mon­i­tor’s EEPROM via BetterDisplay.

Result: The DCP read VIC 199 from the hard­ware EDID and up­dated its re­ported ca­pa­bil­i­ties: MaxW changed from 3840 to 7680, MaxH from 2160 to 4320, and MaxActivePixelRate to 1,990,656,000. The DCP also al­lo­cated 2 pipes (PipeIDs=(0,2), MaxPipes=2) as it would for a real 8K dis­play. However, the sub-pipe 0 frame­buffer bud­get (MaxSrcRectWidthForPipe) re­mained at 6720, and no 3840x2160 HiDPI mode ap­peared.

A fur­ther at­tempt added a DisplayID Type I Detailed Timing for 7680x4320@30Hz marked as pre­ferred and na­tive. This did gen­er­ate a 3840x2160 scale=2.0 mode in the CG mode list. However, when se­lected, ma­cOS at­tempted to out­put 7680x4320 on the wire (since the EDID de­clared it as a sup­ported out­put mode), which the LG could not dis­play. A DisplayID Display Parameters block (declaring 7680x4320 as na­tive pixel for­mat with­out cre­at­ing an out­put tim­ing) did not gen­er­ate any new modes.

What: Created a patched EDID bi­nary with boosted range lim­its only (keeping pre­ferred tim­ing at na­tive 3840x2160 to avoid break­ing dis­play out­put), at­tempted to flash to the LG mon­i­tor’s EEPROM via BetterDisplay’s Upload EDID fea­ture.

Result: The range-limits-only flash did not change any DCP parameters. The DCP derives MaxActivePixelRate from the preferred timing's pixel clock, not from the range limits. A subsequent flash with VIC 199 added to the Video Data Block was successful (see the “EDID Hardware Flash” section above).
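One mechanical detail common to all of these EDID patches: every 128-byte EDID base block or CEA extension block must sum to zero mod 256, so after changing any byte the block's final checksum byte has to be recomputed or the sink and driver will reject it. A quick helper (hypothetical name, standard checksum rule):

```python
def fix_edid_checksum(block: bytes) -> bytes:
    # EDID base blocks and CEA extension blocks are 128 bytes, and all
    # 128 bytes must sum to 0 mod 256. Recompute the final byte after
    # any patch (e.g. adding VIC 199 to the Video Data Block).
    assert len(block) == 128, "EDID blocks are exactly 128 bytes"
    checksum = (-sum(block[:127])) % 256
    return block[:127] + bytes([checksum])

# Example: patch one byte of a block, then repair the checksum.
patched = bytearray(128)
patched[2] = 0xC7  # hypothetical edit (VIC 199 = 0xC7)
patched = fix_edid_checksum(bytes(patched))
assert sum(patched) % 256 == 0
```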

What: Attempted to mod­ify the DCPs DisplayHints dic­tio­nary and ConnectionMapping ar­ray di­rectly in the IOKit reg­istry us­ing IORegistryEntrySetCFProperty, tar­get­ing higher MaxW, MaxH, and MaxActivePixelRate val­ues.

Result: The DCP dri­ver ex­plic­itly re­jects user­space prop­erty writes with kIORe­tur­nUn­sup­ported (kern_return=-536870201). These prop­er­ties are owned by the ker­nel-level AppleDisplayCrossbar dri­ver and can­not be mod­i­fied from user­space.

What: Used IOServiceRequestProbe to trig­ger the DCP to re-read dis­play in­for­ma­tion af­ter writ­ing over­ride plists.

Result: No ef­fect on mode enu­mer­a­tion. The DCP re-reads from the phys­i­cal dis­play, not from soft­ware over­rides.

What: Deleted ~/Library/Preferences/ByHost/com.apple.windowserver.displays.*.plist and at­tempted to restart WindowServer. Also per­formed a full re­boot.

Result: kil­lall WindowServer on ma­cOS 26 does not ac­tu­ally restart WindowServer (no dis­play flicker, no ses­sion in­ter­rup­tion). Full re­boot with the over­ride plist in place still did not pro­duce the 3840x2160 HiDPI mode. The cache was not the is­sue.

What: Disconnected the third dis­play (U13ZA) to test whether the DCPs band­width bud­get across dis­play pipes was the con­straint.

Result: No ef­fect. With only 2 dis­plays (LG + built-in), the mode list re­mained iden­ti­cal. The lim­i­ta­tion is not re­lated to the num­ber of con­nected dis­plays.

What: Considered switch­ing from USB-C/DisplayPort to HDMI.

Result: Not at­tempted; HDMI 2.0 has less band­width (14.4 Gbps vs 25.92 Gbps on DP 1.4 HBR3), so would be the same or worse.

What: Used SLConfigureDisplayWithDisplayMode from the pri­vate SkyLight frame­work to at­tempt to di­rectly ap­ply a 3840x2160 HiDPI mode (7680x4320 pixel back­ing, scale=2.0) to the LG dis­play. The mode was sourced from both the CG mode list and from other dis­plays.

Result: Returns er­ror code 1000 when the mode is not in the dis­play’s own mode list. The SkyLight dis­play con­fig­u­ra­tion API val­i­dates modes against the same DCP-derived mode list as WindowServer. There is no pri­vate API path to by­pass the mode list val­i­da­tion.

Where the limit is applied

The DCP re­ports iden­ti­cal ca­pa­bil­ity pa­ra­me­ters on both ma­chines - MaxActivePixelRate, MaxW, MaxH, MaxTotalPixelRate all match. These come from the dis­play’s EDID, so that’s ex­pected.

The dif­fer­ence shows up in WindowServer’s mode list. On M2 Max, CGSGetNumberOfDisplayModes in­cludes 3840x2160 at scale=2.0. On M5 Max, with the same DCP pa­ra­me­ters and the same over­ride plists, that mode does­n’t ex­ist.

The IOMFBMaxSrcPixels prop­erty on the IOMobileFramebufferShim IOKit ser­vice ex­poses frame­buffer size bud­gets. The M2 Max and M5 Max use fun­da­men­tally dif­fer­ent struc­tures here, which is the root cause of the re­gres­sion.

On the M2 Max, every external display controller gets a flat MaxSrcRectWidth of 7680 and MaxSrcRectTotal of 33,177,600 (exactly 7680 x 4320). The LG is assigned to PipeIDs=(1). With a 7680-pixel budget, 3840x2160 HiDPI (7680x4320 backing store) fits comfortably.

The M5 Max restructured to per-sub-pipe budgets within each controller: MaxSrcRectWidthForPipe = (6720, 7680, 7680, 7680).

The 4 val­ues in MaxSrcRectWidthForPipe are sub-pipes within each dis­play con­troller, not sep­a­rate dis­play out­puts. A sin­gle-stream 4K dis­play only uses sub-pipe 0. Sub-pipes 1-3 are for multi-pipe con­fig­u­ra­tions (8K dis­plays use 2 sub-pipes si­mul­ta­ne­ously, which is why an 8K EDID causes the DCP to as­sign PipeIDs=(0,2) with MaxPipes=2).

Single-stream out­put (used by all stan­dard dis­plays)

Sub-pipe 0’s bud­get of 6720 pix­els lines up ex­actly with the ob­served cap: 3360x1890 HiDPI needs a 6720x3780 back­ing store. For 3840x2160 HiDPI, the back­ing store would need to be 7680 pix­els wide. Sub-pipes 1-3 have this bud­get, but they’re only ac­ces­si­ble in multi-pipe mode for dis­plays that gen­uinely out­put above 4K.
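The arithmetic is easy to verify. A tiny sketch of the fit test implied above (hypothetical function, just restating the width and total-pixel constraints):

```python
def hidpi_fits(logical_w, logical_h, scale, budget_w, budget_total=None):
    # A HiDPI mode needs a backing store of (logical * scale) pixels in
    # each dimension; the single-stream scaler path rejects anything
    # wider than its sub-pipe budget (and, on M2, above its total budget).
    w, h = logical_w * scale, logical_h * scale
    if budget_total is not None and w * h > budget_total:
        return False
    return w <= budget_w

M5_SUBPIPE0 = 6720  # M5 Max single-stream sub-pipe budget
M2_FLAT = 7680      # M2 Max flat per-controller budget

print(hidpi_fits(3360, 1890, 2, M5_SUBPIPE0))            # True: 6720-wide backing store
print(hidpi_fits(3840, 2160, 2, M5_SUBPIPE0))            # False: needs 7680
print(hidpi_fits(3840, 2160, 2, M2_FLAT, 33_177_600))    # True: fits the M2 budget
```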

This prop­erty is set by the ker­nel-level IOMobileFramebufferShim dri­ver and can’t be mod­i­fied from user­space.

The budget is fixed at boot

Testing con­firmed that MaxSrcRectWidthForPipe is set when the dri­ver loads and does not change at run­time, re­gard­less of what you do:

It’s pos­si­ble the dri­ver reads EDID con­tent dur­ing early boot to de­ter­mine these al­lo­ca­tions (as wayd­ab­ber’s analy­sis sug­gests), but that has­n’t been con­firmed with a cold boot test us­ing a mod­i­fied EDID yet.

“Generally 3840x2160 HiDPI is not available with any M4 generation Mac on non-8K displays due to the new dynamic nature of how the system allocates resources. There might be exceptions maybe - when the system concludes that no other displays could be attached and there are resources left still for a higher resolution framebuffer. But normally the system allocates as low framebuffer size as possible, anticipating further displays to be connected and saving room for those.”

The IOMFBMaxSrcPixels data fits this de­scrip­tion. The M5 Max sup­ports up to 4 ex­ter­nal dis­plays, and the GPU dri­ver pre-al­lo­cates frame­buffer bud­gets across all pipes at boot to cover the chip’s max­i­mum sup­ported dis­play con­fig­u­ra­tion. Pipe 0 gets a re­duced bud­get of 6720 to leave room for dis­plays that could be plugged in. Even in clamshell mode with only the LG con­nected, the bud­get stays at 6720 - the dri­ver does­n’t care how many dis­plays are ac­tu­ally pre­sent.

- The DCP reports identical capabilities on M2 Max and M5 Max (same MaxW, MaxH, MaxActivePixelRate)
- The M2 Max uses a flat per-controller framebuffer budget (MaxSrcRectWidth=7680), giving every external display enough backing store width for 3840x2160 HiDPI
- The M5 Max restructured to per-sub-pipe budgets (MaxSrcRectWidthForPipe=(6720, 7680, 7680, 7680)), where the single-stream sub-pipe (the only one a 4K display can use) is capped at 6720
- This caps the backing store width and therefore caps HiDPI at 3360x1890 on M5 Max
- Disconnecting other displays, switching ports, or closing the laptop lid doesn't change the sub-pipe budgets
- Adding VIC 199 (8K) to the EDID changes DCP-reported MaxW/MaxH but doesn't affect the sub-pipe budget
- Adding a DisplayID 7680x4320 timing creates a 3840x2160 scale=2.0 mode, but macOS tries to output 8K on the wire (treating it as a real output mode rather than a scaling mode), which the 4K panel can't display

The scaled res­o­lu­tion modes on M4/M5 are de­rived from what­ever the sys­tem be­lieves is the dis­play’s na­tive res­o­lu­tion. On M2/M3, the sys­tem would gen­er­ate HiDPI modes up to 2.0x the na­tive res­o­lu­tion (so 3840x2160 na­tive got you a 7680x4320 back­ing store). On M4/M5, the sin­gle-stream sub-pipe bud­get caps this at around 1.75x. Whether this is a hard­ware con­straint in the new scaler ar­chi­tec­ture or a con­ser­v­a­tive firmware al­lo­ca­tion pol­icy is un­clear with­out Apple’s doc­u­men­ta­tion - but the ar­chi­tec­tural change from M2s flat bud­get to M5s sub-pipe bud­get is the di­rect cause of the re­gres­sion.

What could fix this

This needs a change from Apple in the IOMobileFramebufferShim dri­ver’s sub-pipe bud­get al­lo­ca­tion. Specifically, sub-pipe 0’s MaxSrcRectWidthForPipe needs to be 7680 in­stead of 6720 when a 3840x2160 dis­play is con­nected. A few ways they could ap­proach it:

- Raise sub-pipe 0's budget to 7680 for external display controllers (matching the M2 Max's flat allocation)
- Dynamically reallocate sub-pipe budgets based on actually connected displays and their capabilities

The M2 Max’s flat per-con­troller bud­get of 7680 proves the dis­play con­troller hard­ware can han­dle it. The M5 Max’s multi-pipe sub-pipes (1-3) also have 7680, but these are only used for 8K multi-stream out­put. I’ve filed Apple Feedback FB22365722.

A 5K or 8K panel may not hit the ex­act same limit since its EDID na­tive res­o­lu­tion is high enough that 1.75x scal­ing still pro­vides a us­able back­ing store.

Commands to re­pro­duce this on any Mac. All ex­cept #3 work with­out spe­cial per­mis­sions. Command #6 is the most use­ful sin­gle di­ag­nos­tic for this is­sue.

# 1. DCP rate limits and native caps per display
ioreg -l -w0 | grep -o '"MaxActivePixelRate"=[0-9]*\|"MaxW"=[0-9]*\|"MaxH"=[0-9]*' \
  | paste - - - | sort -u

# 2. System profiler display summary
system_profiler SPDisplaysDataType

# 3. All HiDPI modes for a display (requires Screen Recording permission)
#    Use BetterDisplay, SwitchResX, or any tool that calls
#    CGSGetNumberOfDisplayModes / CGSGetDisplayModeDescriptionOfLength.
#    Example output format shown below.

# 4. Display connection details and DisplayHints
ioreg -l -w0 | grep -B5 -A2 'MaxActivePixelRate' | grep -v EventLog

# 5. ConnectionMapping (per-pipe allocation)
ioreg -l -w0 | grep '"ConnectionMapping"'

# 6. Per-pipe framebuffer budgets (the key constraint on M4/M5)
ioreg -l -w0 | grep '"IOMFBMaxSrcPixels"'

Note: Commands 2, 4, 5 were cap­tured with­out the LG con­nected. The mode list (command 3) was cap­tured with the LG con­nected in a sep­a­rate ses­sion.

Note: on the M2 Max, 3840x2160 at scale = 2.0 is present as the highest available HiDPI mode.

When the LG is connected, the M2 Max reports values identical to the M5 Max:

Graphics/Displays:
  Apple M5 Max:
    Chipset Model: Apple M5 Max
    Type: GPU
    Bus: Built-In
    Total Number of Cores: 40
    Vendor: Apple (0x106b)
    Metal Support: Metal 4
  Displays:
    LG HDR 4K:
      Resolution: 6720 x 3780
      UI Looks like: 3360 x 1890 @ 60.00Hz
      Main Display: Yes
      Mirror: Off
      Online: Yes
      Rotation: Supported
    Color LCD:
      Display Type: Built-in Liquid Retina XDR Display
      Resolution: 3456 x 2234 Retina
      Mirror: Off
      Online: Yes
      Automatically Adjust Brightness: Yes
      Connection Type: Internal
    U13ZA:
      Resolution: 3840 x 2400 (WQUXGA)
      UI Looks like: 1920 x 1200 @ 60.00Hz
      Mirror: Off
      Online: Yes
      Rotation: Supported

...

Read the original on smcleod.net »

7 290 shares, 13 trendiness

15 Years of Forking

Fifteen years ago today, I posted a thread on the Overclock.net forums. I was sixteen, I had an HP Compaq TC4400 that I’d convinced my parents would “improve my school work”, and I was frustrated that Firefox didn’t have an official 64-bit build. So I compiled one myself, called it Waterfox, stuck it on SourceForge and went back to my A levels.

Within a week it had 50,000 downloads, completely unexpected. Frustratingly, being on an island in the Mediterranean meant there was no support network or anyone to turn to with regards to “what’s next”. Had I been stateside, with the infrastructure and institutional knowledge of “tech”, who knows - I might’ve had a guiding hand on how to manage something like this and work with the momentum. But alas, I would have to learn a lot of painful lessons myself.

Fast for­ward to to­day, 15 years later, and Waterfox is still here. So am I, al­beit a bit older and sig­nif­i­cantly more tired. At best es­ti­mates, Waterfox prob­a­bly has around 1M monthly ac­tive users.

If you go and look at that orig­i­nal OCN thread, it’s a very dif­fer­ent world. People are talk­ing about Silverlight sup­port, MSVCR100.dll er­rors, and Peacekeeper bench­mark scores. Someone asks for a 64-bit Chromium build and the thread ti­tle gets up­dated with every new Firefox ver­sion, all the way up to 56.0.2.

Originally, and un­der the user­name MrAlex, I was only try­ing to earn enough fo­rum rep­u­ta­tion so I could trade and buy sec­ond hand PC parts. I did­n’t have a plan and I cer­tainly did­n’t have a busi­ness model. I just thought it was cool that you could take some­one else’s source code, com­pile it with some changes, and end up with some­thing dif­fer­ent. Open source is a won­der­ful thing when you’re six­teen and don’t know any­thing about the soft­ware de­vel­op­ment life­cy­cle, yearn­ing for knowl­edge.

You can scour the internet, read this blog or view the media carousel at the bottom for the story from then until now, but the short version: Waterfox grew - a lot - to over 25 million lifetime downloads, and that figure is from calculations about seven years ago, so the real number is certainly higher. I went to university, studying Electronics Engineering at York before a master’s in Software Engineering at Oxford. I tried to start a charitable search engine, which failed as badly run startups tend to do. Ecosia reached out and something nice happened - Waterfox users helped plant over 350,000 trees in a single year.

Then System1 came along. I joined them, served as VP of Engineering, and helped scale the browser en­gi­neer­ing team through a NYSE IPO - a gen­uine ed­u­ca­tion, though com­pa­nies change and fo­cus shifts.

So I took Waterfox back un­der BrowserWorks, in­de­pen­dent once again. The three years since have been si­mul­ta­ne­ously the most dif­fi­cult and the most re­ward­ing of Waterfox’s ex­is­tence.

I’m not go­ing to pre­tend the eco­nom­ics of run­ning a pri­vacy fo­cused in­de­pen­dent browser are great, be­cause they’re re­ally not. When Bing ter­mi­nated all third party search con­tracts it hit hard - search part­ner­ships are ba­si­cally how in­de­pen­dent browsers sur­vive, and rev­enue has been poor since. There have been a few months in the red re­cently.

Other ways browsers make money just feel icky, and it’s not some­thing that Waterfox stands for ei­ther.

But, pain and all, I keep com­ing back. Every time I think about step­ping away, some­one sends a kind mes­sage through the do­na­tion page, or I see a thread some­where of some­one dis­cov­er­ing Waterfox for the first time and be­ing pleas­antly sur­prised. There’s a com­mu­nity here that cares, and I care about it.

I want users to know that what­ever fu­ture steps I’ll take, they’ll al­ways be with Waterfox and its sus­tain­abil­ity in mind.

This year will see Waterfox ship­ping a na­tive con­tent blocker built on Brave’s ad­block li­brary - and it’s worth ex­plain­ing what that means and why.

The blocker runs in the main browser process rather than as a web ex­ten­sion, which means it is­n’t sub­ject to the lim­i­ta­tions that ex­ten­sion based block­ers like uBlock Origin face. It’s faster, more tightly in­te­grated, and does­n’t de­pend on a sep­a­rate ex­ten­sion process or re­quire us to con­stantly pull in up­stream up­dates. Brave’s ad­block li­brary is also ma­ture - it has paid en­gi­neers work­ing on it, a wide fil­ter­set, and cru­cially it’s li­censed un­der MPL2, the same li­cence as Waterfox, which makes it a nat­ural fit. uBlock Origin, as good as it is, car­ries a GPLv3 li­cence that would’ve cre­ated real com­pat­i­bil­ity headaches.

For how it works in prac­tice: by de­fault, text ads will re­main vis­i­ble on our de­fault search part­ner’s page - cur­rently Startpage. The idea is that this is what will keep the lights on. This mir­rors the ap­proach Brave takes with their search part­ner.

Users who want to dis­able that en­tirely can do so with a sin­gle tog­gle in set­tings, and it has noth­ing to do with any of Brave’s crypto or re­wards ecosys­tem - we’re just us­ing the ad­block­ing li­brary. Everyone else gets a fast, na­tive ad­blocker out of the box, no ex­ten­sion re­quired.

If you al­ready use an ad­blocker, don’t worry, you can carry on us­ing it. This will be en­abled for new users or users who aren’t al­ready us­ing an ad­blocker.

Meanwhile, Waterfox’s membership of the Browser Choice Alliance, alongside Google and Opera, is pushing for fair competition and actual user choice in the browser market.

And we still don’t have AI in the browser. That has­n’t changed. The browser’s job is to load web pages, keep your data pri­vate, and get out of the way. It seems other browsers have for­got­ten that.

Oh and one last thing - dis­tri­b­u­tion is im­por­tant too, so there’s a big­ger fo­cus on dif­fer­ent pack­ages and ar­chi­tec­ture sup­port (Linux, you are such a pain to tar­get) - more specif­i­cally for ARM64.

I’d like to think so. The browser mar­ket is more di­verse than it’s ever been in terms of soft forks - every­one and their mum seems to be launch­ing a vari­a­tion of Firefox. Running an in­de­pen­dent browser is get­ting harder, not eas­ier. But there are more peo­ple who care about pri­vacy now than there were when I was com­pil­ing a blue Firefox on a tablet PC in my bed­room. More peo­ple who want soft­ware that re­spects them.

Waterfox started be­cause a six­teen year old wanted a faster browser. Fifteen years later, it’s still here be­cause enough peo­ple want a browser that works for them - not for AI com­pa­nies, and not for any­one else.

Thanks to every­one who’s been part of this - from the OCN com­mu­nity who gave those early builds a chance, to the peo­ple who send do­na­tions with mes­sages that make my day, to the con­trib­u­tors who sub­mit patches and file bugs. This pro­ject has al­ways been big­ger than me, even when I’m the only one work­ing on it.

Here’s to the next 15! 🍻

I also wouldn’t be where I am without the constant moral support of my parents, Angela & Lakis, who since day dot have been proud of everything I’ve done, even if it’s felt like I was failing and flailing. My friends, too numerous to count, but especially Lee, who I’m surprised hasn’t once told me to shut up about my trials and tribulations. And finally, my wonderful girlfriend and partner Lucy, who has been giving helpful design tips because while I have wonderful taste (only half joking) my creative talent is unfortunately lacking.

Read the me­dia cov­er­age Waterfox has re­ceived over the last 15 years.

...

Read the original on www.waterfox.com »

8 283 shares, 89 trendiness

13 Government Apps That Spy Harder Than the Apps They Ban

The fed­eral gov­ern­ment re­leased an app yes­ter­day, March 27th, and it’s spy­ware.

The White House app markets itself as a way to get “unparalleled access” to the Trump administration, with press releases, livestreams, and policy updates. The kind of content that every RSS feed on the planet delivers with one permission: network access. But the White House app, version 47.0.1 (because subtlety died a long time ago), requests precise GPS location, biometric fingerprint access, storage modification, the ability to run at startup, draw over other apps, view your Wi-Fi connections, and read badge notifications. It also ships with 3 embedded trackers including Huawei Mobile Services Core (yes, the Chinese company the US government sanctioned, shipping tracking infrastructure inside the sitting president’s official app), and it has an ICE tip line button that redirects straight to ICE’s reporting page.

This thing also has a “Text the President” button that auto-fills your message with “Greatest President Ever!” and then collects your name and phone number. There’s no specific privacy policy for the app, just a generic whitehouse.gov policy that doesn’t address any of the app’s tracking capabilities.

The White House app might ac­tu­ally be one of the milder ones. I’ve been go­ing through every fed­eral agency app I can find on Google Play, pulling their per­mis­sions from Exodus Privacy (which au­dits Android APKs for track­ers and per­mis­sions), and what I found de­serves its own term. I’m call­ing it Fedware.

Ok so let me walk you through what the fed­eral gov­ern­ment is run­ning on your phone.

The FBI’s app, myFBI Dashboard, requests 12 permissions including storage modification, Wi-Fi scanning, account discovery (it can see what accounts are on your device), phone state reading, and auto-start at boot. It also contains 4 trackers, one of which is Google AdMob, which means the FBI’s official app ships with an ad-serving SDK while also reading your phone identity. From what I found, the FBI’s news app has more trackers embedded than most weather apps.

The FEMA app re­quests 28 per­mis­sions in­clud­ing pre­cise and ap­prox­i­mate lo­ca­tion, and has gone from 4 track­ers in older ver­sions down to 1 in v3.0.14. Twenty-eight per­mis­sions for an app whose pri­mary func­tion is show­ing you weather alerts and shel­ter lo­ca­tions. To put that in con­text, the AP News app de­liv­ers the same kind of dis­as­ter cov­er­age with a frac­tion of the per­mis­sions.

IRS2Go has 3 track­ers and 10 per­mis­sions in its lat­est ver­sion, and ac­cord­ing to a TIGTA au­dit, the IRS re­leased this app to the pub­lic be­fore the re­quired Privacy Impact Assessment was even signed, which vi­o­lated OMB Circular A-130. The app shares de­vice IDs, app ac­tiv­ity, and crash logs with third par­ties, and TIGTA found that the IRS never con­firmed that fil­ing sta­tus and re­fund amounts were masked and en­crypted in the app in­ter­face.

MyTSA comes in lighter with 9 permissions and 1 tracker, but still requests precise and approximate location. The TSA’s own Privacy Impact Assessment says the app stores location locally and claims it never transmits GPS data to TSA. I’ll give them credit for documenting that, because most of these apps have privacy policies that read like ransom notes.

CBP Mobile Passport Control is where things get genuinely alarming. This one requests 14 permissions including 7 classified as “dangerous”: background location tracking (it follows you even when the app is closed), camera access, biometric authentication, and full external storage read/write. And the whole CBP ecosystem, from CBP One to CBP Home to Mobile Passport Control, feeds data into a network that retains your faceprints for up to 75 years and shares it across DHS, ICE, and the FBI.

The government also built a facial recognition app called Mobile Fortify that ICE agents carry in the field. It draws from hundreds of millions of images across DHS, FBI, and State Department databases. ICE Homeland Security Investigations signed a $9.2 million contract with Clearview AI in September 2025, giving agents access to over 50 billion facial images scraped from the internet. DHS’s own internal documents admit Mobile Fortify can be used to “amass biographical information of individuals regardless of citizenship or immigration status”, and CBP confirmed it will “retain all photographs”, including those of U.S. citizens, for 15 years.

Photos submitted through CBP Home, biometric scans from Mobile Passport Control, and faces captured by Mobile Fortify all feed this system. And the EFF found that ICE does not allow people to opt out of being scanned, and agents can use a facial recognition match to determine your immigration status even when other evidence contradicts it. A U.S.-born citizen was told he could be deported based on a biometric match alone.

SmartLINK is the ICE electronic monitoring app, built by BI Incorporated, a subsidiary of the GEO Group (a private prison company that profits directly from how many people ICE monitors), under a $2.2 billion contract. The app collects geolocation, facial images, voice prints, medical information including pregnancy data, and phone numbers of your contacts. ICE’s contract gives them “unlimited rights to use, dispose of, or disclose” all data collected. The app’s former terms of service allowed sharing “virtually any information collected through the application, even beyond the scope of the monitoring plan.” SmartLINK went from 6,000 users in 2019 to over 230,000 by 2022, and in 2019, ICE used GPS data from these monitors to coordinate one of the largest immigration raids in history, arresting around 700 people across six cities in Mississippi.

And if you think your location data is safe because you use regular apps and avoid government ones, the federal government is buying that data too. Companies like Venntel collect 15 billion location points from over 250 million devices every day through SDKs embedded in over 80,000 apps (weather, navigation, coupons, games). DHS, FBI, DOD, and the DEA purchase this data without warrants, creating a constitutional loophole around the Supreme Court's 2018 Carpenter v. United States ruling that requires a warrant for cellphone location history. The Defense Department even purchased location data from prayer apps to monitor Muslim communities. Police departments used similar data to track racial justice protesters.

And then there's the IRS-ICE data sharing deal from April 2025. The IRS and ICE signed a Memorandum of Understanding allowing ICE to receive names, addresses, and tax data for people with removal orders. ICE submitted 1.28 million names. The IRS erroneously shared the data of thousands of people who should never have been included. The acting IRS Commissioner, Melanie Krause, resigned in protest. The chief privacy officer quit. One person leaving changes nothing about the institution, and the data was already out the door. A federal judge blocked further sharing in November 2025, ruling it likely violates IRS confidentiality protections, but by then the IRS was already building an automated system to give ICE bulk access to home addresses with minimal human oversight. The court order is a speed bump, and they'll find another route.

The apps, the databases, and the data broker contracts all feed the same pipeline, and no single agency controls it because they all share it.

The GAO reported in 2023 that nearly 60% of 236 privacy and security recommendations issued since 2010 had still not been implemented. Congress has been told twice, in 2013 and 2019, to pass comprehensive internet privacy legislation. It has done neither. And it won't, because the surveillance apparatus serves the people who run it, and the people who run it write the laws. Oversight is theater. The GAO issues a report, Congress holds a hearing, everyone performs concern for the cameras, and then the contracts get renewed and the data keeps flowing. It's working exactly as designed.

The federal government publishes content available through standard web protocols and RSS feeds, then wraps that content in applications that demand access to your location, biometrics, storage, contacts, and device identity. They embed advertising trackers in FBI apps. They sell the line that you need their app to receive their propaganda while the app quietly collects data that flows into the same surveillance pipeline feeding ICE raids and warrantless location tracking. Every single one of these apps could be replaced by a web page, and they know that. The app exists because a web page can't read your fingerprint, track your GPS in the background, or inventory the other accounts on your device.

You don't need their app. You don't need their permission to access public information. You already have a browser, an RSS reader, and the ability to decide for yourself what runs on your own hardware. Use them.
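To make the point concrete: reading a government RSS feed takes nothing but a standard library, no app, no permissions. A minimal sketch (the feed XML below is a made-up example; a real reader would fetch an agency's actual feed URL with urllib.request):

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Return (title, link) pairs from a plain RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

# Hypothetical feed; substitute the real /rss URL of any agency site.
SAMPLE = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Agency Press Releases</title>
  <item><title>Release one</title><link>https://example.gov/1</link></item>
  <item><title>Release two</title><link>https://example.gov/2</link></item>
</channel></rss>"""

for title, link in parse_rss(SAMPLE):
    print(f"{title} -> {link}")
```

The same headlines the app delivers, with zero access to your location, contacts, or biometrics.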

...

Read the original on www.sambent.com »

9 274 shares, 48 trendiness

New Washington law bans noncompete agreements

The measure, spearheaded by state Rep. Liz Berry (D-Seattle), outlaws noncompete agreements: in general, contracts that let employers forbid workers from creating or joining a competing business for a set amount of time.

Industries that utilize noncompete agreements, otherwise known as restrictive covenants, include technology, health care, finance and sales. The law, signed Monday, takes effect on June 30, 2027.

“Washington state is standing up for workers,” Berry said in a news release published Wednesday. “If you want to take a new job with better pay or leave to start your own company, your old job shouldn't be able to block you from pursuing your dream.”

On the effective date, restrictive covenants will be unenforceable for all Washington-based workers and businesses, according to the new law. New noncompete agreements are illegal. Employers must notify current and former workers in writing about any voided noncompete agreements by Oct. 1, 2027.

The measure furthers a state law from 2019 that limited noncompete agreements to employees who earned more than about $126,859 and contractors who made more than around $317,147, according to the 2026 earnings thresholds posted by the Washington State Department of Labor and Industries.

The state's latest approach echoes a decision made in 2024 under former President Joe Biden's administration to prohibit noncompete agreements across the U.S. However, the Federal Trade Commission rolled back the ban this year.

“After the Non-Compete Rule was issued, several employers and trade groups filed lawsuits challenging it,” the agency wrote in a rule published in February. “Federal district courts in three jurisdictions issued opinions in lawsuits challenging the Non-Compete Rule.”

In Washington, the new law also clarifies nonsolicitation agreements, which bar former workers from courting clients and co-workers at their past workplaces.

“Nonsolicitation agreements are not the same as noncompete agreements, and they are not prohibited. However, the definition of (the) nonsolicitation agreement must be narrowly construed,” per the law.

Locally, attorneys are providing guidance to workplaces about the new measure.

“Washington now joins a small but growing number of states that have declared non-competition covenants void and unenforceable,” Alex Cates, senior counsel at law firm Holland and Knight, wrote in an advisory Tuesday. “This is a major change.”

States with full noncompete bans include California, North Dakota, Minnesota and Oklahoma, per the Economic Innovation Group, a bipartisan public policy organization.

...

Read the original on www.seattletimes.com »

10 273 shares, 17 trendiness

15 years, one server, 8GB RAM and 500k users

webminal.org runs on a single CentOS Linux box with 8GB RAM. That's it. No Kubernetes, no microservices, no auto-scaling. One server since 2011. It has survived:

* That one time in 2017 when a Spanish tech blog sent 10,000 users in one day

* My friend Freston's insistence that Slackware is the only real distro

The idea was simple. I was sitting at my Windows machine at work, wanting to learn Linux. What if I could open a browser, practice on a real Linux terminal - no “Run” button, no “Execute” button, just a real server - gain the confidence, and then spin my chair to a real Linux machine and actually use it? No fear, no hesitation, because I already know what I'm doing.

We just gave the entire site a redesign. Every page, from scratch. Here's what changed:

Root Lab - practice real sysadmin skills with full root access. We use User Mode Linux to give you a complete kernel with real block devices. Practice fdisk, LVM, RAID, mkfs, systemctl, crontab, firewalld, SSH keys, awk & sed - things you can't do on a shared terminal.

Live command ticker - that scrolling bar on the homepage? It's real. Powered by eBPF (execsnoop) tracing commands in real-time. 28 million and counting.

Linode → DigitalOcean → AWS → GCP → OVH → IBM Cloud → Linode

Full circle. Along the way we built: a browser IDE with VS Code/Theia, Docker-over-LXC root environments, Asciinema screencasting, a shared file pool, ttyrec-to-GIF publishing, a custom useradd binary (the default was too slow with 300k+ users), and an OpenVZ-based VM provisioning system. Some still running, some killed by time or money.

I'm from India. Freston is from the Netherlands. We met on LinuxForums.org in 2010. Until 2015, we had never seen each other's face — not even on Skype. All communication happened over SSH into our server in a screen session.

$ screen -x chat

$ cat > /dev/null

hey, should we add MySQL support?

That's how an entire platform was built. No Slack, no Zoom, no Jira tickets. Just two guys writing messages in a terminal.

Python: 2.7 (yes, really)

Framework: Flask 0.12.5

Terminal: Shellinabox (abandoned in 2017, still works perfectly)

Root labs: User Mode Linux (a technology from 2001)

Monitoring: eBPF/execsnoop (the only modern thing)

Database: MySQL on a server that survived a fire

Frontend: No React, no Vue, no npm. Just HTML and inline CSS.

Every tech conference talk would tell you this stack is wrong. But it serves 500k users and has been up for 15 years.

We tried replacing Shellinabox with a modern WebSocket-based terminal. It lasted a few hours in production before users reported blank screens and Firefox incompatibility.

Shellinabox is from 2005. It's ugly, it's slow, and it works through every firewall, proxy, and corporate network on earth. We switched back. Sometimes the old thing is the right thing.

Everyone uses Docker. We use User Mode Linux — a full Linux kernel running in userspace, created by Jeff Dike in 2001.

Why? Because when a student types fdisk /dev/sdb, they need a real block device. Docker can't give you that. UML can.

* Copy-on-write overlay - one golden image shared by everyone

When the student types poweroff, the UML exits, and they're back in their normal shell. Total isolation. Zero risk to the host.

The COW overlay means 100 concurrent users add only ~2GB of disk. The golden image is shared.
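The disk math behind that claim is easy to sanity-check. A small sketch (the 2 GB golden-image size is an assumption for illustration; only the ~20 MB-per-user write delta follows from the article's figures):

```python
def disk_usage_mb(users, golden_mb, per_user_delta_mb, cow=True):
    """Total disk (MB) for N root-lab users: with COW, one shared
    golden image plus small per-user deltas; without, a full copy each."""
    if cow:
        return golden_mb + users * per_user_delta_mb
    return users * golden_mb

# Article: 100 concurrent users add only ~2 GB, i.e. ~20 MB of writes
# each. Golden image assumed to be 2 GB (2048 MB) for this example.
print(disk_usage_mb(100, 2048, 20))             # ~4 GB total with COW
print(disk_usage_mb(100, 2048, 20, cow=False))  # ~200 GB with full copies
```

The shared backing file is why a single 8GB box can afford per-student root environments at all.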

That 28,469,041 commands executed counter on the homepage? It's real. We use execsnoop from bcc-tools.

The live ticker you see scrolling on the homepage — those are real commands being typed by real users right now. Anonymized, safe commands only. No arguments, no paths, no passwords. Just $ ls, $ gcc, $ vim flowing by like a heartbeat.

The Linux kernel itself tells us when someone runs their first ls.
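The anonymization step described above can be sketched in a few lines: take a raw execsnoop-style line, keep only the command name, and publish it only if it is on an allowlist. (The column layout and allowlist here are illustrative assumptions, not webminal's actual code.)

```python
import os

# Commands considered safe to display publicly; illustrative only.
ALLOWLIST = {"ls", "gcc", "vim", "cat", "grep", "mkdir"}

def anonymize(execsnoop_line):
    """Reduce an execsnoop-style line ('PCOMM PID PPID RET ARGS...')
    to just '$ <command>', or None if the command isn't allowlisted."""
    fields = execsnoop_line.split()
    if len(fields) < 4:
        return None
    cmd = os.path.basename(fields[0])  # strip any leading path
    return f"$ {cmd}" if cmd in ALLOWLIST else None

print(anonymize("vim 1234 1000 0 /usr/bin/vim notes.txt"))  # $ vim
print(anonymize("passwd 4321 1000 0 /usr/bin/passwd"))      # None: never shown
```

Dropping arguments and paths at the source, before anything reaches the ticker, is what keeps filenames and secrets out of the public stream.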

“I am a Windows system admin without a lot of free time and this site has really helped me get familiar with Linux. I even use the site on my tablet. The tutorials you offer are really great too. Thanks for all you do.”

“I am a student studying Electronic Engineering in Korea. I am studying Linux by your site and it really helped me a lot!”

“The tutorial is great! I also laughed at some points. Your site is absolutely amazing. Please make more! Keep the great work up!”

Webminal has zero revenue. No ads, no tracking, no VC funding. I pay for the server from my savings. I've spent more money on this project than on personal or family stuff.

More than once, I thought about killing it. 15 years is a long time. There were months when I was between jobs, watching my savings shrink, and the server bill kept coming. Every month I'd think - maybe this is the month I pull the plug. Then I'd get a job, the thought would go away, and Webminal would live another year. I applied to YC. Rejected. Tried to monetize - PayPal, Stripe, paid plans. Never worked. The users who need Webminal most are students who can't afford $4/month. So it stays free.

500,000 people have typed their first ls on Webminal. Some of them are sysadmins now. Some run their own servers. One of them probably manages more infrastructure than I ever will.

As long as it helps a single student, Webminal will run.

If you want to help upgrade the server from 8GB to 128GB so more students can run root labs at the same time, every bit counts: Sponsor @Lakshmipathi on GitHub Sponsors · GitHub

...

Read the original on community.webminal.org »
