10 interesting stories served every morning and every evening.
After a team member summoned Copilot to correct a typo in a PR of mine, Copilot edited my PR description to include an ad for itself and Raycast.
This is horrific. I knew this kind of bullshit would happen eventually, but I didn’t expect it so soon.
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
...
Read the original on notes.zachmanson.com »
I don’t like to cover “current events” very much, but the American government just revealed a truly bewildering policy effectively banning import of new consumer router models. This is ridiculous for many reasons, but if this does indeed come to pass it may be beneficial to learn how to “homebrew” a router.
Fortunately, you can make a router out of basically anything resembling a computer.
I’ve used a Linux-powered mini PC as my own router for many years, and have posted a few times before about how to make Linux routers and firewalls in that time. It’s been rock solid, and the only issue I’ve had over the years was wearing out a $20 mSATA drive. While I typically use Debian, Alpine Linux probably works just as well, perhaps better if you’re familiar with it. As long as the device runs Linux well and has a couple of USB ports, you’re good to go. Mini PCs, desktop PCs, SBCs, rackmount servers, old laptops, or purpose-built devices will all work.
To be clear, this is not meant to be a practical “solution” to the US policy; it’s to show people a neat “hack” you can do to squeeze more capability out of hardware you might already own, and to demonstrate that there’s nothing special about routers: they’re all just computers, after all.
My personal preference is a purpose-made mini PC with a passively cooled design.
However, basically anything will work. It should have two Ethernet interfaces, but a standard USB-Ethernet dongle will also do the trick. It won’t be as reliable as an onboard interface, but will probably be good enough. For example, this janky pile of spare parts can easily push 820-850 Mbps on the wired LAN and ~300 Mbps on the wireless network:
This particular device is a Celeron 3205U dual core running at a blistering 1.5 GHz. Even that measly chip is more than capable of routing an entire house or small business worth of traffic.
Going back even further, this was my setup for the first couple weeks of the fall 2016 semester:
It might be hard to tell what’s going on here by looking, so let me break it down:
* An ExpressCard-PCIe bridge in the ThinkPad’s expansion bay
* A trash-picked no-name Ethernet card in the PCIe slot, missing its mounting bracket
* An ancient Cisco 2960 100 mbit switch, purchased for $10 from my college
* A D-Link router acting as an access point (“as-is” thrift store find with a bad WAN port)
Yes, this is indeed a router! It probably looks like a pile of junk, because it is, but it’s junk that’s perfectly able to perform the job I gave it!
When set up, the system will be configured like this:
Both LAN interfaces will be bridged together, meaning that devices on the wired and wireless networks will be able to communicate normally. If one LAN port isn’t enough, you can plug in as many USB Ethernet dongles as you need and bridge ’em all together. It won’t be quite as fast as a “real” switch, but if you’re looking for performance you might’ve come to the wrong place today.
As mentioned before, this will run Debian as the operating system, and uses very few pieces that don’t come with the base install:
* Any firmware blobs not in the default install
Also, I should mention that I’ll only be setting up IPv4 here. IPv6 works great for stuff like mobile devices, but I still find it too frustrating inside a LAN. Perhaps my brain is too calcified already, but I’ll happily hold out on IPv4 for now.
* If you can, set the device to the lowest clock speed, but disable any power management for USB or PCI devices.
* Find the option like “Restore after AC Power Loss” and turn it ON.
* Some devices won’t properly power up if there’s no display connected. If your device is like this, stick a “dummy dongle” into the HDMI port.
* Lots of hardware will only work correctly with the non-free-firmware repository enabled
Depending on your wireless hardware, you may need to install an additional firmware package. For Intel wireless cards:

sudo apt install firmware-iwlwifi

For USB Atheros adapters:

sudo apt install firmware-ath9k-htc

Or if you have something truly ancient like I do:

sudo apt install firmware-atheros
After the initial install is done, there are some additional utilities to install:
sudo apt install bridge-utils hostapd dnsmasq
In terms of software, that’s about all that’s needed. There should be about 250 packages on the system in total.
Modern Linux systems name network interfaces based on their physical location and driver type, like enp0s31f6. I find the old ethX format much simpler, so each interface gets a persistent name.
For each network interface, create a file at /etc/systemd/network/10-persistent-ethX.link
[Match]
MACAddress=AA:BB:CC:DD:00:11
[Link]
Name=ethX
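To find the MAC address to put in each [Match] section, the iproute2 tools that ship with Debian can list every interface in brief form:

```shell
# Show each interface's name, state, and MAC address.
ip -br link show
```

Create one .link file per interface; the new names take effect on the next boot.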
This uses a USB Wi-Fi dongle to act as an access point, creating a network for other devices to join. This will not work as well as a purpose built device, but it’s better than nothing. I’ve had reasonably good results with this, but I also live in a very small building where I’m rarely more than 10m away from the router. If you rely heavily on your wireless network working properly, try to find a dedicated access point device. An old router, even from over a decade ago, will probably work fine for this by just connecting to its LAN port (not the WAN port!).
To set up the Wi-Fi network, create a config file at /etc/hostapd/hostapd.conf
interface=wlan0
bridge=br0
hw_mode=g
channel=11
ieee80211d=1
country_code=US
ieee80211n=1
wmm_enabled=1
ssid=My Cool and Creative Wi-Fi Name
auth_algs=1
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=mysecurepassword
By default the hostapd service is masked, so unmask it before enabling and starting it:

sudo systemctl unmask hostapd
sudo systemctl enable --now hostapd
Next, configure the interfaces in /etc/network/interfaces. The “outside” interface (eth0) will be the WAN, and the “inside” bridge will be the LAN. Note that the LAN interface does not get a default gateway.
allow-hotplug eth0
allow-hotplug eth1
auto wlan0
auto br0
iface eth0 inet dhcp
iface br0 inet static
bridge_ports eth1 wlan0
address 192.168.1.1/24
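If you do bridge in extra USB Ethernet dongles as described earlier, they just get appended to bridge_ports (eth2 here is a hypothetical name for the extra dongle):

```
iface br0 inet static
    bridge_ports eth1 wlan0 eth2
    address 192.168.1.1/24
```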
After this step, give the device a quick reboot. It should come back up nicely. If it doesn’t, confirm that the previous steps were done correctly, and check for errors by running journalctl -e -u networking.service
If it all worked correctly, the output of this command should look like this:
$ sudo brctl show br0
bridge name bridge id STP enabled interfaces
br0 8000.xxxxx no eth1
wlan0
Create /etc/sysctl.d/10-forward.conf and add this line to enable IP forwarding:
net.ipv4.ip_forward=1
sudo systemctl restart systemd-sysctl.service
The firewall rules and NAT configuration are both handled by the kernel’s netfilter system, which we manage using nftables.
Debian’s nftables.service loads its ruleset from /etc/nftables.conf, so the configuration goes there:

#!/usr/sbin/nft -f

flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state { established, related } counter accept
        ip protocol icmp counter accept
        iifname "br0" tcp dport { 22, 53 } counter accept
        iifname "br0" udp dport { 53, 67, 68 } counter accept
        counter
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname "eth0" oifname "br0" ct state { established, related } counter accept
        iifname "br0" oifname "eth0" ct state { new, established, related } counter accept
        counter
    }

    chain output {
        type filter hook output priority 0; policy accept;
        counter
    }
}

table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        oifname "eth0" counter masquerade
    }
}
This performs NAT, denies all inbound traffic from outside the network, and allows the router device to act as a DNS, DHCP, and SSH server (for management). Pretty much a bog standard firewall config.
Enable this for the next boot:
sudo systemctl enable nftables.service
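The setup installs dnsmasq for DNS and DHCP, but this excerpt doesn’t show its configuration. A minimal sketch, assuming the br0 address of 192.168.1.1/24 from above (the DHCP range and lease time are arbitrary choices), might go in /etc/dnsmasq.conf:

```
# Only serve DHCP/DNS on the LAN bridge, never on the WAN interface.
interface=br0
bind-interfaces
# Hand out addresses from the same subnet as br0.
dhcp-range=192.168.1.50,192.168.1.200,12h
# Advertise the router itself as the gateway and DNS server.
dhcp-option=option:router,192.168.1.1
dhcp-option=option:dns-server,192.168.1.1
```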
...
Read the original on nbailey.ca »
Philly courtrooms are remaining friendly to the Luddites. At least with eyewear.
The Philadelphia court system is implementing a ban on all forms of smart or AI-integrated eyewear, the First Judicial District of Pennsylvania announced this week.
The ban will go into effect Monday.
From then on, any eyewear with video and audio recording capability will be forbidden in all of the First Judicial District buildings, courthouses, or offices, even for people who have a prescription. Other devices with recording capabilities like cell phones and laptops continue to be allowed inside courtrooms but must be powered off and stowed away.
“Since these glasses are difficult to detect in courtrooms, it was determined they should be banned from the building,” said court spokesperson Martin O’Rourke.
The ban is meant to prevent potential witness and juror intimidation from threats of recording, O’Rourke said. It is unclear whether Philadelphia courts will implement extra screening measures to determine if a person’s glasses violate the rule.
If someone were caught attempting to bring smart eyewear into those spaces, they could be barred entry or removed from the building, and arrested and charged with criminal contempt, O’Rourke said. The only potential exceptions would be if a judge or court leadership had granted prior written permission to a smart glasses user.
Philadelphia is part of an early wave of court systems that are implementing smart eyewear bans, joining systems like those in Hawaii, Wisconsin, and North Carolina. While most courts already ban any kind of recording devices inside the courtrooms, it’s not yet common to have explicit bans on smart eyewear or to completely bar them from the building.
Without direct bans in place, judges typically have latitude to make rulings on what devices are allowed inside their courtroom. During the recent trial in Los Angeles that found Google and Meta liable for social media causing harm, Meta CEO Mark Zuckerberg and his colleagues wore their company’s smart eyewear into the courtroom. The judge in that case ordered them to remove the glasses, and threatened to hold anyone who had used them to record court proceedings in contempt of court.
Google Glass was a frequent butt of the joke after it was introduced over a decade ago, but reasonably affordable and available smart glasses have finally begun catching on within the last year.
Eyewear giants Ray-Ban and Oakley both now sell glasses integrated with Meta AI and audio and visual recording for less than $500. The new glasses were the focus of both companies’ recent Super Bowl ad campaigns, and the companies reportedly hawked 7 million pairs in 2025. They have a head start on Apple, which is planning to join the market with its own smart glasses in 2027.
...
Read the original on www.inquirer.com »
The catalysts for a crash are already laid out, and it can happen sooner than most expect. AI is here to stay. If used right, chances are it will make us all more productive. That, on the other hand, does not mean it will be a good investment.
Magnificent 7 companies are increasing capex to record levels to differentiate their tech from each other and from the big AI labs, but the key realization is that they don’t have to spend it to win. It’s a defensive move: if they commit $50B, OpenAI and Anthropic need to go raise $100B each to stay competitive, which makes them reliant on investors’ money. As the numbers get bigger, the pool of funds that can write checks of the required size gets smaller. And many of them are now getting bombed in the Gulf.
This is the reason there’s a push for IPOs: it’s the only option left to keep the funding coming.
Taking this into account, Google is extremely well positioned to weather the storm. When they announce capex, they don’t spend it overnight. They can simply deploy month by month until their competitors struggle to raise and are forced to capitulate. At that point they can just ramp down the spending and declare victory in a cornered market. They don’t need the capex, they just need to make it very clear to everyone that nobody can outspend them. It is hard to picture as the numbers get so big, but Alphabet (Google’s parent) is ten times more valuable than the biggest military company.
This also has a great implication for the Mag 7, especially Google: their capex will be a lot smaller in practice than projected, and as investors hate to see high capex in tech, the market will probably reward that if it materializes.
Apple didn’t even have to pretend, their strategy of waiting on the sidelines, while selling Mac Minis, for someone to come up with a good-enough model and just buy that when it’s done seems to be working. They may not even do that, they are now hinting at charging models for being available on Siri. Amazon is hedged with an Anthropic investment, and Meta is spending like there’s no tomorrow.
We’re hitting the worst-case scenarios for the big AI labs: energy, their biggest expense, is at multi-year highs, capital from the Gulf is not available for obvious reasons, there are serious concerns about a rate hike, and RAM prices are crashing because new models won’t need as much, but labs already bought them at sky-high prices. And that last innovation came from their biggest competitor, Google.
Anthropic is already in a push to reduce costs and increase revenue. If investor money dries up, they will be forced to cut their losses and pass the true costs to their users. The question is now if customers will be willing to pay up. Independent reports state that Claude metered models are priced 5x more expensive than their subscribers pay, and nobody is sure if even their metered pricing is profitable. In investing, stories are way more exciting than reality: a company losing money but growing like crazy is an easier sell than a huge company losing money or with tight margins. Raising prices will for sure decrease demand and that risks killing the growth story. And even if revenue keeps growing, it doesn’t matter if there are no margins — growing revenue without profits just means burning cash faster, especially when competing against companies that can offer the same product as a loss leader bundled into their cloud platforms.
It’s also worth mentioning that Claude’s most expensive subscription plans (Max and Max 5x, priced at $100 and $200 respectively) do not allow for yearly payments, hinting prices will go up.
OpenAI is struggling to monetize. They turned to showing ads in ChatGPT, something Sam Altman once called a “last resort”, while Anthropic is crushing them with the more profitable corporate customers and software engineers. Their shopping feature flopped and they shut down Sora, both supposed to be revenue drivers.
I wouldn’t be surprised at all if in the next couple of quarters we see OpenAI looking for an exit. It will be interesting because the sizes are now so big that we will probably know all the details. The most likely buyer is Microsoft, they already own a lot of it, and because of that, they are the most interested in showing a win. Sam Altman managed to get Microsoft so involved in OpenAI that making sure it lands on its feet is a Microsoft problem to solve. But, would shareholders vote to spend 22% of an established company’s market cap to rescue a money-burning AI lab that has lost most of its differentiators?
And independent of whether Microsoft makes money or not in their OpenAI endeavor, it kills the story: they were betting the whole growth story on AI, and if that doesn’t work out, then what’s left to justify a high stock price? They lose a big customer for their cloud services. Even worse considering that now, using the AI they helped fund, everyone can compete with their sub-par products. GitHub is a good candidate for disruption, and that’d be just the start.
You may think that you’re not affected by the big labs struggling. Hell, you may even be happy that they won’t be replacing your job after all. But that is far from reality.
Investments are now so big that writing them off would certainly hurt public companies’ balance sheets, and their growth prospects. This will drag the whole market, reducing valuations and slowing M&A, which further dries up VC money and slows down investments. Just like it happened in 2022.
And this has even more ramifications: pension funds around the world will take a hit. Datacenters that were built with the expectation of growth will now sit under capacity, because training is the most compute-intensive part of a model, and if there’s no capital to train a new one, they won’t be needed. GPUs then sit idle while their value goes down as there’s no demand. Some committed GPUs may never get delivered, or even manufactured. Investment drying up is a disaster for Nvidia, now the biggest company in the world.
It could happen that datacenters are not underused, but they get to charge their customers a way lower rate than they projected before building, so everyone benefits from AI but them.
Building a datacenter is supposed to be a “safe” investment in normal times, so banks give private credit and mortgages to finance them. A write-off of those assets means that banks start realizing losses, hurting their capacity to loan, and some may even be forced to liquidate, just like we saw in 2023. And all this assumes we don’t get disruptions in manufacturing in Taiwan or global supply chains.
Of course, the content of this article is highly speculative; it may end up being that demand for models is just so high it offsets every other problem I lay out. But almost all innovations go through a boom and bust cycle and I don’t see a reason this is an exception.
Thanks to Javier Silveira and Augusto Gesualdi for reviewing drafts of this post.
...
Read the original on martinvol.pe »
My whole art department is run on tracing paper. Why re-invent the wheel?
The demo scene has a peculiar view on copyright. It roughly boils down to a system of effort - effort in ideas, effort in craft - where the scene polices itself and punishes sceners that steal outright from other sceners. Theft from the outside world, however, is often taken lightly - especially when it comes to graphics.
Early pixel art on the scene was almost always copied (or, more correctly, plagiarized) from other sources. In particular, fantasy- and science fiction related art was immensely common. Fantasy artists Boris Vallejo and Frank Frazetta, as well as raunchy robot airbrusher Hajime Sorayama, were popular favourites.
Three different Amiga pixel art interpretations of Frank Frazetta’s Death Dealer. All images on this page are clickable and link to non-lossy versions when available.
This pixel art wasn’t about originality as much as it was about craft. Scanners and digitizers were far too expensive for a teenager, and the images produced by early consumer models were crude and lackluster. Making an image truly pop with detail and sharpness required hand-pixelling, which is a very involved process. First, there was the copying of a source outline by hand, using a mouse (or joystick, on the C64), and then came aspects such as conveying details in a limited resolution (typically around 320x256 pixels), picking a limited indexed palette (usually 16 or 32 colours), and manually adding dithering and anti-aliasing. It was painstaking work.
The TV painting tutorials by prolific landscape artist Bob Ross haven’t become an online phenomenon because his hundreds of mountainscapes are era-defining sensations (though certainly nice to look at), but because people enjoy watching his creative process and technique, mastered to perfect effortlessness. This notion is echoed in any carefully hand-pixelled work, where the craft itself can be discerned and enjoyed on its own, even if the subject matter is yet another Frazetta copy. Teenage boys will be teenage boys, and their choice of source material all too predictable. The real value of early scene pixels came from the invested labour, not whether they constituted a unique composition or otherwise fresh idea.
Owning Up, or Not
Some scene artists were very upfront about copying. Bisley’s Horsys is clearly a Simon Bisley copy, and calling a picture Vallejo (NSFW!) is self-explanatory. In the slide show Seven Seas, artist Fairfax clearly lists sources and inspirations in the included scroll text. Others were more quiet about it, but the prevailing sentiment among scene artists at that point in time was that copying was not only allowed, but almost expected.
Pixel artist Lazur’s 256 colour rendition (left) of a photo by Krzysztof Kaczorowski (right). A masterful copy showcasing the sharpness, details and vibrancy achievable with pixel techniques. Of special note is the use of dithering on the matchbox striker and the frontmost man’s sweater, creating an almost tactile sense of texture.
Just like in traditional painting, some pixel artists had a natural knack for copying by freehand, whereas others resorted to more fanciful methods. Some used grids, overlaying the original image and then reproducing the same grid on screen to retain proportions. Others traced outlines onto overhead projector sheets, which - thanks to the nature of CRT monitors - were easy to stick to the computer screen and trace under. Today, the use of drawing tablets is much more likely. In the end, however, they all had to fill, shade, dither and anti-alias by hand.
Scene artists soon perfected the pixel art translation, and could accomplish astonishing results with very limited resources. Some started adding their own flair to their copies: a few details here and there, perhaps combining several sources into a new composition. This grind of copying and refining is often a great way to learn, and people in their late teens may be forgiven for wanting to emulate their idols without including the proper credits.
Some time around 1995, scanners had become both cheaper and better, and the Internet opened up a world of new image sources. Combined with cheap, powerful PCs and widespread piracy of Adobe Photoshop, this allowed for new ways of creating digital art. Clever rascals started doing pure scans and passing them off as their own work, but these were still often inferior in quality to the handmade pixel art copies. With time, however, paintovers and tweaked scans could often be passed off as craft to an unsuspecting audience. Around this time, the No Copy? web page was launched, causing disillusionment among many graphics fans who weren’t familiar with how common copying in fact was.
At its core the scene is a meritocracy, even if the source of merit may sometimes seem strange to outsiders. Scanning and retouching was (and remains) considered low status and cheating, and many artists and other sceners complained (and still complain) loudly when finding someone out. Before 1995, complaints about scanning weren’t usually about copied source material, but about the lack of craft: the process still mattered more than originality and imagination.
Around the turn of the millennium, this attitude started to shift. Many sceners were now well into their twenties or thirties, and with maturity came a thirst for original work - both among artists and audience. Some artists, however, had a hard time breaking free from the comfort of copying or, worse, simply converting. The practice continued, but a greater stigma was now attached to it. Hence, Vallejo was discarded in favor of material that could more safely be passed off as one’s own. Today’s various art sharing websites have made this easier than ever, but that also means plagiarizing other hobbyist artists, which has a different sort of tinge to it than teenagers ripping off big name fantasy painters.
Steve Jobs once said that good artists copy and great artists steal, and attributed the quote to Picasso. As with many good quotes, it’s often referred to out of context, and without much thought. The actual source seems to be T. S. Eliot, who wrote that “Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different. The good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn; the bad poet throws it into something which has no cohesion.”
It’s easy to misconstrue Jobs’ version of the quote as a carte blanche for simply reproducing someone else’s work, but what Eliot describes is how artists understand art, and how they incorporate inspiration from other works into their own: He’s not suggesting that great poets copy Shakespeare verbatim and pass it off as theirs. In fairness, neither did Jobs: At their height, Apple decidedly improved what they stole - especially the GUI.
The distinction between copying and original work spans a gray area, and when pressed about copying, demo scene artists will usually mumble something about how everyone uses “references”. For people not generally involved in painting, this might sound plausible enough, but references aren’t the same as making copies of pre-existing art. References are an aid for visually understanding a subject and achieving realism, because nobody can perfectly draw, say, a train from memory alone.
Hergé was a stickler for realism and often did near-perfect reproductions of references in Tintin - but always in his own distinct “ligne claire” style.
Some will use existing photos, some will walk down to the local train station with a camera, others still will bring a sketchbook and make detailed pencil studies. If striving for accuracy and detail, photo references are invaluable. Sometimes an artist will work from a photo they’ve taken or commissioned themselves, thus being in control of the subject and composition. Anders Zorn and Pascal Dagnan-Bouveret are two of a plethora of classic painters who used photo references for some of their most recognizable works; Zorn himself was an avid photographer.
Norman Rockwell demonstrating his use of a Balopticon.
Famous Americana illustrator Norman Rockwell frequently used a Balopticon to project photos onto a canvas and traced the projection. He described this technique with no small amount of self-deprecation: “The Balopticon is an evil, inartistic, habit-forming, lazy and vicious machine. I use one often - and am thoroughly ashamed of it. I hide it whenever I hear people coming.” Yet, his personal style is unmistakable and the photo compositions were his own. Dutch renaissance master Vermeer is suggested to have used a similar technique with a camera obscura.
The key difference between a reference and a copy is that in a copy, the source is a work of art by someone else, and the original artist’s subject, style, intent, composition and choices are transferred onto the new work. Perfectly reproducing the Mona Lisa may take time and skill, but the reproduction is a copy, not an original work based on a reference. Trying to pass it off as your own is plagiarism, and this is what most sceners actually mean when they say “copy”.
To the left is a skillful 1994 pixel rendition by Tyshdomos of the caricature to the right, by Sebastian Krüger. The original was no doubt made using at least one reference. The pixel art version, while showing much more than just a shallow understanding of the source material, is still a copy of the style, intent and choices of Krüger. Tyshdomos usually credited the original artist in his images.
As opposed to the more traditional plagiarism on the scene, pre-existing digital images require no tedious manual transfer using a mouse. It’s simply a matter of scaling them down to a suitable retro resolution and adding a sprinkle of your own dithering to make it seem more handmade. Suddenly - as with scanning - the grind of the copy is no longer a factor, and the craft is seemingly reduced to covering up the picture’s origin.
In the present day, typical retro sceners are in their forties and fifties and have families, established careers and comfortable middle class salaries. The scene is no longer a place for cutthroat teenage social games, but an indulgent hobby and time sink of choice. It’s about creating for the sake of creating, for the love of the craft, for the joy of the process. It’s about getting better at something that is, ultimately, utterly inconsequential in the grand scheme of things. It’s even pointless as a middle class status marker: few people brag to their neighbours about having coded a texture-mapped cube in a peculiar graphics mode on a long forgotten home computer.
Most pixel artists have long since left the blatant plagiarism behind and are now accomplished, mature creators. They’re capable of thinking up original ideas and realizing them in their own, unique styles. As with any hobby, there’s still status to be had among the in-group, but the strict pecking order of teenagers has been replaced with a laid-back attitude of friendship, sharing and mutual appreciation of the demomaking craft in general.
Despite this, there are graphics artists who continue to plagiarize, and those who’ve started to rely on generative AI. Some are upfront about this, too, and clearly label AI generated images as such. Others tell outright lies or are very quiet or avoidant when discussing their process. Often, there’s a bit of manually added pixels in these pictures for good measure, like a sprig of parsley on a microwave meal being passed off as a labour of love.
Just like with copying, there’s an ongoing discussion about AI on the scene, and there are as many different views as there are sceners. The general consensus seems to be in the camp of honoring the craft, or at the very least practicing transparency about the creative process. This is reflected in the rules of most demo parties, which often explicitly state that the use of generative AI is forbidden - a rule that is seemingly hard to enforce and frequently broken.
Elements of Green, original pixel art by Prowler. In this timelapse we can follow the process from a pencil sketch (perhaps based on photo references) to finished piece, via both digital painting and traditional pixelling.
Some sceners claim that the end result is all that matters, and that discussing or even disclosing the process is pointless. Another view is that generative AI is just another tool, like a paint program, and that its usage is a natural progression for a culture that has always been about exploring the intersection of digital technology and art.
The Joy of Not Painting?
The scene - like creative communities in general - has always been full of contradictions and paradoxes, in views as well as methods. In some cases, what could be considered plagiarism is the central point of an entire body of work: Batman Group is a demo group that almost exclusively makes Batman-themed demos, showcasing astonishing skill in raw tech as well as aesthetics and storytelling. In other cases, it may be a question of satire or utilizing a culturally powerful pastiche. One of my own favourite demos of all time, Deep - The Psilocybin Mix, makes heavy use of (very apparent) photo montages. These are things both artists and audiences have to live and deal with on a case-to-case basis.
For me personally, generative AI ruins much of the fun. I still enjoy creating pixel art and making little animations and demos. My own creative process remains satisfying as an isolated activity. Alas, obvious AI generated imagery - as well as middle-aged men plagiarizing other, sometimes much younger, hobbyist artists - makes me feel disappointed and empty. It’s not as much about effort as it is about the loss of style and personality; soul, if you will. The result is defacement, to echo T. S. Eliot, rather than inspired improvement. Even in more elaborate AI-based works, it’s hard to tell where the prompt ends and the pixelling begins.
In the commercial world of late stage capitalism, I’d expect nothing less than cutting corners. For me, the scene is about something else. It’s a place of refuge from the constant churn of increased efficiency, and an escape from the sickening void of the online attention economy. It’s where we can spend months putting yet another row of moving pixels on the screen to break some old record, because the platform doesn’t change and nobody is paying us to be quick about it. It’s where I instinctively want to go for things that aren’t the result of a few minutes in front of DALL-E. I can get that everywhere else, at any time.
Farting around with Amigas in 2026 means actively choosing to make things harder for the sake of making things harder. Making that choice and still outsourcing the bulk of the craft and creative process is like claiming to be a passionate hobby cook while serving professionally catered dinners and pretending they’re your own concoctions.
There’s not much to be done about it, because the scene has no governing body or court of appeals - and I dearly hope it stays that way. I just can’t wrap my head around the point of using AI in this setting: It feels antithetical to a culture that so adamantly celebrates creativity, technical limitations, extremely specialized skills, and anti-commercial sharing of art and software.
What’s interesting is that those most reliant on AI and plagiarism seem to feel the same way. Otherwise, they wouldn’t be so secretive about it.
...
Read the original on www.datagubbe.se »
Starting with the M4 and including the new M5 generations of Apple Silicon, macOS no longer offers or allows full-resolution HiDPI 4k modes for external displays.
The maximum HiDPI mode available on a 3840x2160 panel is now just 3360x1890 - M2/M3 machines did not have this limitation.
With this regression Apple is leaving users to choose between:
Full screen real estate at 4k (3840x2160) with blurry text due to HiDPI being disabled.
Reduced screen real estate at 3.3k (3360x1890) with sharp text (HiDPI) but significantly less usable working space, and macOS’s UI looking ridiculously oversized.
The DCP (Display Coprocessor) reports identical capabilities on both M2 Max and M5 Max for the same display. The M5 Max hardware supports 8K (7680x4320) at 60Hz per Apple’s own specs. However, the M4/M5 generation appears to have introduced a new per-sub-pipe framebuffer budget system (IOMFBMaxSrcPixels) that caps the single-stream scaler path (sub-pipe 0) at 6720 pixels wide - exactly the backing store width for 3360x1890 HiDPI. The M2 Max used a completely different architecture with a flat per-controller budget of 7680 pixels wide, which is why it worked.
Both machines report identical DCP parameters for the LG display:
What: Wrote a display override plist to /Library/Displays/Contents/Resources/Overrides/DisplayVendorID-1e6d/DisplayProductID-7750 containing scale-resolutions entries for 7680x4320 HiDPI.
Result: No effect on M5 Max. The identical plist produces 3840x2160 HiDPI on M2 Max. WindowServer on M5 Max refuses to enumerate the mode regardless of plist content.
The override plist that works on M2 Max:
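The original plist isn't reproduced here; as an illustration of the format, the following hedged Python sketch generates an equivalent one. The 8-byte big-endian width/height layout of each scale-resolutions entry is community documentation, not an official Apple interface, and the vendor/product IDs are taken from the override path above.

```python
# Hedged sketch: generate a display override plist with a scale-resolutions
# entry requesting a 7680x4320 HiDPI backing store. The 8-byte big-endian
# width/height entry format is community-documented, not an Apple API.
import plistlib
import struct

def scale_entry(width: int, height: int) -> bytes:
    """One scale-resolutions entry: big-endian width, then height."""
    return struct.pack(">II", width, height)

override = {
    "DisplayVendorID": 0x1E6D,   # LG, per the override path in the article
    "DisplayProductID": 0x7750,
    "scale-resolutions": [scale_entry(7680, 4320)],
}
payload = plistlib.dumps(override)
# Destination (from the article):
# /Library/Displays/Contents/Resources/Overrides/DisplayVendorID-1e6d/DisplayProductID-7750
```

On M2 Max this class of override is enough to surface the 3840x2160 HiDPI mode; on M5 Max, WindowServer ignores it regardless of content.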
What: Wrote a patched EDID into the override plist’s IODisplayEDID key with:
Result: No effect with these values. However, waydabber (incredibly helpful BetterDisplay developer) has confirmed that software EDID overrides can work on M4 - he got an 8K framebuffer on a 4K TV by adding a valid 8K timing and defining it as the native resolution. The catch: even with a correct override, a 4K panel can’t actually accept an 8K signal, so this confirms the mechanism (scaled modes derive from the system’s idea of native resolution) without providing a practical fix.
What: Created a patched EDID with VIC 199 (7680x4320@60Hz) added to the CEA Video Data Block, keeping the preferred detailed timing at 3840x2160. Successfully flashed to the LG monitor’s EEPROM via BetterDisplay.
Result: The DCP read VIC 199 from the hardware EDID and updated its reported capabilities: MaxW changed from 3840 to 7680, MaxH from 2160 to 4320, and MaxActivePixelRate to 1,990,656,000. The DCP also allocated 2 pipes (PipeIDs=(0,2), MaxPipes=2) as it would for a real 8K display. However, the sub-pipe 0 framebuffer budget (MaxSrcRectWidthForPipe) remained at 6720, and no 3840x2160 HiDPI mode appeared.
A further attempt added a DisplayID Type I Detailed Timing for 7680x4320@30Hz marked as preferred and native. This did generate a 3840x2160 scale=2.0 mode in the CG mode list. However, when selected, macOS attempted to output 7680x4320 on the wire (since the EDID declared it as a supported output mode), which the LG could not display. A DisplayID Display Parameters block (declaring 7680x4320 as native pixel format without creating an output timing) did not generate any new modes.
What: Created a patched EDID binary with boosted range limits only (keeping preferred timing at native 3840x2160 to avoid breaking display output), attempted to flash to the LG monitor’s EEPROM via BetterDisplay’s “Upload EDID” feature.
Result: The range-limits-only flash did not change any DCP parameters. The DCP derives MaxActivePixelRate from the preferred timing’s pixel clock, not from the range limits. A subsequent flash with VIC 199 added to the Video Data Block was successful (see “EDID Hardware Flash” section above).
What: Attempted to modify the DCP’s DisplayHints dictionary and ConnectionMapping array directly in the IOKit registry using IORegistryEntrySetCFProperty, targeting higher MaxW, MaxH, and MaxActivePixelRate values.
Result: The DCP driver explicitly rejects userspace property writes with kIOReturnUnsupported (kern_return=-536870201). These properties are owned by the kernel-level AppleDisplayCrossbar driver and cannot be modified from userspace.
What: Used IOServiceRequestProbe to trigger the DCP to re-read display information after writing override plists.
Result: No effect on mode enumeration. The DCP re-reads from the physical display, not from software overrides.
What: Deleted ~/Library/Preferences/ByHost/com.apple.windowserver.displays.*.plist and attempted to restart WindowServer. Also performed a full reboot.
Result: killall WindowServer on macOS 26 does not actually restart WindowServer (no display flicker, no session interruption). Full reboot with the override plist in place still did not produce the 3840x2160 HiDPI mode. The cache was not the issue.
What: Disconnected the third display (U13ZA) to test whether the DCP’s bandwidth budget across display pipes was the constraint.
Result: No effect. With only 2 displays (LG + built-in), the mode list remained identical. The limitation is not related to the number of connected displays.
What: Considered switching from USB-C/DisplayPort to HDMI.
Result: Not attempted; HDMI 2.0 has less bandwidth (14.4 Gbps vs 25.92 Gbps on DP 1.4 HBR3), so would be the same or worse.
What: Used SLConfigureDisplayWithDisplayMode from the private SkyLight framework to attempt to directly apply a 3840x2160 HiDPI mode (7680x4320 pixel backing, scale=2.0) to the LG display. The mode was sourced from both the CG mode list and from other displays.
Result: Returns error code 1000 when the mode is not in the display’s own mode list. The SkyLight display configuration API validates modes against the same DCP-derived mode list as WindowServer. There is no private API path to bypass the mode list validation.
Where the limit is applied
The DCP reports identical capability parameters on both machines - MaxActivePixelRate, MaxW, MaxH, MaxTotalPixelRate all match. These come from the display’s EDID, so that’s expected.
The difference shows up in WindowServer’s mode list. On M2 Max, CGSGetNumberOfDisplayModes includes 3840x2160 at scale=2.0. On M5 Max, with the same DCP parameters and the same override plists, that mode doesn’t exist.
The IOMFBMaxSrcPixels property on the IOMobileFramebufferShim IOKit service exposes framebuffer size budgets. The M2 Max and M5 Max use fundamentally different structures here, which is the root cause of the regression.
On the M2 Max, every external display controller gets a flat MaxSrcRectWidth of 7680 and a MaxSrcRectTotal of 33,177,600 (exactly 7680 x 4320). The LG is assigned to PipeIDs=(1). With a 7680-pixel budget, 3840x2160 HiDPI (7680x4320 backing store) fits comfortably.
M5 Max restructured to per-sub-pipe budgets within each controller:
The 4 values in MaxSrcRectWidthForPipe are sub-pipes within each display controller, not separate display outputs. A single-stream 4K display only uses sub-pipe 0. Sub-pipes 1-3 are for multi-pipe configurations (8K displays use 2 sub-pipes simultaneously, which is why an 8K EDID causes the DCP to assign PipeIDs=(0,2) with MaxPipes=2).
Single-stream output (used by all standard displays)
Sub-pipe 0’s budget of 6720 pixels lines up exactly with the observed cap: 3360x1890 HiDPI needs a 6720x3780 backing store. For 3840x2160 HiDPI, the backing store would need to be 7680 pixels wide. Sub-pipes 1-3 have this budget, but they’re only accessible in multi-pipe mode for displays that genuinely output above 4K.
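The arithmetic behind that cap can be sanity-checked directly. A sketch, assuming the standard 2.0x HiDPI scale factor and the budget values observed above:

```python
# HiDPI backing-store arithmetic behind the observed cap.
# Budget values are from the IOMFBMaxSrcPixels observations above.
def backing_width(logical_w: int, scale: float = 2.0) -> int:
    """Backing store width for a logical (UI) width at a HiDPI scale."""
    return int(logical_w * scale)

M5_SUBPIPE0_BUDGET = 6720  # single-stream sub-pipe 0 on M5 Max
M2_FLAT_BUDGET = 7680      # flat per-controller budget on M2 Max

# 3360x1890 HiDPI fits the M5 sub-pipe 0 budget exactly...
assert backing_width(3360) == M5_SUBPIPE0_BUDGET
# ...but full 4K HiDPI needs a 7680-wide backing store, which only
# the M2-style flat budget accommodates.
assert backing_width(3840) > M5_SUBPIPE0_BUDGET
assert backing_width(3840) <= M2_FLAT_BUDGET
# The effective scaling ceiling on a 3840-wide panel: 6720/3840 = 1.75x.
assert M5_SUBPIPE0_BUDGET / 3840 == 1.75
```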
This property is set by the kernel-level IOMobileFramebufferShim driver and can’t be modified from userspace.
The budget is fixed at boot
Testing confirmed that MaxSrcRectWidthForPipe is set when the driver loads and does not change at runtime, regardless of what you do:
It’s possible the driver reads EDID content during early boot to determine these allocations (as waydabber’s analysis suggests), but that hasn’t been confirmed with a cold boot test using a modified EDID yet.
“Generally 3840x2160 HiDPI is not available with any M4 generation Mac on non-8K displays due to the new dynamic nature of how the system allocates resources. There might be exceptions maybe - when the system concludes that no other displays could be attached and there are resources left still for a higher resolution framebuffer. But normally the system allocates as low framebuffer size as possible, anticipating further displays to be connected and saving room for those.”
The IOMFBMaxSrcPixels data fits this description. The M5 Max supports up to 4 external displays, and the GPU driver pre-allocates framebuffer budgets across all pipes at boot to cover the chip’s maximum supported display configuration. Pipe 0 gets a reduced budget of 6720 to leave room for displays that could be plugged in. Even in clamshell mode with only the LG connected, the budget stays at 6720 - the driver doesn’t care how many displays are actually present.
* The DCP reports identical capabilities on M2 Max and M5 Max (same MaxW, MaxH, MaxActivePixelRate)
* The M2 Max uses a flat per-controller framebuffer budget (MaxSrcRectWidth=7680), giving every external display enough backing store width for 3840x2160 HiDPI
* The M5 Max restructured to per-sub-pipe budgets (MaxSrcRectWidthForPipe=(6720, 7680, 7680, 7680)), where the single-stream sub-pipe (the only one a 4K display can use) is capped at 6720
* This caps the backing store width and therefore caps HiDPI at 3360x1890 on M5 Max
* Disconnecting other displays, switching ports, or closing the laptop lid doesn't change the sub-pipe budgets
* Adding VIC 199 (8K) to the EDID changes DCP-reported MaxW/MaxH but doesn't affect the sub-pipe budgets
* Adding a DisplayID 7680x4320 timing creates a 3840x2160 scale=2.0 mode, but macOS tries to output 8K on the wire (treating it as a real output mode rather than a scaling mode), which the 4K panel can't display
The scaled resolution modes on M4/M5 are derived from whatever the system believes is the display’s native resolution. On M2/M3, the system would generate HiDPI modes up to 2.0x the native resolution (so 3840x2160 native got you a 7680x4320 backing store). On M4/M5, the single-stream sub-pipe budget caps this at around 1.75x. Whether this is a hardware constraint in the new scaler architecture or a conservative firmware allocation policy is unclear without Apple’s documentation - but the architectural change from M2’s flat budget to M5’s sub-pipe budget is the direct cause of the regression.
What could fix this
This needs a change from Apple in the IOMobileFramebufferShim driver’s sub-pipe budget allocation. Specifically, sub-pipe 0’s MaxSrcRectWidthForPipe needs to be 7680 instead of 6720 when a 3840x2160 display is connected. A few ways they could approach it:
* Raise sub-pipe 0's budget to 7680 for external display controllers (matching the M2 Max's flat allocation)
* Dynamically reallocate sub-pipe budgets based on actually connected displays and their capabilities
The M2 Max’s flat per-controller budget of 7680 proves the display controller hardware can handle it. The M5 Max’s multi-pipe sub-pipes (1-3) also have 7680, but these are only used for 8K multi-stream output. I’ve filed Apple Feedback FB22365722.
A 5K or 8K panel may not hit the exact same limit since its EDID native resolution is high enough that 1.75x scaling still provides a usable backing store.
Commands to reproduce this on any Mac. All except #3 work without special permissions. Command #6 is the most useful single diagnostic for this issue.
# 1. DCP rate limits and native caps per display
ioreg -l -w0 | grep -o '"MaxActivePixelRate"=[0-9]*\|"MaxW"=[0-9]*\|"MaxH"=[0-9]*' \
| paste - - - | sort -u
# 2. System profiler display summary
system_profiler SPDisplaysDataType
# 3. All HiDPI modes for a display (requires Screen Recording permission)
# Use BetterDisplay, SwitchResX, or any tool that calls
# CGSGetNumberOfDisplayModes / CGSGetDisplayModeDescriptionOfLength.
# Example output format shown below.
# 4. Display connection details and DisplayHints
ioreg -l -w0 | grep -B5 -A2 'MaxActivePixelRate' | grep -v EventLog
# 5. ConnectionMapping (per-pipe allocation)
ioreg -l -w0 | grep "ConnectionMapping"
# 6. Per-pipe framebuffer budgets (the key constraint on M4/M5)
ioreg -l -w0 | grep "IOMFBMaxSrcPixels"
Note: Commands 2, 4, 5 were captured without the LG connected. The mode list (command 3) was captured with the LG connected in a separate session.
Note: 3840x2160 at scale = 2.0 is present as the highest available HiDPI mode.
When LG is connected, reports identical values to M5 Max:
Graphics/Displays:
Apple M5 Max:
Chipset Model: Apple M5 Max
Type: GPU
Bus: Built-In
Total Number of Cores: 40
Vendor: Apple (0x106b)
Metal Support: Metal 4
Displays:
LG HDR 4K:
Resolution: 6720 x 3780
UI Looks like: 3360 x 1890 @ 60.00Hz
Main Display: Yes
Mirror: Off
Online: Yes
Rotation: Supported
Color LCD:
Display Type: Built-in Liquid Retina XDR Display
Resolution: 3456 x 2234 Retina
Mirror: Off
Online: Yes
Automatically Adjust Brightness: Yes
Connection Type: Internal
U13ZA:
Resolution: 3840 x 2400 (WQUXGA)
UI Looks like: 1920 x 1200 @ 60.00Hz
Mirror: Off
Online: Yes
Rotation: Supported
...
Read the original on smcleod.net »
Fifteen years ago today, I posted a thread on the Overclock.net forums. I was sixteen, I had an HP Compaq TC4400 that I’d convinced my parents would “improve my school work”, and I was frustrated that Firefox didn’t have an official 64-bit build. So I compiled one myself, called it Waterfox, stuck it on SourceForge and went back to my A levels.
Within a week it had 50,000 downloads, completely unexpected. Frustratingly, being on an island in the Mediterranean meant there was no support network or anyone to turn to with regards to “what’s next”. Had I been stateside, with the infrastructure and institutional knowledge of “tech”, who knows - I might’ve had a guiding hand on how to manage something like this and work with the momentum. But alas, I would have to learn a lot of painful lessons myself.
Fast forward to today, 15 years later, and Waterfox is still here. So am I, albeit a bit older and significantly more tired. At best estimates, Waterfox probably has around 1M monthly active users.
If you go and look at that original OCN thread, it’s a very different world. People are talking about Silverlight support, MSVCR100.dll errors, and Peacekeeper benchmark scores. Someone asks for a 64-bit Chromium build and the thread title gets updated with every new Firefox version, all the way up to 56.0.2.
Originally, and under the username MrAlex, I was only trying to earn enough forum reputation so I could trade and buy second hand PC parts. I didn’t have a plan and I certainly didn’t have a business model. I just thought it was cool that you could take someone else’s source code, compile it with some changes, and end up with something different. Open source is a wonderful thing when you’re sixteen and don’t know anything about the software development lifecycle, yearning for knowledge.
You can scour the internet, read this blog, or view the media carousel at the bottom for the full story from then until now, but the short version: Waterfox grew - a lot - to over 25 million lifetime downloads, and that figure is from calculations about seven years ago, so the real number is certainly higher. I went to university, studying Electronics Engineering at York before a masters in Software Engineering at Oxford. I tried to start a charitable search engine, which failed as badly run startups tend to do. Ecosia reached out and something nice happened - Waterfox users helped plant over 350,000 trees in a single year.
Then System1 came along. I joined them, served as VP of Engineering, and helped scale the browser engineering team through a NYSE IPO - a genuine education, though companies change and focus shifts.
So I took Waterfox back under BrowserWorks, independent once again. The three years since have been simultaneously the most difficult and the most rewarding of Waterfox’s existence.
I’m not going to pretend the economics of running a privacy focused independent browser are great, because they’re really not. When Bing terminated all third party search contracts it hit hard - search partnerships are basically how independent browsers survive, and revenue has been poor since. There have been a few months in the red recently.
Other ways browsers make money just feel icky, and it’s not something that Waterfox stands for either.
But, pain and all, I keep coming back. Every time I think about stepping away, someone sends a kind message through the donation page, or I see a thread somewhere of someone discovering Waterfox for the first time and being pleasantly surprised. There’s a community here that cares, and I care about it.
I want users to know that whatever future steps I’ll take, they’ll always be with Waterfox and its sustainability in mind.
This year will see Waterfox shipping a native content blocker built on Brave’s adblock library - and it’s worth explaining what that means and why.
The blocker runs in the main browser process rather than as a web extension, which means it isn’t subject to the limitations that extension based blockers like uBlock Origin face. It’s faster, more tightly integrated, and doesn’t depend on a separate extension process or require us to constantly pull in upstream updates. Brave’s adblock library is also mature - it has paid engineers working on it, a wide filterset, and crucially it’s licensed under MPL2, the same licence as Waterfox, which makes it a natural fit. uBlock Origin, as good as it is, carries a GPLv3 licence that would’ve created real compatibility headaches.
For how it works in practice: by default, text ads will remain visible on our default search partner’s page - currently Startpage. The idea is that this is what will keep the lights on. This mirrors the approach Brave takes with their search partner.
Users who want to disable that entirely can do so with a single toggle in settings, and it has nothing to do with any of Brave’s crypto or rewards ecosystem - we’re just using the adblocking library. Everyone else gets a fast, native adblocker out of the box, no extension required.
If you already use an adblocker, don’t worry, you can carry on using it. This will be enabled for new users or users who aren’t already using an adblocker.
In the meanwhile, Waterfox’s membership of the Browser Choice Alliance alongside Google and Opera, is pushing for fair competition and actual user choice in the browser market.
And we still don’t have AI in the browser. That hasn’t changed. The browser’s job is to load web pages, keep your data private, and get out of the way. It seems other browsers have forgotten that.
Oh and one last thing - distribution is important too, so there’s a bigger focus on different packages and architecture support (Linux, you are such a pain to target) - more specifically for ARM64.
I’d like to think so. The browser market is more diverse than it’s ever been in terms of soft forks - everyone and their mum seems to be launching a variation of Firefox. Running an independent browser is getting harder, not easier. But there are more people who care about privacy now than there were when I was compiling a blue Firefox on a tablet PC in my bedroom. More people who want software that respects them.
Waterfox started because a sixteen year old wanted a faster browser. Fifteen years later, it’s still here because enough people want a browser that works for them - not for AI companies, and not for anyone else.
Thanks to everyone who’s been part of this - from the OCN community who gave those early builds a chance, to the people who send donations with messages that make my day, to the contributors who submit patches and file bugs. This project has always been bigger than me, even when I’m the only one working on it.
Here’s to the next 15! 🍻
I also wouldn’t be where I am without the constant moral support of my parents, Angela & Lakis, who since day dot have been proud of everything I’ve done, even when it’s felt like I was failing and flailing. My friends, too numerous to count, but especially Lee, who I’m surprised hasn’t once told me to shut up about my trials and tribulations. And finally, my wonderful girlfriend and partner Lucy, who has been giving helpful design tips, because while I have wonderful taste (only half joking) my creative talent is unfortunately lacking.
Read the media coverage Waterfox has received over the last 15 years.
...
Read the original on www.waterfox.com »
The federal government released an app yesterday, March 27th, and it’s spyware.
The White House app markets itself as a way to get “unparalleled access” to the Trump administration, with press releases, livestreams, and policy updates. The kind of content that every RSS feed on the planet delivers with one permission: network access. But the White House app, version 47.0.1 (because subtlety died a long time ago), requests precise GPS location, biometric fingerprint access, storage modification, the ability to run at startup, draw over other apps, view your Wi-Fi connections, and read badge notifications. It also ships with 3 embedded trackers including Huawei Mobile Services Core (yes, the Chinese company the US government sanctioned, shipping tracking infrastructure inside the sitting president’s official app), and it has an ICE tip line button that redirects straight to ICE’s reporting page.
This thing also has a “Text the President” button that auto-fills your message with “Greatest President Ever!” and then collects your name and phone number. There’s no specific privacy policy for the app, just a generic whitehouse.gov policy that doesn’t address any of the app’s tracking capabilities.
The White House app might actually be one of the milder ones. I’ve been going through every federal agency app I can find on Google Play, pulling their permissions from Exodus Privacy (which audits Android APKs for trackers and permissions), and what I found deserves its own term. I’m calling it Fedware.
Ok so let me walk you through what the federal government is running on your phone.
The FBI’s app, myFBI Dashboard, requests 12 permissions including storage modification, Wi-Fi scanning, account discovery (it can see what accounts are on your device), phone state reading, and auto-start at boot. It also contains 4 trackers, one of which is Google AdMob, which means the FBI’s official app ships with an ad-serving SDK while also reading your phone identity. From what I found, the FBI’s news app has more trackers embedded than most weather apps.
The FEMA app requests 28 permissions including precise and approximate location, and has gone from 4 trackers in older versions down to 1 in v3.0.14. Twenty-eight permissions for an app whose primary function is showing you weather alerts and shelter locations. To put that in context, the AP News app delivers the same kind of disaster coverage with a fraction of the permissions.
IRS2Go has 3 trackers and 10 permissions in its latest version, and according to a TIGTA audit, the IRS released this app to the public before the required Privacy Impact Assessment was even signed, which violated OMB Circular A-130. The app shares device IDs, app activity, and crash logs with third parties, and TIGTA found that the IRS never confirmed that filing status and refund amounts were masked and encrypted in the app interface.
MyTSA comes in lighter with 9 permissions and 1 tracker, but still requests precise and approximate location. The TSA’s own Privacy Impact Assessment says the app stores location locally and claims it never transmits GPS data to TSA. I’ll give them credit for documenting that, because most of these apps have privacy policies that read like ransom notes.
CBP Mobile Passport Control is where things get genuinely alarming. This one requests 14 permissions including 7 classified as “dangerous”: background location tracking (it follows you even when the app is closed), camera access, biometric authentication, and full external storage read/write. And the whole CBP ecosystem, from CBP One to CBP Home to Mobile Passport Control, feeds data into a network that retains your faceprints for up to 75 years and shares it across DHS, ICE, and the FBI.
The government also built a facial recognition app called Mobile Fortify that ICE agents carry in the field. It draws from hundreds of millions of images across DHS, FBI, and State Department databases. ICE Homeland Security Investigations signed a $9.2 million contract with Clearview AI in September 2025, giving agents access to over 50 billion facial images scraped from the internet. DHS’s own internal documents admit Mobile Fortify can be used to amass biographical information of “individuals regardless of citizenship or immigration status”, and CBP confirmed it will “retain all photographs” including those of U. S. citizens, for 15 years.
Photos submitted through CBP Home, biometric scans from Mobile Passport Control, and faces captured by Mobile Fortify all feed this system. And the EFF found that ICE does not allow people to opt out of being scanned, and agents can use a facial recognition match to determine your immigration status even when other evidence contradicts it. A U. S.-born citizen was told he could be deported based on a biometric match alone.
SmartLINK is the ICE electronic monitoring app, built by BI Incorporated, a subsidiary of the GEO Group (a private prison company that profits directly from how many people ICE monitors), under a $2.2 billion contract. The app collects geolocation, facial images, voice prints, medical information including pregnancy data, and phone numbers of your contacts. ICE’s contract gives them “unlimited rights to use, dispose of, or disclose” all data collected. The app’s former terms of service allowed sharing “virtually any information collected through the application, even beyond the scope of the monitoring plan.” SmartLINK went from 6,000 users in 2019 to over 230,000 by 2022, and in 2019, ICE used GPS data from these monitors to coordinate one of the largest immigration raids in history, arresting around 700 people across six cities in Mississippi.
And if you think your location data is safe because you use regular apps and avoid government ones, the federal government is buying that data too. Companies like Venntel collect 15 billion location points from over 250 million devices every day through SDKs embedded in over 80,000 apps (weather, navigation, coupons, games). DHS, FBI, DOD, and the DEA purchase this data without warrants, creating a constitutional loophole around the Supreme Court’s 2018 Carpenter v. United States ruling that requires a warrant for cellphone location history. The Defense Department even purchased location data from prayer apps to monitor Muslim communities. Police departments used similar data to track racial justice protesters.
And then there’s the IRS-ICE data sharing deal from April 2025. The IRS and ICE signed a Memorandum of Understanding allowing ICE to receive names, addresses, and tax data for people with removal orders. ICE submitted 1.28 million names. The IRS erroneously shared the data of thousands of people who should never have been included. The acting IRS Commissioner, Melanie Krause, resigned in protest. The chief privacy officer quit. One person leaving changes nothing about the institution, and the data was already out the door. A federal judge blocked further sharing in November 2025, ruling it likely violates IRS confidentiality protections, but by then the IRS was already building an automated system to give ICE bulk access to home addresses with minimal human oversight. The court order is a speed bump, and they’ll find another route.
The apps, the databases, and the data broker contracts all feed the same pipeline, and no single agency controls it because they all share it.
The GAO reported in 2023 that nearly 60% of 236 privacy and security recommendations issued since 2010 had still not been implemented. Congress has been told twice, in 2013 and 2019, to pass comprehensive internet privacy legislation. It has done neither. And it won’t, because the surveillance apparatus serves the people who run it, and the people who run it write the laws. Oversight is theater. The GAO issues a report, Congress holds a hearing, everyone performs concern for the cameras, and then the contracts get renewed and the data keeps flowing. It’s working exactly as designed.
The federal government publishes content available through standard web protocols and RSS feeds, then wraps that content in applications that demand access to your location, biometrics, storage, contacts, and device identity. They embed advertising trackers in FBI apps. They sell the line that you need their app to receive their propaganda while the app quietly collects data that flows into the same surveillance pipeline feeding ICE raids and warrantless location tracking. Every single one of these apps could be replaced by a web page, and they know that. The app exists because a web page can’t read your fingerprint, track your GPS in the background, or inventory the other accounts on your device.
You don’t need their app. You don’t need their permission to access public information. You already have a browser, an RSS reader, and the ability to decide for yourself what runs on your own hardware. Use them.
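To make that point concrete, here is a minimal sketch of an RSS reader built from Python's standard library alone. No SDK, no trackers, and no capability beyond plain network access; no particular feed URL is assumed, since any agency feed will do.

```python
# Minimal RSS reader: everything a government "news app" actually needs,
# built from the standard library. No GPS, no biometrics, no trackers.
import urllib.request
import xml.etree.ElementTree as ET

def parse_feed(xml_bytes: bytes, limit: int = 10) -> list[tuple[str, str]]:
    """Return (title, link) pairs for the newest items in an RSS document."""
    root = ET.fromstring(xml_bytes)
    items = root.findall(".//channel/item")[:limit]
    return [(i.findtext("title", ""), i.findtext("link", "")) for i in items]

def read_feed(url: str, limit: int = 10) -> list[tuple[str, str]]:
    """Fetch a feed over HTTP(S) -- the only 'permission' this requires."""
    with urllib.request.urlopen(url) as resp:
        return parse_feed(resp.read(), limit)
```

That is the entire surface area the content actually demands; everything beyond it in the apps above is collection.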
...
Read the original on www.sambent.com »
The measure, spearheaded by state Rep. Liz Berry (D-Seattle), outlaws noncompete agreements: in general, contracts that let employers forbid workers from creating or joining a competing business for a set amount of time.
Industries that utilize noncompete agreements, otherwise known as restrictive covenants, include technology, health care, finance and sales. The law, signed Monday, takes effect on June 30, 2027.
“Washington state is standing up for workers,” Berry said in a news release published Wednesday. “If you want to take a new job with better pay or leave to start your own company, your old job shouldn’t be able to block you from pursuing your dream.”
On the effective date, restrictive covenants will be unenforceable for all Washington-based workers and businesses, according to the new law. New noncompete agreements are illegal. Employers must notify current and former workers in writing about any voided noncompete agreements by Oct. 1, 2027.
The measure builds on a 2019 state law that limited noncompete agreements to employees who earned more than about $126,859 and contractors who made more than around $317,147, according to the 2026 earnings thresholds posted by the Washington State Department of Labor and Industries.
The state’s latest approach echoes a decision made in 2024 under former President Joe Biden’s administration to prohibit noncompete agreements across the U.S. However, the Federal Trade Commission rolled back the ban this year.
“After the Non-Compete Rule was issued, several employers and trade groups filed lawsuits challenging it,” the agency wrote in a rule published in February. “Federal district courts in three jurisdictions issued opinions in lawsuits challenging the Non-Compete Rule.”
In Washington, the new law also addresses nonsolicitation agreements, which bar former workers from courting clients and co-workers at their past workplaces.
Nonsolicitation agreements are not the same as noncompete agreements, and they are not prohibited. “However, the definition of (the) nonsolicitation agreement must be narrowly construed,” per the law.
Locally, attorneys are providing guidance to workplaces about the new measure.
“Washington now joins a small but growing number of states that have declared non-competition covenants void and unenforceable,” Alex Cates, senior counsel at law firm Holland and Knight, wrote in an advisory Tuesday. “This is a major change.”
States with full noncompete bans include California, North Dakota, Minnesota and Oklahoma, per the Economic Innovation Group, a bipartisan public policy organization.
...
Read the original on www.seattletimes.com »
webminal.org runs on a single CentOS Linux box with 8GB RAM. That’s it. No Kubernetes, no microservices, no auto-scaling. One server since 2011. It has survived:
* That one time in 2017 when a Spanish tech blog sent 10,000 users in one day
* My friend Freston’s insistence that Slackware is the only real distro
The idea was simple. I was sitting at my Windows machine at work, wanting to learn Linux. What if I could open a browser, practice on a real Linux terminal - no “Run” button, no “Execute” button, just a real server - gain the confidence, and then spin my chair to a real Linux machine and actually use it? No fear, no hesitation, because I already know what I’m doing.
We just gave the entire site a redesign. Every page, from scratch. Here’s what changed:
Root Lab - practice real sysadmin skills with full root access. We use User Mode Linux to give you a complete kernel with real block devices. Practice fdisk, LVM, RAID, mkfs, systemctl, crontab, firewalld, SSH keys, awk & sed - things you can’t do on a shared terminal.
Live command ticker - that scrolling bar on the homepage? It’s real. Powered by eBPF (execsnoop) tracing commands in real-time. 28 million and counting.
Linode → DigitalOcean → AWS → GCP → OVH → IBM Cloud → Linode
Full circle. Along the way we built: a browser IDE with VS Code/Theia, Docker-over-LXC root environments, Asciinema screencasting, a shared file pool, ttyrec-to-GIF publishing, a custom useradd binary (the default was too slow with 300k+ users), and an OpenVZ-based VM provisioning system. Some still running, some killed by time or money.
I’m from India. Freston is from the Netherlands. We met on LinuxForums.org in 2010. Until 2015, we had never seen each other’s face — not even on Skype. All communication happened over SSH into our server in a screen session.
$ screen -x chat
$ cat > /dev/null
hey, should we add MySQL support?
That’s how an entire platform was built. No Slack, no Zoom, no Jira tickets. Just two guys writing messages in a terminal.
Python: 2.7 (yes, really)
Framework: Flask 0.12.5
Terminal: Shellinabox (abandoned in 2017, still works perfectly)
Root labs: User Mode Linux (a technology from 2001)
Monitoring: eBPF/execsnoop (the only modern thing)
Database: MySQL on a server that survived a fire
Frontend: No React, no Vue, no npm. Just HTML and inline CSS.
Every tech conference talk would tell you this stack is wrong. But it serves 500k users and has been up for 15 years.
We tried replacing Shellinabox with a modern WebSocket-based terminal. It lasted a few hours in production before users reported blank screens and Firefox incompatibility.
Shellinabox is from 2005. It’s ugly, it’s slow, and it works through every firewall, proxy, and corporate network on earth. We switched back. Sometimes the old thing is the right thing.
Everyone uses Docker. We use User Mode Linux — a full Linux kernel running in userspace, created by Jeff Dike in 2001.
Why? Because when a student types fdisk /dev/sdb, they need a real block device. Docker can’t give you that. UML can.
* Copy-on-write overlay - one golden image shared by everyone
When the student types poweroff, the UML exits, and they’re back in their normal shell. Total isolation. Zero risk to the host.
The COW overlay means 100 concurrent users add only ~2GB of disk. The golden image is shared.
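The post doesn’t show how such a boot is wired up, but with a standard user-mode-linux build it might look roughly like this (paths, memory size, and the `user42` id are illustrative, not Webminal’s actual setup):

```shell
# Each user boots the shared golden image through a private
# copy-on-write overlay: UML's ubd driver takes "cowfile,backingfile",
# so all writes land in the per-user .cow file and the golden
# image is never modified.
linux ubd0=/tmp/user42.cow,/srv/uml/golden.img \
      mem=256M umid=user42
```

When the student runs poweroff inside the guest, the `linux` process simply exits, which matches the “back in their normal shell” behavior described above.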
That 28,469,041 commands executed counter on the homepage? It’s real. We use execsnoop2 from bcc-tools.
The live ticker you see scrolling on the homepage — those are real commands being typed by real users right now. Anonymized, safe commands only. No arguments, no paths, no passwords. Just $ ls, $ gcc, $ vim flowing by like a heartbeat.
The Linux kernel itself tells us when someone runs their first ls.
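The post doesn’t show the filter between raw execsnoop output and the safe ticker; a minimal sketch of that kind of anonymization might look like this (`anonymize` is a hypothetical helper, not Webminal’s actual code):

```shell
# Reduce a captured command line to just the program name:
# drop everything after the first space (arguments, file paths),
# then strip any leading directory with basename.
anonymize() {
    basename "${1%% *}"
}

# e.g. pipe execsnoop's output through it line by line:
#   execsnoop | while read -r line; do anonymize "$line"; done
anonymize 'vim /home/user/.ssh/id_rsa'   # prints: vim
```

Stripping arguments before they ever reach the ticker is what makes it safe to display live: a path or password typed as an argument never leaves the filter.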
“I am a Windows system admin without a lot of free time and this site has really helped me get familiar with Linux. I even use the site on my tablet. The tutorials you offer are really great too. Thanks for all you do.”
“I am a student studying Electronic Engineering in Korea. I am studying Linux by your site and it really helped me a lot!”
“The tutorial is great! I also laughed at some points. Your site is absolutely amazing. Please make more! Keep the great work up!”
Webminal has zero revenue. No ads, no tracking, no VC funding. I pay for the server from my savings. I’ve spent more money on this project than on personal or family stuff.
More than once, I thought about killing it. 15 years is a long time. There were months when I was between jobs, watching my savings shrink, and the server bill kept coming. Every month I’d think - maybe this is the month I pull the plug. Then I’d get a job, the thought would go away, and Webminal would live another year. I applied to YC. Rejected. Tried to monetize - PayPal, Stripe, paid plans. Never worked. The users who need Webminal most are students who can’t afford $4/month. So it stays free.
500,000 people have typed their first ls on Webminal. Some of them are sysadmins now. Some run their own servers. One of them probably manages more infrastructure than I ever will.
As long as it helps a single student, Webminal will run.
If you want to help us upgrade from 8GB to 128GB of RAM so more students can run root labs at the same time, every bit counts: Sponsor @Lakshmipathi on GitHub Sponsors · GitHub
...
Read the original on community.webminal.org »