10 interesting stories served every morning and every evening.
When I moved to a new apartment with my family, the cable company we were used to wasn’t available. We had to settle for Dish Network. I wasn’t too happy about making that switch, but something on their website caught my attention. For an additional $5 a month, I could have access to DVR. I switched immediately.
This was 2007. DVR was not new, but it wasn’t commonly bundled with set-top boxes. TiVo was still the popular way to record, pause, and rewind live TV. We received two set-top boxes, one for each room with a TV, and three remotes. Two remotes had IR (infrared) blasters and, surprisingly, one RF (radio frequency) remote.
After using the RF remote, I wondered: why would anyone ever use an IR remote again? You didn’t need a direct line of sight with the device you were controlling. I could stand in the kitchen and control the TV. It was amazing. But with the convenience of RF came a problem IR users never had to worry about: interference.
After several months of enjoying my service, one of my neighbors, the loudest in the building, also switched to Dish Network. And he also got the RF remote. This was the type of neighbor who would leave the house with the TV on, volume blasting.
One day, I was in the living room watching TV when the channel just flipped. I must have accidentally hit a button, so I changed it back. But not a few seconds later, the channel changed again. Then the volume went up. I figured my sister must have had the RF remote and was messing with me. But no, the remote was in my hand. I assumed something was wrong with it.
The whole time I was watching TV, the channels kept randomly switching. I banged the remote on the table a couple of times; it still switched. I removed the batteries from the remote; it still switched. I unplugged the device for a few minutes, plugged it back in, and… it still switched. Frustrated, I went through the device settings and disabled the RF remote. That’s when it finally stopped. I wasn’t happy with this solution, but it let me watch TV until I figured something out.
One evening, when everyone was asleep and the neighbor was watching a loud TV show, I decided to diagnose the issue. The moment I pressed the power button on the RF remote, my TV and set-top box turned on, and the neighbor’s TV went silent. “Fuck!” I heard someone say. I was confused. Did I just do that? The TV turned back on, the volume went up. I walked to the window armed with the remote. I counted to three, then pressed the power button. My neighbor’s TV went silent. He growled.
I am the captain now.
Every time he turned the TV on, I pressed the power button again and his device went off. Well, what do you know? We had interference somehow. Our remotes were set up to operate at the same frequency. Each remote controlled both devices.
But I’m not that kind of neighbor. I wasn’t going to keep messing with him. Instead, I decided I would pay him a visit in the morning and explain that our remotes were tuned to the same frequency. I would bring the RF remote with me to give him a demo. I was going to be a good neighbor.
In the morning, I went downstairs, remote in hand. I knocked, and a gentleman in his forties answered the door. I had rehearsed my speech and presentation. This would be a good opportunity to build rapport and have a shared story. Maybe he would tell me how he felt when the TV went off, how he thought there was a ghost in the house or something. But that’s not what happened.
“Hi, I’m Ibrahim. Your upstairs neighbor…” I started, and was interrupted almost immediately. “Whatever you are selling,” he yelled, “I’m not buying,” and he closed the door in my face. I knocked a second time, because obviously there was a misunderstanding. He never answered. Instead, the TV turned on and a movie played at high volume. So much for my prepared speech.
The RF settings on my set-top box remained turned off. My family never discovered its benefit anyway; they always pointed at the box when pressing the buttons. It wasn’t much of an inconvenience. In fact, I later found in the manual that you could reprogram the device and remote to use a different frequency. I did not reprogram my remote. Instead, my family used the two IR remotes, and I brought the RF remote into my bedroom, where it permanently remained on my nightstand.
Why in the bedroom? Because I decided to teach my neighbor some good manners. Whenever he turned up his volume, I would simply turn off his device. I would hear his frustration and his attempts at solving the problem. Like a circus animal trainer, I remained consistent. If the volume of his TV went above what I imagined to be 15 to 20, I would press the power button. It became a routine for weeks. Some nights were difficult: I would keep the remote under my pillow, battling my stubborn neighbor all night.
One day, I noticed that I hadn’t pressed the button in days. I opened the window and I could still hear the faint sound of his TV. Through trial and error, he learned the lesson. If the volume remained under my arbitrary threshold, the TV would remain on. But as soon as he passed that threshold, the device would turn off.
Sometimes he would have company, and there would be noise coming out of his apartment. I used the one tool in my toolbox to send him a message: turn off the TV. All of a sudden, my neighbor and his guests would be reminded of the unspoken rules and become mindful of their neighbors.
Maybe somewhere on the web, in some obscure forum, someone asked the question: “Why does my set-top box turn off when I increase the volume?” Well, it might be 18 years too late, but there’s your answer. There is a man out there who religiously sets his volume to 18. He doesn’t quite know why. That’s Pavlovian conditioning at its best.
...
Read the original on idiallo.com »
...
Read the original on notepad-plus-plus.org »
That’s right — this little device is what stood between me and the ability to run an even older piece of software that I recently unearthed during an expedition of software archaeology.
For a bit more background, I was recently involved in helping a friend’s accounting firm move away from an extremely legacy software package that they had locked themselves into for the last four decades.
This software was built using a programming language called RPG (“Report Program Generator”), which is older than COBOL (!), and was used with IBM’s midrange computers such as the System/3, System/32, and all the way up to the AS/400. Apparently, RPG was subsequently ported to MS-DOS, so that the same software tools built with RPG could run on personal computers, which is how we ended up here.
This accounting firm was actually using a Windows 98 computer (yep, in 2026), running the RPG software inside a DOS console window. And it turned out that running this software requires a special hardware copy-protection dongle attached to the computer’s parallel port! This was a relatively common practice in those days, particularly among “enterprise” software vendors who wanted to protect their very important™ software from unauthorized use.
Sadly, most of the text and markings on the dongle’s label have been worn or scratched off, but we can make out several clues:
The words “Stamford, CT”, and what’s very likely the logo of a company called “Software Security Inc”. The only evidence we can find of this company’s existence is this record of them exhibiting their wares at SIGGRAPH conferences in the early 1990s, along with several patents issued to them relating to software protection.
A word that seems to say “RUNTIME”, which will become clear in a bit.
My first course of action was to take a disk image of the Windows 98 PC that was running this software, and get it running in an emulator, so that we could see what the software actually does, and perhaps export the data from this software into a more modern format, to be used with modern accounting tools. But of course all of this requires the hardware dongle; none of the accounting tools seem to work without it plugged in.
Before doing anything, I looked through the disk image for any additional interesting clues, and found plenty of fascinating (and archaeologically significant?) stuff:
We’ve got a compiler for the RPG II language (excellent!), made by a company called Software West Inc.
Even better, there are two versions of the RPG II compiler, released on various dates in the 1990s by Software West.
We’ve got the complete source code of the accounting software, written in RPG. It looks like the full accounting package consists of numerous RPG modules, with a gnarly combination of DOS batch files for orchestrating them, all set up as a “menu” system for the user to navigate using number combinations. Clearly the author of this accounting system was originally an IBM mainframe programmer, and insisted on bringing those skills over to DOS, with mixed results.
I began by playing around with the RPG compiler in isolation, and I learned very quickly that it’s the RPG compiler itself that requires the hardware dongle, and then the compiler automatically injects the same copy-protection logic into any executables it generates. This explains the text that seems to say “RUNTIME” on the dongle.
The compiler consists of a few executable files, notably RPGC.EXE, which is the compiler, and SEU.EXE, which is a source editor (“Source Entry Utility”). Here’s what we get when we launch SEU without the dongle, after a couple of seconds:
A bit rude, but this gives us an important clue: the program must be trying to communicate over the parallel port for a few seconds (which gives us an opportunity to pause it for debugging and see what it’s doing during that time), and then it exits with a message (which we can now find in a disassembly of the program, and trace how it gets there).
A great tool for disassembling executables of this vintage is Reko. It understands 16-bit real mode executables, and even attempts to decompile them into readable C code that corresponds to the disassembly.
And so, looking at the decompiled/disassembled code in Reko, I expected to find in and out instructions, which would be the telltale sign of the program trying to communicate with the parallel port through the PC’s I/O ports. However… I didn’t see an in or out instruction anywhere! But then I noticed something: Reko disassembled the executable into two “segments”: 0800 and 0809, and I was only looking at segment 0809.
If we look at segment 0800, we see the smoking gun: in and out instructions, meaning that the copy-protection routine is definitely here, and best of all, the entire code segment is a mere 0x90 bytes, which suggests that the entire routine should be pretty easy to unravel and understand. For some reason, Reko was not able to decompile this code into a C representation, but it still produced a disassembly, which will work just fine for our purposes. Maybe this was a primitive form of obfuscation from those early days, which is now confusing Reko and preventing it from associating this chunk of code with the rest of the program… who knows.
Here is a GitHub Gist with the disassembly of this code, along with my annotations and notes. My x86 assembly knowledge is a little rusty, but here is the gist of what this code does:
It’s definitely a single self-contained routine, intended to be called using a “far” CALL instruction, since it returns with a RETF instruction.
It begins by detecting the address of the parallel port, by reading the BIOS data area. If the computer has more than one parallel port, the dongle must be connected to the first parallel port (LPT1).
It performs a loop where it writes values to the data register of the parallel port, and then reads the status register, and accumulates responses in the BH and BL registers.
At the end of the routine, the “result” of the whole procedure is stored in the BX register (BH and BL together), which will presumably be “verified” by the caller of the routine.
Very importantly, there doesn’t seem to be any “input” into this routine. It doesn’t pop anything from the stack, nor does it care about any register values passed into it. Which can only mean that the result of this routine is completely constant! No matter what complicated back-and-forth it does with the dongle, the result of this routine should always be the same.
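The port-detection step can be illustrated concretely. On a real-mode PC, the BIOS data area at segment 0040h stores the base I/O addresses of LPT1–LPT3 as 16-bit little-endian words at offsets 08h–0Dh, which is what a routine like this reads. A rough, stdlib-only Python sketch (the memory dump here is synthetic):

```python
import struct

def lpt_base_addresses(mem: bytes) -> list[int]:
    """Extract the LPT1-LPT3 base I/O ports from a real-mode memory dump.

    The BIOS data area lives at physical address 0x400 (segment 0040h);
    the parallel port base addresses are 16-bit little-endian words at
    offsets 0x08, 0x0A, and 0x0C. A value of 0 means "no port present".
    """
    return [struct.unpack_from("<H", mem, 0x400 + 0x08 + 2 * i)[0] for i in range(3)]

# Synthetic low-memory dump with LPT1 at the classic 0x378:
mem = bytearray(0x500)
struct.pack_into("<H", mem, 0x408, 0x378)
print([hex(p) for p in lpt_base_addresses(bytes(mem))])  # ['0x378', '0x0', '0x0']
```

This also explains the "must be on LPT1" behavior: the routine presumably just takes the first word of that table.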
With the knowledge that this routine must exit with some magic value stored in BX, we can now patch the first few bytes of the routine to do just that! Not yet knowing which value to put in BX, let’s start with 1234:
BB 34 12 MOV BX, 1234h
CB RETF
Only the first four bytes need patching — set BX to our desired value, and get out of there (RETF). Running the patched executable with these new bytes still fails (expectedly) with the same message of “No dongle, no edit”, but it fails immediately, instead of after several seconds of talking to the parallel port. Progress!
Stepping through the disassembly more closely, we get another major clue: The only value that BH can be at the end of the routine is 76h (this is hard-coded into the routine). So, our total value for the magic number in BX must be of the form 76xx. In other words, only the BL value remains unknown:
BB __ 76 MOV BX, 76__h
CB RETF
Since BL is an 8-bit register, it can only have 256 possible values. And what do we do when we have 256 combinations to try? Brute-force it! I whipped up a script that plugs a value into that particular byte (from 0 to 255), programmatically launches the executable in DOSBox, and observes the output. Lo and behold, it worked! The brute forcing didn’t take long at all, because the correct number turned out to be… 6. Meaning the total magic number in BX should be 7606h:
BB 06 76 MOV BX, 7606h
CB RETF
Bingo!
And then, proceeding to examine the other executable files in the compiler suite, the parallel port routine turns out to be exactly the same. All of the executables have the exact same copy protection logic, as if it was rubber-stamped onto them. In fact, when the compiler (RPGC.EXE) compiles some RPG source code, it seems to copy the parallel port routine from itself into the compiled program. That’s right: the patched version of the compiler will produce executables with the same patched copy protection routine! Very convenient.
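Since the routine is byte-for-byte identical in every executable, locating the patch site in each file reduces to a signature search (a sketch; the signature bytes below are illustrative stand-ins, not the routine’s actual opcodes):

```python
from pathlib import Path

# Illustrative stand-in for the first bytes of the dongle-check routine.
SIGNATURE = bytes.fromhex("9C1E505351")

def find_dongle_routine(exe_path: Path) -> int:
    """Return the file offset of the copy-protection routine, or -1 if absent."""
    return exe_path.read_bytes().find(SIGNATURE)
```

Running this over the whole compiler suite would confirm which executables carry the rubber-stamped routine and where to apply the four-byte patch in each.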
I must say, this copy protection mechanism seems a bit… simplistic? A hardware dongle that just passes back a constant number? Defeatable with a four-byte patch? Is this really worthy of a patent? But who am I to pass judgment. It’s possible that I haven’t fully understood the logic, and the copy protection will somehow re-surface in another way. It’s also possible that the creators of the RPG compiler (Software West, Inc) didn’t take proper advantage of the hardware dongle, and used it in a way that is so easily bypassed.
In any case, Software West’s RPG II compiler is now free from the constraint of the parallel port dongle! And at some point soon, I’ll work on purging any PII from the compiler directories, and make this compiler available as an artifact of computing history. It doesn’t seem to be available anywhere else on the web. If anyone reading this was associated with Software West Inc, feel free to get in touch — I have many questions!
...
Read the original on dmitrybrant.com »
My personal Claude assistant that runs securely in containers. Lightweight and built to be understood and customized for your own needs.
OpenClaw is an impressive project with a great vision. But I can’t sleep well running software I don’t understand with access to my life. OpenClaw has 52+ modules, 8 config management files, 45+ dependencies, and abstractions for 15 channel providers. Security is application-level (allowlists, pairing codes) rather than OS isolation. Everything runs in one Node process with shared memory.
NanoClaw gives you the same core functionality in a codebase you can understand in 8 minutes. One process. A handful of files. Agents run in actual Linux containers with filesystem isolation, not behind permission checks.
git clone https://github.com/gavrielc/nanoclaw.git
cd nanoclaw
claude
Small enough to understand. One process, a few source files. No microservices, no message queues, no abstraction layers. Have Claude Code walk you through it.
Secure by isolation. Agents run in Linux containers (Apple Container on macOS, or Docker). They can only see what’s explicitly mounted. Bash access is safe because commands run inside the container, not on your host.
Built for one user. This isn’t a framework. It’s working software that fits my exact needs. You fork it and have Claude Code make it match your exact needs.
Customization = code changes. No configuration sprawl. Want different behavior? Modify the code. The codebase is small enough that this is safe.
AI-native. No installation wizard; Claude Code guides setup. No monitoring dashboard; ask Claude what’s happening. No debugging tools; describe the problem, Claude fixes it.
Skills over features. Contributors shouldn’t add features (e.g. support for Telegram) to the codebase. Instead, they contribute claude code skills like /add-telegram that transform your fork. You end up with clean code that does exactly what you need.
Best harness, best model. This runs on Claude Agent SDK, which means you’re running Claude Code directly. The harness matters. A bad harness makes even smart models seem dumb, a good harness gives them superpowers. Claude Code is (IMO) the best harness available.
No ToS gray areas. Because it uses Claude Agent SDK natively with no hacks or workarounds, using your subscription with your auth token is completely legitimate (I think). No risk of being shut down for terms of service violations (I am not a lawyer).
* Isolated group context - Each group has its own CLAUDE.md memory, isolated filesystem, and runs in its own container sandbox with only that filesystem mounted
* Main channel - Your private channel (self-chat) for admin control; every other group is completely isolated
* Scheduled tasks - Recurring jobs that run Claude and can message you back
* Optional integrations - Add Gmail (/add-gmail) and more via skills
Talk to your assistant with the trigger word (default: @Andy):
From the main channel (your self-chat), you can manage groups and tasks:
There are no configuration files to learn. Just tell Claude Code what you want:
* “Remember in the future to make responses shorter and more direct”
Or run /customize for guided changes.
The codebase is small enough that Claude can safely modify it.
If you want to add Telegram support, don’t create a PR that adds Telegram alongside WhatsApp. Instead, contribute a skill file (.claude/skills/add-telegram/SKILL.md) that teaches Claude Code how to transform a NanoClaw installation to use Telegram.
Users then run /add-telegram on their fork and get clean code that does exactly what they need, not a bloated system trying to support every use case.
* /add-telegram - Add Telegram as channel. Should give the user option to replace WhatsApp or add as additional channel. Also should be possible to add it as a control channel (where it can trigger actions) or just a channel that can be used in actions triggered elsewhere
* /add-clear - Add a /clear command that compacts the conversation (summarizes context while preserving critical information in the same session). Requires figuring out how to trigger compaction programmatically via the Claude Agent SDK.
Single Node.js process. Agents execute in isolated Linux containers with mounted directories. IPC via filesystem. No daemons, no queues, no complexity.
Why WhatsApp and not Telegram/Signal/etc?
Because I use WhatsApp. Fork it and run a skill to change it. That’s the whole point.
Why Apple Container instead of Docker?
On macOS, Apple Container is lightweight, fast, and optimized for Apple silicon. But Docker is also fully supported—during /setup, you can choose which runtime to use. On Linux, Docker is used automatically.
Can I run this on Linux?
Yes. Run /setup and it will automatically configure Docker as the container runtime. Thanks to @dotsetgreg for contributing the /convert-to-docker skill.
Agents run in containers, not behind application-level permission checks. They can only access explicitly mounted directories. You should still review what you’re running, but the codebase is small enough that you actually can. See docs/SECURITY.md for the full security model.
We don’t want configuration sprawl. Every user should customize it so that the code matches exactly what they want, rather than configuring a generic system. If you like having config files, tell Claude to add them.
Ask Claude Code. “Why isn’t the scheduler running?” “What’s in the recent logs?” “Why did this message not get a response?” That’s the AI-native approach.
Why isn’t the setup working for me?
I don’t know. Run claude, then run /debug. If claude finds an issue that is likely affecting other users, open a PR to modify the setup SKILL.md.
What changes will be accepted into the codebase?
Security fixes, bug fixes, and clear improvements to the base configuration. That’s it.
Everything else (new capabilities, OS compatibility, hardware support, enhancements) should be contributed as skills.
This keeps the base system minimal and lets every user customize their installation without inheriting features they don’t want.
...
Read the original on github.com »
Adventure Game Studio (AGS) is open-source software for creating graphical point-and-click adventure games. It is free, standalone, and requires no subscription.
The Windows-based IDE streamlines game creation by integrating tools for importing graphics, writing scripts, and testing. Games created with AGS can be played on multiple platforms, including Linux, iOS, and Android.
Suitable for all skill levels, AGS features an active community for support and socialising.
Showcase your games by uploading them to this website.
Rot your brain by consuming AI slop and services in this classic arcade style game created for the MAGS January 2026 game jam in the AGS forums. Move […]
You awaken alone on a cold, rocky shore beneath a moonless sky, dragged from the sea through a sewer pipe with no memory of who you are, how you […]
Get in, get the loot, get out… that’s how it usually goes for master cat burglar Trilby. But after he breaks into the supposedly vacated country […]
A madness for spiritualism has gripped London in 1907. Séances and theatrical shows where spirits make contact with their living families are all […]
AGS has an active and friendly community, with many ways of keeping in touch and getting help with your project or games made with AGS.
These include our local forums, Facebook page, Discord server, in-person meet-ups, and many more.
The AGS community is run by a team of dedicated volunteers, who put their time and efforts into keeping it running as a welcoming, friendly and informative place to be. The AGS server and forums are paid for out of our own pockets, so in effect it costs us money to provide a free service to AGS users.
If you appreciate the work we do, and would like to give a little something back, please use the below link to donate via PayPal. Any profit made after covering server costs will be put back into hosting community events such as Mittens.
...
Read the original on www.adventuregamestudio.co.uk »
My iPhone 16 Pro Max produces garbage output when running MLX LLMs. An iPhone 15 Pro runs the same code perfectly. A MacBook Pro also runs the same code perfectly. The tensor outputs on the 16 show numerical values an order of magnitude off. I suspect a hardware defect in the Neural Engine or some other ML-related subsystem.
It was a PITA to debug, but at least I got a blog post out of it.
This was supposed to be a simple, unwinding-time project.
For the past few months I’ve been working on a Moltbot clone that I’ve been calling Schmidt. It basically does the same kind of thing, but with a custom chat UI instead of Telegram, WhatsApp, or some other “I-can’t-afford-to-be-banned-from” service. This project has been consuming early mornings and late nights, so, to unwind, I decided it might be a good idea to do something simpler. Since I recently subscribed to MiniMax M2.1, I thought I would do what many do and build a simple expense-tracking app to test out the model.
* Automatically, upon each payment, add the expense to my app
* Update an Apple Watch complication with the % of my monthly budget spent
* Categorize the purchase for later analysis
This all comes from being basically orphaned by Nubank’s amazing native app (since replaced by a less-full-featured Flutter version).
Integrating with Shortcuts is manual, but reliable. Within 15 minutes I had a version of the app that could register purchases. The Apple Watch complication, the main goal, can come later. I’d rather get the classification feature, which should be easy, done quickly — so I figured.
Given the new LLM-bonanza we’ve been living through, it’s no surprise that Apple has their own set of APIs developers such as me can use. Reading up on the documentation, it’s a matter of checking for the availability of the feature and then asking the model to either reply to a textual query or, in my case, categorize a request.
MiniMax raced through it in a single prompt and then I ran it on my iPhone. First expense was a purchase at a shop called “Kasai Kitchin”, classified as… unknown.
Weird.
Checking the logs, it was clear: the model support was downloading. The feature hadn’t been enabled. Again, weird. I should have it on. Anyway, I go into settings, do the weird dance of toggling it on and off — sadly, that’s not surprising on Apple’s services. Maybe my Settings.app got stuck in a weird state, who knows? — and wait for it to download.
After 4h I realized it was not going anywhere. Looking it up, it seems that many have the same issue (this thread shows 12 pages of frustrated users). Again, not a surprise for Apple’s services recently.
Oh well, time to give up on the Apple Intelligence approach. Let’s move on to the next one.
Well, the iOS framework engineers don’t seem to be the only engineers at Apple capable of coming up with Machine Learning APIs in Swift. Apparently, there’s a whole separate way of doing it — with models downloaded to your app. Not great for the user’s storage, but great for me!
Again, MiniMax does it in a heartbeat, especially after being given the documentation and one or two Medium posts. Time to run it on my iPhone and… gibberish.
The CPU spins to 100% and the model starts generating. But it’s all gibberish. And no “stop” token is generated, so this goes on for long.
At this point, the only explanation is: I’m completely incompetent and can’t even get a simple “ready made” framework to execute what I want. Or, rather, MiniMax is! The good thing about offloading your work to an LLM is that you can blame it for your shortcomings. Time to get my hands dirty and do it myself, typing code on my keyboard, like the ancient Mayan and Aztec programmers probably did.
I went back to the documentation and the Medium posts and, much to my surprise, MiniMax had followed them to the letter. I even fell back to some deprecated generation methods, and the output was still gibberish. And now there’s no one to blame but myself. I go to work every day and this impostor-syndrome-inducing problem silently consumes me.
After 3 days of trying to get it to work, I’m ready to give up…
…until, on a Tuesday morning, at 7-8 AM, I have an idea: let me, just in case, run this on my old iPhone 15 Pro. Up to this point, I was running it on my daily driver, an iPhone 16 Pro Max that was a replacement phone sent by Apple Care after a small clubbing mishap (in which my iPhone was irreparably damaged). I rush to get everything ready before it’s time to go to work and: it works! Gemma, Qwen, and all the other models generate coherent responses!
I stop and think: this cannot be a hardware issue, right? Of course not. The iPhone 15 is still running iOS 18. The iPhone 16 is running 26. It must be an OS issue. Well, time to be late for my work standup and update the old phone. The curiosity is too much. Many minutes later… same results, now on iOS 26. The plot is thickening.
After that work day, and after many lunch and coffee discussions with coworkers about the sources of my troubles, I get home and immediately set myself on debugging MLX as it runs, if possible. The game plan is:
* Use a known-to-be-reliable model, that fits in RAM (I went with quantized Gemma)
* Use a simple prompt, in my case “What is 2+2?”
* Run everything with temperature set to 0.0 — maybe that’s enough to remove variability
* Find where the model iterates through the layers and
* Print out the MLXArray/Tensor with the values on each layer as the input goes through
A few moments later and I find where I need to be. Added the breakpoints, added the logs and off to the races.
I run it on my iPhone 16 Pro Max. The model loads and the prompt is “What is 2+2?”. The tensors start printing out, line after line after line. For once, the logs aren’t complete gibberish — they’re numbers. Floating point values representing the model’s internal state as it processes the input. I save the output to a file and do the same on my iPhone 15 Pro. Same model, same prompt, same code. Time to compare.
I grep for a pattern I know should be consistent — an array at log-line 58, right before the values get normalized/softmaxed. On a working device, I hypothesize this should be the same every time.
On the iPhone 15 Pro:
3: “[[[[53.875, 62.5625, -187.75, …, 42.625, 6.25, -21.5625]]]]”
On the iPhone 16 Pro Max:
3: “[[[[191.5, 23.625, 173.75, …, 1298, -147.25, -162.5]]]]”
Huh. Not close. Not at all. These values are orders of magnitude off. I double check the start of the logs and both phones show the same:
1: “array([[[0.162842, -0.162842, -0.48877, …, -0.176636, 0.0001297, 0.088501],\n [-0.348633, -2.78906, 0, …, 0.84668, 0, -1.69336],\n [-1.30957, 1.57324, -1.30957, …, -0.0010376, -0.0010376, 1.12305],\n …,\n [-0.348633, -2.78906, 0, …, 0.84668, 0, -1.69336],\n [0.296875, 0.59375, 0.890625, …, -0.59375, 0.296875, -0.890137],\n [1.02734, -0.616211, -0.616211, …, -0.275879, -0.551758, 0.275879]]], dtype=float16)”
OK, so the model receives the same input, but at some point the values start to go off. Like, way off. To make sure I’m not crazy, I do one last thing: run the same thing on my Mac. I make the app run in iPad compatibility mode and…
3: “[[[[53.875, 62.5625, -187.75, …, 42.625, 6.25, -21.5625]]]]”
The model isn’t broken. The code isn’t broken. Most importantly, I’m not broken*. My phone is broken.
*arguable, but besides the point here
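Eyeballing giant arrays only goes so far; a small script can diff two such log dumps automatically. A rough sketch, assuming a log format like the lines quoted above (the regex, helper names, and tolerances are my guesses, not the author's actual tooling):

```python
import math
import re

def parse_log_line(line):
    """Extract the float values from a log line like
    '3: "[[[[53.875, 62.5625, -187.75]]]]"'; the leading
    match is the log-line index, so it is dropped."""
    return [float(x) for x in re.findall(r"-?\d+\.?\d*(?:e-?\d+)?", line)[1:]]

def tensors_match(a, b, rel_tol=1e-2):
    """float16 math is only approximately reproducible across devices,
    so compare with a generous tolerance rather than exact equality."""
    return len(a) == len(b) and all(
        math.isclose(x, y, rel_tol=rel_tol, abs_tol=1e-3) for x, y in zip(a, b)
    )

mac = parse_log_line('3: "[[[[53.875, 62.5625, -187.75]]]]"')
iph15 = parse_log_line('3: "[[[[53.875, 62.5625, -187.75]]]]"')
iph16 = parse_log_line('3: "[[[[191.5, 23.625, 173.75]]]]"')

print(tensors_match(mac, iph15))  # healthy devices agree: True
print(tensors_match(mac, iph16))  # the defective device does not: False
```

The generous tolerance matters: small float16 rounding differences between backends are normal, while the orders-of-magnitude divergence above is not.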
Let me explain what I think is going on here: the iPhone 16 Pro Max contains Apple's A18 chip with its Neural Engine, a specialized accelerator for machine learning operations. MLX uses Metal to compile tensor operations for this accelerator. Somewhere in that stack, the computations are going very wrong. I don't think it's a widespread issue, but I am disappointed that a relatively recent AppleCare replacement iPhone came with such a defect.
However, if my Apple Intelligence troubles are related (and they may well be; I'd assume that code and MLX perform similar operations under the hood), it could be that those 12 pages of users are stuck in a similar dilemma, just without the means to debug it.
I spent three days thinking I was incompetent. I blamed MiniMax. I blamed myself. The entire time, my $1,400 phone had broken hardware. I could lose more time figuring out exactly what is wrong with it, but it's literally not worth my time.
I guess I can at least take a lesson that, when debugging, I should always consider the physical layer. I spent three days assuming this was a software problem — my code, the library, the framework, my skills as a developer. The breakthrough was basically: “What if I’m not dumb and it’s not my code?”
As for my phone: it'll probably go back to Apple as a trade-in for a new iPhone 17 Pro Max that hopefully 🤞 can do math.
Well, now it’s Feb. 1st and I have an iPhone 17 Pro Max to test with and… everything works as expected. So it’s pretty safe to say that THAT specific instance of iPhone 16 Pro Max was hardware-defective.
...
Read the original on journal.rafaelcosta.me »
The Apple Computer. A truly complete microcomputer system on a single PC board. Based on the MOS Technology 6502 microprocessor, the Apple also has a built-in video terminal and sockets for 8K bytes of onboard RAM memory. With the addition of a keyboard and video monitor, you'll have an extremely powerful computer system that can be used for anything from developing programs to playing games or running BASIC.

Combining the computer, video terminal and dynamic memory on a single board has resulted in a large reduction in chip count, which means more reliability and lowered cost. Since the Apple comes fully assembled, tested & burned-in and has a complete power supply on-board, initial set-up is essentially "hassle-free" and you can be running within minutes. At $666.66 (including 4K bytes RAM!) it opens many new possibilities for users and systems manufacturers.
You Don't Need an Expensive Teletype.

Using the built-in video terminal and keyboard interface, you avoid all the expense, noise and maintenance associated with a teletype. And the Apple video terminal is six times faster than a teletype, which means more throughput and less waiting. The Apple connects directly to a video monitor (or home TV with an inexpensive RF modulator) and displays 960 easy-to-read characters in 24 rows of 40 characters per line with automatic scrolling. The video display section contains its own 1K bytes of memory, so all the RAM memory is available for user programs. And the Keyboard Interface lets you use almost any ASCII-encoded keyboard.

The Apple Computer makes it possible for many people with limited budgets to step up to a video terminal as an I/O device for their computer.
No More Switches, No More Lights.

Compared to switches and LED's, a video terminal can display vast amounts of information simultaneously. The Apple video terminal can display the contents of 192 memory locations at once on the screen. And the firmware in PROMS enables you to enter, display and debug programs (all in hex) from the keyboard, rendering a front panel unnecessary. The firmware also allows your programs to print characters on the display, and since you'll be looking at letters and numbers instead of just LED's, the door is open to all kinds of alphanumeric software (i.e., Games and BASIC).
8K Bytes RAM in 16 Chips!

The Apple Computer uses the new 16-pin 4K dynamic memory chips. They are faster and take 1/4 the space and power of even the low power 2102's (the memory chip that everyone else uses). That means 8K bytes in sixteen chips. It also means no more 28 amp power supplies. The system is fully expandable to 65K via an edge connector which carries both the address and data busses, power supplies and all timing signals. All dynamic memory refreshing for both on and off-board memory is done automatically. Also, the Apple Computer can be upgraded to use the 16K chips when they become available. That's 32K bytes on-board RAM in 16 IC's, the equivalent of 256 2102's!
A Little Cassette Board that Works!

Unlike many other cassette boards on the marketplace, ours works every time. It plugs directly into the upright connector on the mainboard and stands only 2″ tall. And since it is very fast (1500 bits per second), you can read or write 4K bytes in about 20 seconds. All timing is done in software, which results in crystal-controlled accuracy and uniformity from unit to unit. Unlike some other cassette interfaces which require an expensive tape recorder, the Apple Cassette Interface works reliably with almost any audio-grade cassette recorder.
Software

A tape of APPLE BASIC is included free with the Cassette Interface. Apple BASIC features immediate error messages and fast execution, and lets you program in a higher-level language immediately and without added cost. Also available now are a disassembler and many games, with many software packages (including a macro assembler) in the works. And since our philosophy is to provide software for our machines free or at minimal cost, you won't be continually paying for access to this growing software library.
The Apple Computer is in stock at almost all major computer stores. (If your local computer store doesn't carry our products, encourage them or write us direct.) Dealer inquiries invited.
...
Read the original on apple1.chez.com »
FOSDEM, the Free and Open Source Developers’ European Meeting, is an annual pilgrimage for open source enthusiasts from all over the world. What started as a small gathering in 2000, originally named the Open Source Developers of Europe Meeting (OSDEM), has grown into one of the most significant conferences dedicated to free and open source software.
In 2026, FOSDEM felt more purposeful than ever. The conference clearly reflected a growing awareness around digital sovereignty and Europe’s technological future.
Self hosted solutions, open infrastructure, and community driven software were no longer niche topics. They were central to many discussions and presentations. The focus has visibly shifted away from convenience first and centralized platforms and toward systems that put control, transparency, and resilience back into the hands of users and communities.

This shift was strongly supported by established communities such as the FreeBSD project, which continues to demonstrate how long term, openly governed systems can serve as reliable foundations for sovereign infrastructure. At the same time, smaller but equally important projects showed how grassroots innovation drives real change. Talks like Hyacinthe's FlipFlap presentation on the DN42 network highlighted decentralized and community operated networking in practice, while Emile's talk on SmolBSD demonstrated how minimal, purpose built BSD systems can bring clarity, auditability, and long term maintainability back to operating system design.

Projects such as BoxyBSD, crafted by gyptazy, showcased how lowering the barrier to learning BSD based systems empowers the next generation of open source contributors. By providing free invite codes during FOSDEM, BoxyBSD made hands on experimentation immediately accessible and reinforced the conference's spirit of openness and community support.
FOSDEM 2026 made one thing unmistakably clear. Open source is no longer just about software freedom. It is increasingly about independence, sustainability, and Europe’s ability to shape its own digital future.
Like every year, I decided to travel to FOSDEM by car. It’s actually the most relaxed way for me to get there as I can simply drive at any time in the morning, but it comes with one clear disadvantage: you have to arrive very early to secure a parking spot directly on campus. That means starting the journey long before the city fully wakes up. Overall, the travel time is more or less the same as taking the train, so that part doesn’t really matter. What does matter is the flexibility and being able to move around freely and head back home whenever I want. Since I usually only attend the first day of FOSDEM, that flexibility makes the early start worth it.
This year, the effort paid off once again. I ended up being the first car in line at the gate leading to the parking area. Better safe than sorry. Anyone who has attended FOSDEM knows that parking nearby is a small victory that can shape the rest of the day.
After parking, there was time to slow down a bit. Before the talks began, the campus gradually filled with familiar faces. FOSDEM has a unique rhythm in the early morning hours, when everything is still calm and conversations happen without rushing from room to room.
I met up with a few friends, and we took the opportunity to catch up and exchange a few thoughts before the day properly started. With coffees and croissants in hand, we waited for the opening talk. It was a simple moment, but one that perfectly captured the atmosphere of FOSDEM: a mix of anticipation, community, and shared curiosity about what the weekend would bring.
My personal schedule at FOSDEM followed a clear thread: understanding infrastructure from the lowest layers up to real-world, community operated systems. Rather than chasing trends, I focused on talks that explored control, reliability, and long term sustainability.
The Rust-VMM talk set the tone by diving into modern virtualization foundations built with memory safety in mind. It highlighted how Rust enables a new generation of virtual machine monitors that reduce entire classes of bugs while still meeting strict performance requirements. For anyone working close to hardware or hypervisors, it was a strong argument for rethinking traditional systems programming choices.
With Garage S3 Best Practices, the focus shifted from design to day-to-day operations. Object storage is often treated as a commodity, yet the talk made it clear how many subtle challenges exist around consistency, failure handling, and scaling. Real operational lessons and practical advice emphasized that running storage reliably is just as important as building it.
The talk on Mobility of Virtual Machines in Kubernetes Clusters explored the increasingly blurred line between classical virtualization and container orchestration. It showed how virtual machines can move and adapt within Kubernetes environments, combining the strong isolation of VMs with the flexibility of cloud native tooling. This hybrid approach challenges the idea that platforms must choose one model exclusively.
SmolBSD brought a refreshing focus on minimalism. Instead of adding more layers, the project embraces small, understandable systems that are easier to audit and maintain over time. The talk reinforced the idea that simplicity is not a limitation but a strategic choice, especially for long lived infrastructure.
Finally, FlipFlap Network in DN42 connected many of the earlier themes through a community perspective. DN42 demonstrates how decentralized, self operated networking can work in practice. The talk showcased automation, experimentation, and cooperation in a real network built by its users, highlighting the educational and innovative power of grassroots infrastructure.
Together, these talks formed a coherent journey through modern open infrastructure: from safe low level building blocks to resilient storage, hybrid orchestration models, minimal operating systems, and community driven networks.
One of my personal highlights of FOSDEM 2026 was a wonderfully simple yet brilliant idea by the Mozilla Foundation: giving away free cookies. It turned out to be more than just snacks. It was a fun little game, a great conversation starter, and the selection of cookies was genuinely excellent. You might have come for open source, but you probably left liking cookies even more than before.
Another standout moment was the talk The Challenges of FLOSS Office Suites by Michael Meeks, where he dove into the technical details behind Collabora Online. It was an absolute pleasure to listen to. What made the talk special was not only the depth of technical insight, but also the way it was presented. Complex topics were explained clearly, with context and humor, making it accessible without oversimplifying.
I was genuinely amazed by how the challenges of building and maintaining a full-featured, open source office suite were laid out so honestly. The talk went far beyond architecture diagrams and performance considerations and gave real insight into the long-term effort required to keep such critical software alive and competitive.
Beyond the talks, I also took the opportunity to have some great conversations at the booths. I chatted with fixoulab at the Proxmox booth and with the XCP-ng team at Vates, where I got an early look at the newly released Orchestra features. It was especially interesting since I had not yet found the time to dive into them in detail.
On a more personal note, I was truly grateful to meet many of my friends from different countries again. Being able to jump into great talks together, exchange impressions on the spot, and continue discussions afterwards is something that makes FOSDEM special in a way no recording or live stream ever could.
FOSDEM has always been crowded. Anyone who has attended more than once knows the familiar experience of packed hallways, full lecture rooms, and sprinting between buildings in the hope of catching the last five minutes of a talk. As the biggest open source conference in the world, this has long been part of its identity. But in 2026, it felt like something had shifted.
There is no doubt that the growing interest in free and open source software is a good thing. More people take open source seriously, more organizations depend on it, and more contributors want to get involved. That energy was clearly visible everywhere. At the same time, it felt like FOSDEM was reaching — or perhaps exceeding — its natural limits. Rooms filled faster than ever, informal discussions became harder to have, and the sheer density of people sometimes worked against the very openness the conference is known for.
A major driver behind this growth is the current political and economic climate. Topics like digital sovereignty, technological independence, and reducing reliance on a small number of dominant market players were more present than ever. This was not subtle. It was visible across the schedule, in hallway conversations, and especially during the Friday pre-conferences, where these themes were actively pursued and debated.
On one hand, this focus is both necessary and overdue. Open source has always been political in the sense that it is about control, transparency, and autonomy, even when it pretended not to be. Seeing these discussions move to the center stage at FOSDEM is encouraging. It shows that the community understands the stakes and is willing to engage with the broader implications of the technology it builds.
On the other hand, the intensity of this shift raises uncomfortable questions. When everything becomes urgent and strategic, the space for experimentation, learning, and smaller niche projects risks being squeezed out. Not every open source project exists to solve geopolitical problems, and not every contributor arrives with a policy agenda. FOSDEM has always thrived on its diversity of motivations, and maintaining that balance will be increasingly challenging.
FOSDEM 2026 felt like a conference at a crossroads. Its success is undeniable, but so are the growing pains that come with it. The challenge for the coming years will be finding ways to scale without losing what made the event special in the first place: accessibility, spontaneity, and the feeling that there is room for everyone and not just for the loudest or most timely topics. And while I already raised this concern last year, it has become even more pressing this year.
The conversations happening now are important, and it is good that they are happening at FOSDEM. But if the conference is to remain sustainable, both logistically and culturally, it will need to evolve just as thoughtfully as the open source ecosystem it represents.
It is genuinely great to see that FOSDEM remains free and open to everyone, even as the topics it covers become more complex and more relevant. The growing focus on moving away from big tech and reclaiming ownership of our data shows that the community is paying attention to what truly matters. These discussions are necessary, and it is encouraging to see them reflected so clearly in the talks and hallway conversations.
The quality of the talks was high, and the people were, as always, amazing. FOSDEM continues to be a place where curiosity, expertise, and openness meet. At the same time, the question of scale can no longer be ignored. Camping in front of a single room just to make sure you can attend a talk is not a sustainable solution. In many cases, it may even discourage the people who are genuinely interested but cannot afford to wait for hours or navigate overcrowded spaces.
For exactly this reason, I seriously considered staying home this year and watching the talks via live streams or recordings. From a purely technical perspective, that would have worked just fine. The content would still be there, accessible and well produced.
But in the end, FOSDEM is not just about talks. It is about meeting people, reconnecting with friends, and having spontaneous conversations that no video stream can fully replace. Seeing all of you again, sharing thoughts over coffee, and exchanging ideas in person ultimately mattered more than comfort or convenience.
FOSDEM 2026 once again proved why this conference is special. The challenge now is to ensure that it can continue to grow without losing the openness and accessibility that define it. That balance will shape what FOSDEM becomes in the years to come.
...
Read the original on gyptazy.com »
Xikipedia is a pseudo social media feed that algorithmically shows you content from Simple Wikipedia. It is made as a demonstration of how even a basic non-ML algorithm with no data from other users can quickly learn what you engage with and suggest you more similar content. No data is collected or shared here; the algorithm runs locally, and the data disappears once you refresh or close the tab.
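The kind of purely local, engagement-weighted ranking described above could be sketched roughly as follows (this is an illustrative toy, not Xikipedia's actual algorithm; the `LocalFeed` class and its weighting scheme are assumptions):

```python
from collections import defaultdict

class LocalFeed:
    """Toy local feed ranker: every interaction bumps the weights of
    that article's categories; the next batch is ranked by those weights.
    No model, no server, no data from other users."""

    def __init__(self):
        self.weights = defaultdict(lambda: 1.0)  # uniform prior

    def record_engagement(self, categories, strength=1.0):
        # Called when the user lingers on, likes, or expands an article.
        for c in categories:
            self.weights[c] += strength

    def score(self, categories):
        return sum(self.weights[c] for c in categories)

    def rank(self, articles):
        # articles: list of (title, categories); highest score first.
        return sorted(articles, key=lambda a: self.score(a[1]), reverse=True)

feed = LocalFeed()
feed.record_engagement(["physics"], strength=3.0)
articles = [("Quark", ["physics"]), ("Baking", ["food"])]
print(feed.rank(articles)[0][0])  # the physics article rises to the top
```

Even this crude scheme converges on a user's interests after a handful of interactions, which is the point the demo makes: no machine learning is required for a feed to feel uncannily personalized.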
Source code on GitHub, discuss on fedi, bluesky, or twitter.
Since the content and images shown are from random Wikipedia articles, you will likely see NSFW content. Please only continue if you're an adult.
...
Read the original on xikipedia.org »
Yesterday, I wrote about Ariel OS RTOS for microcontrollers written in Rust, but there’s another interesting open-source operating system for microcontrollers that will be covered at FOSDEM 2026: MicroPythonOS.
While Ariel OS is designed for secure, memory-safe, networked IoT applications on microcontrollers, MicroPythonOS targets applications with graphical user interfaces and is heavily inspired by Android and iOS, with an appstore, an LVGL-based touchscreen and button UI with plenty of widgets, gestures and theme support, and a wifi manager, as well as over-the-air (OTA) firmware updates.
You'll probably be astonished to learn that MicroPythonOS is written in… MicroPython. It consists of a thin OS that handles hardware initialization, multitasking, and UI; everything else is an app, including system features like WiFi configuration and OS updates.
* Runs on ESP32 microcontrollers, but the FOSDEM 2026 talk mentions that it can run on anything that supports MicroPython, including the Raspberry Pi RP2350.
* Since it’s cross-platform, it can also run on Windows, Linux, and macOS (mostly useful for app development and evaluation)
Right now, five apps are pre-installed: Launcher, WiFi for wireless configuration, AppStore to install more apps, OSUpdate for OTA system updates, and Settings. The AppStore features various apps, including Hello World, Camera, Image Viewer, and IMU to visualize the motion sensor data, among others, all with source code available.
It can be used to create smart home controllers, educational tools with interactive displays, decentralized payment systems, portable touch screen devices, robotics with IMUs and cameras, smart wearables with gesture controls, and other prototype DIY projects. Hardware support (on ESP32) currently includes WiFi, Bluetooth, several motion sensors, cameras, touch screens, IO expanders, displays, ADC, GPIO, I2C, etc…
You’ll find the source code on GitHub and instructions to get started with app and OS development on the documentation website. For a quick try, installing it on Windows, Linux, or macOS is probably the easiest, but if you prefer to check the performance on actual hardware, ESP32 targets are the way to go, and you can install MicroPythonOS through the web installer. It was specifically tested on the ESP32-S3-Touch-LCD-2 and Fri3d Camp 2024 Badge.
More details can be found on the project’s website, or if you plan to go to FOSDEM 2026 this weekend, you can also attend the 25-minute talk on Sunday, February 1, by Thomas Farstrike.
...
Read the original on www.cnx-software.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.