10 interesting stories served every morning and every evening.
...
Read the original on notepad-plus-plus.org »
When I moved to a new apartment with my family, the cable company we were used to wasn’t available. We had to settle for Dish Network. I wasn’t too happy about making that switch, but something on their website caught my attention. For an additional $5 a month, I could have access to DVR. I switched immediately.
This was 2007. DVR was not new, but it wasn’t commonly bundled with set-top boxes. TiVo was still the popular way to record, pause, and rewind live TV. We received two set-top boxes, one for each room with a TV, and three remotes. Two remotes had IR (infrared) blasters and, surprisingly, one RF (radio frequency) remote.
After using the RF remote, I wondered: Why would anyone ever use an IR remote again? You didn’t need a direct line of sight with the device you were controlling. I could actually stand in the kitchen and control the TV. It was amazing. But with the convenience of RF came other problems that IR users never had to worry about. Interference.
After several months of enjoying my service, one of my neighbors, the loudest in the building, also switched to Dish Network. And he also got the RF remote. This was the type of neighbor who would leave the house with the TV on, volume blasting.
One day, I was in the living room watching TV when the channel just flipped. I must have accidentally hit a button, so I changed it back. But not even a few seconds later, the channel changed again. Then the volume went up. I figured my sister must have had the RF remote and was messing with me. But no, the remote was in my hand. I assumed something was wrong with it.
The whole time I was watching TV, the channels kept randomly switching. I banged the remote on the table a couple of times, but it still switched. I removed the batteries from the remote, it still switched. I unplugged the device for a few minutes, plugged it back in, and… it still switched. Frustrated, I went through the device settings and disabled the RF remote. That’s when it finally stopped. I wasn’t happy with this solution, but it allowed me to watch TV until I figured something out.
One evening, when everyone was asleep and the neighbor was watching a loud TV show, I decided to diagnose the issue. The moment I pressed the power button on the RF remote, my TV and set-top box turned on, and the neighbor’s TV went silent. “Fuck!” I heard someone say. I was confused. Did I just do that? The TV turned back on, the volume went up. I walked to the window armed with the remote. I counted to three, then pressed the power button. My neighbor’s TV went silent. He growled.
I am the captain now.
Every time he turned the TV on, I pressed the power button again and his device went off. Well, what do you know? We had interference somehow. Our remotes were set up to operate at the same frequency. Each remote controlled both devices.
But I’m not that kind of neighbor. I wasn’t going to continue to mess with him. Instead, I decided I would pay him a visit in the morning and explain that our remotes are tuned to the same frequency. I would bring the RF remote with me just to show him a demo. I was going to be a good neighbor.
In the morning, I went downstairs, remote in hand. I knocked, and a gentleman in his forties answered the door. I had rehearsed my speech and presentation. This would be a good opportunity to build a rapport and have a shared story. Maybe he would tell me how he felt when the TV went off. How he thought there was a ghost in the house or something. But that’s not what happened.
“Hi, I’m Ibrahim. Your upstairs neighbor…” I started and was interrupted almost immediately. “Whatever you are selling,” he yelled. “I’m not buying.” And he closed the door in my face. I knocked a second time, because obviously there was a misunderstanding. He never answered. Instead, the TV turned on and a movie played at high volume. So much for my prepared speech.
The RF settings on my set-top box remained turned off. My family never discovered its benefit anyway; they always pointed at the box when pressing the buttons. It wasn’t much of an inconvenience. In fact, I later found in the manual that you could reprogram the device and remote to use a different frequency. I did not reprogram my remote. Instead, my family used the two IR remotes, and the RF remote went into my bedroom, where it permanently remained on my nightstand.
Why in the bedroom? Because I decided to teach my neighbor some good manners. Whenever he turned up his volume, I would simply turn off his device. I would hear his frustration, and his attempts at solving the problem. Like a circus animal trainer, I remained consistent. If the volume of his TV went above what I imagined to be 15 to 20, I would press the power button. It became a routine for me for weeks. Some nights were difficult; I would keep the remote under my pillow, battling my stubborn neighbor all night.
One day, I noticed that I hadn’t pressed the button in days. I opened the window and I could still hear the faint sound of his TV. Through trial and error, he learned the lesson. If the volume remained under my arbitrary threshold, the TV would remain on. But as soon as he passed that threshold, the device would turn off.
Sometimes, he would have company and there would be noise coming out of his apartment. I used the one tool in my toolbox to send him a message: turn off the TV. All of a sudden, my neighbor and his guests would be reminded of the unspoken rules and become mindful of their neighbors.
Maybe somewhere on the web, in some obscure forum, someone asked the question: “Why does my set-top box turn off when I increase the volume?” Well, it might be 18 years too late, but there’s your answer. There is a man out there who religiously sets his volume to 18. He doesn’t quite know why. That’s Pavlovian conditioning at its best.
...
Read the original on idiallo.com »
That’s right — this little device is what stood between me and the ability to run an even older piece of software that I recently unearthed during an expedition of software archaeology.
For a bit more background, I was recently involved in helping a friend’s accounting firm move away from an extremely legacy software package that they had locked themselves into for the last four decades.
This software was built using a programming language called RPG (“Report Program Generator”), which is older than COBOL (!), and was used with IBM’s midrange computers such as the System/3, System/32, and all the way up to the AS/400. Apparently, RPG was subsequently ported to MS-DOS, so that the same software tools built with RPG could run on personal computers, which is how we ended up here.
This accounting firm was actually using a Windows 98 computer (yep, in 2026), and running the RPG software inside a DOS console window. And it turned out that running this software requires a special hardware copy-protection dongle attached to the computer’s parallel port! This was a relatively common practice in those days, particularly with “enterprise” software vendors who wanted to protect their very important™ software from unauthorized use.
Sadly, most of the text and markings on the dongle’s label have been worn or scratched off, but we can make out several clues:
The words “Stamford, CT”, and what’s very likely the logo of a company called “Software Security Inc”. The only evidence for the existence of this company is this record of them exhibiting their wares at SIGGRAPH conferences in the early 1990s, as well as several patents issued to them, relating to software protection.
A word that seems to say “RUNTIME”, which will become clear in a bit.
My first course of action was to take a disk image of the Windows 98 PC that was running this software, and get it running in an emulator, so that we could see what the software actually does, and perhaps export its data into a more modern format, to be used with modern accounting tools. But of course all of this requires the hardware dongle; none of the accounting tools seem to work without it plugged in.
Before doing anything, I looked through the disk image for any additional interesting clues, and found plenty of fascinating (and archaeologically significant?) stuff:
We’ve got a compiler for the RPG II language (excellent!), made by a company called Software West Inc.
Even better, there are two versions of the RPG II compiler, released on various dates in the 1990s by Software West.
We’ve got the complete source code of the accounting software, written in RPG. It looks like the full accounting package consists of numerous RPG modules, with a gnarly combination of DOS batch files for orchestrating them, all set up as a “menu” system for the user to navigate using number combinations. Clearly the author of this accounting system was originally an IBM mainframe programmer, and insisted on bringing those skills over to DOS, with mixed results.
I began by playing around with the RPG compiler in isolation, and I learned very quickly that it’s the RPG compiler itself that requires the hardware dongle, and then the compiler automatically injects the same copy-protection logic into any executables it generates. This explains the text that seems to say “RUNTIME” on the dongle.
The compiler consists of a few executable files, notably RPGC.EXE, which is the compiler, and SEU.EXE, which is a source editor (“Source Entry Utility”). Here’s what we get when we launch SEU without the dongle, after a couple of seconds:
A bit rude, but this gives us an important clue: the program must be trying to communicate over the parallel port during those few seconds (which could give us an opportunity to pause it for debugging, and see what it’s doing during that time), and then it exits with a message (which we can now find in a disassembly of the program, and trace how it gets there).
A great tool for disassembling executables of this vintage is Reko. It understands 16-bit real mode executables, and even attempts to decompile them into readable C code that corresponds to the disassembly.
And so, looking at the decompiled/disassembled code in Reko, I expected to find in and out instructions, which would be the telltale sign of the program trying to communicate with the parallel port through the PC’s I/O ports. However… I didn’t see an in or out instruction anywhere! But then I noticed something: Reko disassembled the executable into two “segments”: 0800 and 0809, and I was only looking at segment 0809.
If we look at segment 0800, we see the smoking gun: in and out instructions, meaning that the copy-protection routine is definitely here, and best of all, the entire code segment is a mere 0x90 bytes, which suggests that the entire routine should be pretty easy to unravel and understand. For some reason, Reko was not able to decompile this code into a C representation, but it still produced a disassembly, which will work just fine for our purposes. Maybe this was a primitive form of obfuscation from those early days, which is now confusing Reko and preventing it from associating this chunk of code with the rest of the program… who knows.
Here is a GitHub Gist with the disassembly of this code, along with my annotations and notes. My x86 assembly knowledge is a little rusty, but here is the gist of what this code does:
It’s definitely a single self-contained routine, intended to be called using a “far” CALL instruction, since it returns with a RETF instruction.
It begins by detecting the address of the parallel port, by reading the BIOS data area. If the computer has more than one parallel port, the dongle must be connected to the first parallel port (LPT1).
It performs a loop where it writes values to the data register of the parallel port, and then reads the status register, and accumulates responses in the BH and BL registers.
At the end of the routine, the “result” of the whole procedure is stored in the BX register (BH and BL together), which will presumably be “verified” by the caller of the routine.
Very importantly, there doesn’t seem to be any “input” into this routine. It doesn’t pop anything from the stack, nor does it care about any register values passed into it. Which can only mean that the result of this routine is completely constant! No matter what complicated back-and-forth it does with the dongle, the result of this routine should always be the same.
With the knowledge that this routine must exit with some magic value stored in BX, we can now patch the first few bytes of the routine to do just that! Not yet knowing which value to put in BX, let’s start with 1234:
BB 34 12 MOV BX, 1234h
CB RETF
Only the first four bytes need patching — set BX to our desired value, and get out of there (RETF). Running the patched executable with these new bytes still fails (expectedly) with the same message of “No dongle, no edit”, but it fails immediately, instead of after several seconds of talking to the parallel port. Progress!
Stepping through the disassembly more closely, we get another major clue: The only value that BH can be at the end of the routine is 76h (this is hard-coded into the routine). So, our total value for the magic number in BX must be of the form 76xx. In other words, only the BL value remains unknown:
BB __ 76 MOV BX, 76__h
CB RETF
Since BL is an 8-bit register, it can only have 256 possible values. And what do we do when we have 256 combinations to try? Brute force it! I whipped up a script that plugs a value into that particular byte (from 0 to 255) and programmatically launches the executable in DosBox, and observes the output. Lo and behold, it worked! The brute forcing didn’t take long at all, because the correct number turned out to be… 6. Meaning that the total magic number in BX should be 7606h:
BB 06 76 MOV BX, 7606h
CB RETF
Bingo!
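The post doesn’t include the brute-forcing script itself, but a minimal reconstruction might look something like the Python below. This is a sketch, not the original: the patch offset is hypothetical, and it assumes the failure message can be captured by redirecting the program’s console output to a file inside the mounted directory.

#!/usr/bin/env python3
# Brute-force the unknown BL byte: patch each candidate into SEU.EXE,
# run it in DOSBox, and check whether the failure message still appears.
import pathlib
import subprocess

WORK = pathlib.Path("work")              # directory mounted as C: in DOSBox
ORIG = (WORK / "SEU.ORIG").read_bytes()  # pristine copy of the executable
PATCH_OFFSET = 0x1B2                     # hypothetical file offset of the MOV BX opcode
FAIL_MARKER = b"No dongle"

for bl in range(256):
    patched = bytearray(ORIG)
    # BB <bl> 76 = MOV BX, 76xxh (immediate is little-endian), CB = RETF
    patched[PATCH_OFFSET:PATCH_OFFSET + 4] = bytes([0xBB, bl, 0x76, 0xCB])
    (WORK / "SEU.EXE").write_bytes(patched)
    (WORK / "OUT.TXT").unlink(missing_ok=True)
    try:
        subprocess.run(
            ["dosbox", "-c", "MOUNT C work", "-c", "C:",
             "-c", "SEU > OUT.TXT", "-c", "EXIT"],
            timeout=30, check=False,
        )
    except subprocess.TimeoutExpired:
        pass  # the editor actually stayed open: promising in itself
    out_file = WORK / "OUT.TXT"
    out = out_file.read_bytes() if out_file.exists() else b""
    if FAIL_MARKER not in out:
        print(f"magic value found: BX = 76{bl:02X}h")
        break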
And then, proceeding to examine the other executable files in the compiler suite, the parallel port routine turns out to be exactly the same. All of the executables have the exact same copy protection logic, as if it was rubber-stamped onto them. In fact, when the compiler (RPGC.EXE) compiles some RPG source code, it seems to copy the parallel port routine from itself into the compiled program. That’s right: the patched version of the compiler will produce executables with the same patched copy protection routine! Very convenient.
I must say, this copy protection mechanism seems a bit… simplistic? A hardware dongle that just passes back a constant number? Defeatable with a four-byte patch? Is this really worthy of a patent? But who am I to pass judgment. It’s possible that I haven’t fully understood the logic, and the copy protection will somehow re-surface in another way. It’s also possible that the creators of the RPG compiler (Software West, Inc) didn’t take proper advantage of the hardware dongle, and used it in a way that is so easily bypassed.
In any case, Software West’s RPG II compiler is now free from the constraint of the parallel port dongle! And at some point soon, I’ll work on purging any PII from the compiler directories, and make this compiler available as an artifact of computing history. It doesn’t seem to be available anywhere else on the web. If anyone reading this was associated with Software West Inc, feel free to get in touch — I have many questions!
...
Read the original on dmitrybrant.com »
My personal Claude assistant that runs securely in containers. Lightweight and built to be understood and customized for your own needs.
OpenClaw is an impressive project with a great vision. But I can’t sleep well running software I don’t understand with access to my life. OpenClaw has 52+ modules, 8 config management files, 45+ dependencies, and abstractions for 15 channel providers. Security is application-level (allowlists, pairing codes) rather than OS isolation. Everything runs in one Node process with shared memory.
NanoClaw gives you the same core functionality in a codebase you can understand in 8 minutes. One process. A handful of files. Agents run in actual Linux containers with filesystem isolation, not behind permission checks.
git clone https://github.com/gavrielc/nanoclaw.git
cd nanoclaw
claude
Small enough to understand. One process, a few source files. No microservices, no message queues, no abstraction layers. Have Claude Code walk you through it.
Secure by isolation. Agents run in Linux containers (Apple Container on macOS, or Docker). They can only see what’s explicitly mounted. Bash access is safe because commands run inside the container, not on your host.
Built for one user. This isn’t a framework. It’s working software that fits my exact needs. You fork it and have Claude Code make it match yours.
Customization = code changes. No configuration sprawl. Want different behavior? Modify the code. The codebase is small enough that this is safe.
AI-native. No installation wizard; Claude Code guides setup. No monitoring dashboard; ask Claude what’s happening. No debugging tools; describe the problem, Claude fixes it.
Skills over features. Contributors shouldn’t add features (e.g. support for Telegram) to the codebase. Instead, they contribute Claude Code skills like /add-telegram that transform your fork. You end up with clean code that does exactly what you need.
Best harness, best model. This runs on the Claude Agent SDK, which means you’re running Claude Code directly. The harness matters. A bad harness makes even smart models seem dumb; a good harness gives them superpowers. Claude Code is (IMO) the best harness available.
No ToS gray areas. Because it uses Claude Agent SDK natively with no hacks or workarounds, using your subscription with your auth token is completely legitimate (I think). No risk of being shut down for terms of service violations (I am not a lawyer).
* Isolated group context - Each group has its own CLAUDE.md memory, isolated filesystem, and runs in its own container sandbox with only that filesystem mounted
* Main channel - Your private channel (self-chat) for admin control; every other group is completely isolated
* Scheduled tasks - Recurring jobs that run Claude and can message you back
* Optional integrations - Add Gmail (/add-gmail) and more via skills
Talk to your assistant with the trigger word (default: @Andy):
From the main channel (your self-chat), you can manage groups and tasks:
There are no configuration files to learn. Just tell Claude Code what you want:
* “Remember in the future to make responses shorter and more direct”
Or run /customize for guided changes.
The codebase is small enough that Claude can safely modify it.
If you want to add Telegram support, don’t create a PR that adds Telegram alongside WhatsApp. Instead, contribute a skill file (.claude/skills/add-telegram/SKILL.md) that teaches Claude Code how to transform a NanoClaw installation to use Telegram.
Users then run /add-telegram on their fork and get clean code that does exactly what they need, not a bloated system trying to support every use case.
* /add-telegram - Add Telegram as a channel. Should give the user the option to replace WhatsApp or add it as an additional channel. It should also be possible to add it as a control channel (where it can trigger actions) or just a channel that can be used in actions triggered elsewhere
* /add-clear - Add a /clear command that compacts the conversation (summarizes context while preserving critical information in the same session). Requires figuring out how to trigger compaction programmatically via the Claude Agent SDK.
Single Node.js process. Agents execute in isolated Linux containers with mounted directories. IPC via filesystem. No daemons, no queues, no complexity.
Why WhatsApp and not Telegram/Signal/etc?
Because I use WhatsApp. Fork it and run a skill to change it. That’s the whole point.
Why Apple Container instead of Docker?
On macOS, Apple Container is lightweight, fast, and optimized for Apple silicon. But Docker is also fully supported—during /setup, you can choose which runtime to use. On Linux, Docker is used automatically.
Can I run this on Linux?
Yes. Run /setup and it will automatically configure Docker as the container runtime. Thanks to @dotsetgreg for contributing the /convert-to-docker skill.
Is this secure?
Agents run in containers, not behind application-level permission checks. They can only access explicitly mounted directories. You should still review what you’re running, but the codebase is small enough that you actually can. See docs/SECURITY.md for the full security model.
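To make the isolation model concrete, here is a rough illustration of the idea (NanoClaw itself is a Node.js project, so this Python/Docker sketch is not its actual code, and the paths are invented):

import subprocess

# Run a command the way an isolated agent would: inside a container
# that can see exactly one explicitly mounted host directory.
result = subprocess.run(
    ["docker", "run", "--rm",
     "-v", "/home/me/groups/family:/workspace",  # the only host path visible
     "alpine", "ls", "/workspace"],
    capture_output=True, text=True,
)
print(result.stdout)  # only the mounted group's files show up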
Why are there no configuration files?
We don’t want configuration sprawl. Every user should customize it so that the code matches exactly what they want, rather than configuring a generic system. If you like having config files, tell Claude to add them.
How do I debug problems?
Ask Claude Code. “Why isn’t the scheduler running?” “What’s in the recent logs?” “Why did this message not get a response?” That’s the AI-native approach.
Why isn’t the setup working for me?
I don’t know. Run claude, then run /debug. If claude finds an issue that is likely affecting other users, open a PR to modify the setup SKILL.md.
What changes will be accepted into the codebase?
Security fixes, bug fixes, and clear improvements to the base configuration. That’s it.
Everything else (new capabilities, OS compatibility, hardware support, enhancements) should be contributed as skills.
This keeps the base system minimal and lets every user customize their installation without inheriting features they don’t want.
...
Read the original on github.com »
My iPhone 16 Pro Max produces garbage output when running MLX LLMs. An iPhone 15 Pro runs the same code perfectly. A MacBook Pro also runs the same code perfectly. The tensor outputs on the 16 show numerical values an order of magnitude off. I suspect it points to a hardware defect in the Neural Engine or some other ML-related subsystem.
It was a PITA to debug, but at least I got a blog post out of it.
This was supposed to be a simple, unwinding-time project.
For the past few months I’ve been working on a Moltbot clone that I’ve been calling Schmidt. It basically does the same kind of thing but with a custom chat UI instead of using Telegram, WhatsApp, or any other “I-can’t-afford-to-be-banned-from” service. This project has been consuming early mornings and late nights, so, to unwind, I decided it might be a good idea to do something simpler. Since I recently subscribed to MiniMax M2.1, I thought I would do what many do and build a simple expense-tracking app to test out the model.
* Automatically, upon each payment, add the expense to my app
* Update an Apple Watch complication with the % of my monthly budget spent
* Categorize the purchase for later analysis
This all comes from being basically orphaned by Nubank’s amazing native app (since replaced by a less-full-featured Flutter version).
Integrating with Shortcuts is manual, but reliable. Within 15 minutes I had a version of the app that could register purchases. The Apple Watch complication, the main goal, can come later. I’d rather get the classification feature, which should be easy, done quickly — so I figured.
Given the new LLM-bonanza we’ve been living through, it’s no surprise that Apple has their own set of APIs developers such as me can use. Reading up on the documentation, it’s a matter of checking for the availability of the feature and then asking the model to either reply to a textual query or, in my case, categorize a request.
MiniMax raced through it in a single prompt, and then I ran it on my iPhone. The first expense was a purchase at a shop called “Kasai Kitchin”, classified as… unknown.
Weird.
Checking the logs, it was clear: the model was still downloading. The feature hadn’t been enabled. Again, weird. I should have had it on already. Anyway, I go into settings, do the weird dance of toggling it on and off — sadly, that’s not surprising with Apple’s services. Maybe my Settings.app got stuck in a weird state, who knows? — and wait for it to download.
After 4h I realized it was not going anywhere. Looking it up, it seems that many have the same issue (this thread shows 12 pages of frustrated users). Again, not a surprise for Apple’s services recently.
Oh well, time to give up on the Apple Intelligence approach. Let’s move on to the next one.
Well, the iOS framework engineers don’t seem to be the only engineers at Apple capable of coming up with Machine Learning APIs in Swift. Apparently, there’s a whole separate way of doing it — with models downloaded to your app. Not great for the user’s storage, but great for me!
Again, MiniMax does it in a heartbeat, especially after being given the documentation and one or two Medium posts. Time to run it on my iPhone and… gibberish.
The CPU spins to 100% and the model starts generating. But it’s all gibberish. And no “stop” token is ever generated, so this goes on for a long time.
At this point, the only explanation is: I’m completely incompetent and can’t even get a simple “ready made” framework to execute what I want. Or, rather, MiniMax is! The good thing about offloading your work to an LLM is that you can blame it for your shortcomings. Time to get my hands dirty and do it myself, typing code on my keyboard, like the ancient Mayan and Aztec programmers probably did.
I went back to the documentation and the Medium posts and, much to my surprise, MiniMax had followed them to the letter. I even went back to some deprecated generation methods, and the output was also gibberish. And now there’s no one to blame but myself. I go to work every day and this impostor-syndrome-inducing problem silently consumes me.
After 3 days of trying to get it to work, I’m ready to give up…
…until, on a Tuesday morning, at 7-8 AM, I have an idea: let me, just in case, run this on my old iPhone 15 Pro. Up to this point, I had been running it on my daily driver, an iPhone 16 Pro Max that was a replacement phone sent by Apple Care after a small clubbing mishap (in which my iPhone was irreparably smashed). I rush to get everything ready before it’s time to go to work and: it works! Gemma, Qwen, and all the other models generate coherent responses!
I stop and think: this cannot be a hardware issue, right? Of course not. The iPhone 15 is still running iOS 18. The iPhone 16 is running 26. It must be an OS issue. Well, time to be late for my work standup and update the old phone. The curiosity is too much. Many minutes later… same results, now on iOS 26. The plot is thickening.
After that work day, and after many lunch and coffee discussions with coworkers about the sources of my troubles, I get home and immediately set about debugging MLX as it runs, if possible. The game plan is:
* Use a known-to-be-reliable model, that fits in RAM (I went with quantized Gemma)
* Use a simple prompt, in my case “What is 2+2?”
* Run everything with temperature set to 0.0 — maybe that’s enough to remove variability
* Find where the model iterates through the layers, and print out the MLXArray/tensor values at each layer as the input goes through
A few moments later, I find where I need to be. Breakpoints added, logs added, and off to the races.
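For flavor, the instrumentation looks roughly like this in MLX’s Python bindings (the app itself uses MLX Swift, so treat this as an illustrative sketch: the model id and the llama-style module layout are assumptions, and the causal mask is omitted since the goal is comparable tensor dumps, not faithful generation):

import mlx.core as mx
from mlx_lm import load

# Load a small quantized model and tokenize the test prompt.
model, tokenizer = load("mlx-community/gemma-2-2b-it-4bit")  # assumed model id
tokens = mx.array(tokenizer.encode("What is 2+2?"))[None]

# Walk the transformer blocks by hand and dump each layer's output,
# assuming the module tree mlx-lm uses for llama-style models
# (model.model.embed_tokens / model.model.layers).
h = model.model.embed_tokens(tokens)
for i, layer in enumerate(model.model.layers):
    h = layer(h, mask=None, cache=None)
    print(f"{i}: {h}")  # save these lines to a file, one dump per device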
I run it on my iPhone 16 Pro Max. The model loads and the prompt is “What is 2+2?”. The tensors start printing out, line after line after line. For once, the logs aren’t complete gibberish — they’re numbers. Floating point values representing the model’s internal state as it processes the input. I save the output to a file and do the same on my iPhone 15 Pro. Same model, same prompt, same code. Time to compare.
I grep for a pattern I know should be consistent — an array at log-line 58, right before the values get normalized/softmaxed. On a working device, I hypothesize this should be the same every time.
On the iPhone 15 Pro:
3: “[[[[53.875, 62.5625, -187.75, …, 42.625, 6.25, -21.5625]]]]”
On the iPhone 16 Pro Max:
3: “[[[[191.5, 23.625, 173.75, …, 1298, -147.25, -162.5]]]]”
Huh. Not close. Not at all. These values are orders of magnitude off. I double-check the start of the logs and both phones show the same:
1: “array([[[0.162842, -0.162842, -0.48877, …, -0.176636, 0.0001297, 0.088501],\n [-0.348633, -2.78906, 0, …, 0.84668, 0, -1.69336],\n [-1.30957, 1.57324, -1.30957, …, -0.0010376, -0.0010376, 1.12305],\n …,\n [-0.348633, -2.78906, 0, …, 0.84668, 0, -1.69336],\n [0.296875, 0.59375, 0.890625, …, -0.59375, 0.296875, -0.890137],\n [1.02734, -0.616211, -0.616211, …, -0.275879, -0.551758, 0.275879]]], dtype=float16)”
OK, so the model receives the same input, but at some point the values start to go off. Like, way off. In order to make sure I’m not crazy, I do one last thing: run the same thing on my Mac. I make the app run in iPad compatibility mode and…
3: “[[[[53.875, 62.5625, -187.75, …, 42.625, 6.25, -21.5625]]]]”
The model isn’t broken. The code isn’t broken. Most importantly, I’m not broken*. My phone is broken.
*arguable, but besides the point here
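If you want to mechanize that comparison rather than eyeball grep output, a small script along these lines works, assuming the numbered dump format shown above (a sketch, not the tooling actually used here):

import re
import sys

def load_dump(path):
    # Parse lines like: 3: "[[[[53.875, 62.5625, ...]]]]" into {3: [floats]}
    rows = {}
    for line in open(path):
        m = re.match(r'\s*(\d+):\s*"?\[*(.*)', line)
        if m:
            nums = re.findall(r'-?\d+\.\d+|-?\d+', m.group(2))
            rows[int(m.group(1))] = [float(n) for n in nums]
    return rows

a, b = load_dump(sys.argv[1]), load_dump(sys.argv[2])
for k in sorted(set(a) & set(b)):
    worst = max((abs(x - y) for x, y in zip(a[k], b[k])), default=0.0)
    print(f"log-line {k}: max |difference| among leading values = {worst:g}")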
Let me explain what I think is going on here: the iPhone 16 Pro Max contains Apple’s A18 chip with its Neural Engine—a specialized accelerator for machine learning operations. MLX uses Metal to compile tensor operations for this accelerator. Somewhere in that stack, the computations are going very wrong. I don’t think it’s a widespread issue, but I am disappointed that a relatively recently replaced iPhone from Apple Care came with such a defect.
However, if my Apple Intelligence troubles are related — and they may well be; I’d assume that code and MLX are not dissimilar in the operations being done — it could be that those 12 pages of users are stuck in a similar dilemma, but without the means to debug it.
I spent 3 days thinking I was incompetent. I blamed MiniMax. I blamed myself. The entire time, my $1,400 phone had broken hardware. I could lose more time figuring out exactly what is wrong with it, but it’s literally not worth my time.
I guess I can at least take a lesson that, when debugging, I should always consider the physical layer. I spent three days assuming this was a software problem — my code, the library, the framework, my skills as a developer. The breakthrough was basically: “What if I’m not dumb and it’s not my code?”
As for my phone: it’ll probably go back to Apple, as a trade-in for a new iPhone 17 Pro Max that hopefully 🤞 can do math.
Well, now it’s Feb. 1st and I have an iPhone 17 Pro Max to test with and… everything works as expected. So it’s pretty safe to say that THAT specific instance of iPhone 16 Pro Max was hardware-defective.
...
Read the original on journal.rafaelcosta.me »
Xikipedia is a pseudo social media feed that algorithmically shows you content from Simple Wikipedia. It is made as a demonstration of how even a basic non-ML algorithm, with no data from other users, can quickly learn what you engage with and suggest more similar content. No data is collected or shared here: the algorithm runs locally, and the data disappears once you refresh or close the tab.
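The page doesn’t spell out the algorithm beyond that description, but a local, non-ML engagement loop of the kind it describes can be surprisingly small. A hypothetical sketch (category names and weighting factors invented):

import random

# Per-session category weights; nothing leaves this process.
weights = {"History": 1.0, "Biology": 1.0, "Music": 1.0, "Sports": 1.0}

def pick_category():
    cats = list(weights)
    return random.choices(cats, weights=[weights[c] for c in cats])[0]

def record_engagement(category, dwell_seconds, liked):
    # Engaging with an article boosts its category; skipping decays it.
    weights[category] *= 1.3 if (liked or dwell_seconds > 10) else 0.9

# Simulated session: the user lingers only on Music articles.
for _ in range(50):
    c = pick_category()
    record_engagement(c, dwell_seconds=15 if c == "Music" else 1, liked=False)
print(max(weights, key=weights.get))  # "Music" dominates within a few swipes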
Source code on GitHub, discuss on fedi, bluesky, or twitter.
Since the content and images shown are from random Wikipedia articles, you will likely see NSFW content. Please only continue if you’re an adult.
...
Read the original on xikipedia.org »
It still shocks me how much difference there is between AI users. I think it explains a lot about the often confusing (to me) coverage in the media about AI and its productivity impact.
It’s now clear to me that there are two types of users and, by extension, two types of organisations they work for.
First, you have the “power users”, who are all in on adopting new AI technology - Claude Code, MCPs, skills, etc. Surprisingly, these people are often not very technical. I’ve seen far more non-technical people than I’d expect using Claude Code in terminal, using it for dozens of non-SWE tasks. Finance roles seem to be getting enormous value out of it (unsurprisingly, as Excel on the finance side is remarkably limiting when you start getting used to the power of a full programming ecosystem like Python).
Secondly, you have the people who are generally only chatting to ChatGPT or similar. So many people I wouldn’t expect are still in this camp.
One extremely jarring realisation was just how poor Microsoft Copilot is. It has enormous market share in enterprise as it is bundled with various Office 365 subscriptions, yet it feels like a poorly cloned version of the (already not great) ChatGPT interface. The “agent” feature is absolutely laughable compared to what a CLI coding agent can do (including Microsoft’s own, confusingly named GitHub Copilot CLI).
To really underline this, Microsoft itself is rolling out Claude Code to internal teams, despite (obviously) having access to Copilot at near zero cost and significant ownership of OpenAI. I think this sums up quite how far behind they are.
The problem is that in enterprise, Copilot is often the only allowed AI tool, so that’s all you can use without either potentially losing your job or spending a lot of effort trying to procure another AI tool. It’s slow, and the code execution tool in it doesn’t work properly: it fails horribly with large(ish) files, seemingly due to very, very aggressive memory and CPU limitations.
This is becoming an existential risk for many enterprises. Senior decision makers are no doubt using these tools, getting poor results, and therefore writing off AI, and/or spending a fortune with various large consulting outfits to get not very far.
Enterprise corporate IT policy results in a completely disastrous combination of limitations that basically ensures that people cannot successfully use more ‘cutting edge’ AI tooling.
Firstly, they tend to have extremely locked down environments, with no ability to run even a basic script interpreter locally (VBA if you are lucky, but even that may be limited by various Group Policies). Secondly, they’re locked into legacy software with no real “internal facing” APIs on their core workflows, which means agents have nothing to connect to even if you could run them.
Finally, they tend to have extremely siloed engineering departments (which may be completely outsourced), so there’s nobody internally who could build the infrastructure to run safely sandboxed agents even if they wanted to.
The security concerns are real. You definitely do not want people YOLOing coding agents over production databases with no control, and as I’ve covered, sandboxing agents is difficult.
However, this does cause a real problem, insofar as you don’t have an engineering team that can help build the infrastructure to run safely sandboxed agents against your datasets.
I’ve also spoken to many smaller companies that don’t have all this baggage and are absolutely flying with AI. The gap is so obvious when you can see both sides of it.
On one hand, you have Microsoft’s (awful) Copilot integration for Excel (in fairness, the Gemini integration in Google Sheets is also bad). So you can imagine financial directors trying to use it, watching it make a complete mess of the simplest tasks, and never touching it again.
On the other, you have a non-technical executive who’s got his head round Claude Code and can run e.g. Python locally. I helped one recently almost one-shot the conversion of a 30-sheet, mind-numbingly complicated Excel financial model to Python with Claude Code.
Once the model is in Python, you effectively have a data science team in your pocket with Claude Code. You can easily run Monte Carlo simulations, pull external data sources as inputs, build web dashboards, and have Claude Code work with you to really interrogate weaknesses in your model (or business). It’s a pretty magical experience watching someone realise they have so much power at their fingertips, without having to grind away for hours or days in Excel.
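To make that concrete, here is a toy example of the kind of analysis that becomes a few lines once the model lives in Python rather than Excel (all figures invented):

import random

# Treat monthly revenue growth as uncertain; everything else fixed.
def year_end_cash(rng):
    revenue, cash = 100_000.0, 50_000.0
    for _ in range(12):
        revenue *= 1 + rng.gauss(0.02, 0.03)  # growth ~ N(2%, 3%) per month
        cash += revenue - 95_000              # fixed monthly costs
    return cash

rng = random.Random(1)
runs = sorted(year_end_cash(rng) for _ in range(10_000))
for label, q in [("p5", 0.05), ("median", 0.50), ("p95", 0.95)]:
    print(f"{label}: {runs[int(q * len(runs))]:,.0f}")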
This effectively leads to a situation where smaller company employees are able to be so much more productive than the equivalent at an enterprise. It often used to be that people at small companies really envied the resources & teams that their larger competitors had access to - but increasingly I think the pendulum is swinging the other way.
I’m starting to get a feel for what the future of work looks like. The first observation is that (often) the real leaps are being made organically by employees, not from a top-down AI strategy. Where I see the real productivity gains is in small teams deciding to try to build an AI-assisted workflow for a process; as they are the ones that know that process inside out, they can get very good results - unlike an often outsourced software engineering team who have absolutely zero experience doing the process that they are helping automate. I think this is the opposite of what most ‘digital transformation’ projects looked like in enterprise.
Secondly, companies that have some sort of APIs for internal systems are going to be able to do far more than those that don’t. This might be as simple as a read-only data warehouse employees can connect to and run queries on behalf of users, or it could go as far as many complex core business processes being completely API’d.
Thirdly, this all needs to be wrapped up in some sort of secure mechanism, but I actually think a hosted VM running some sort of code agent with well-thought-through network restrictions would work well, at least for read-only reporting. For creating and editing data I don’t think we quite have the model for non-technical users (especially) to be able to use agents safely (yet).
Finally, legacy enterprise SaaS players either have enormous lock in, or are extremely vulnerable depending on how you look at it. Most are not “API-first” products, and the APIs they have tend to be really for developer usage - not optimised for thousands of employees to ping in weird and wonderful inefficient ways. But if they are the source of truth for the company, they are going to be very difficult to migrate away from and bottleneck a lot of productivity gains.
Again, smaller companies tend to use newer products which have far better thought-through APIs (simply because they weren’t created many decades ago, with various interfaces grafted on over time).
The user prompts, the agent synthesises - connecting to APIs and producing outputs on demand.
What I’ve come to realise is that the power of having a bash sandbox with a programming language and API access to systems, combined with an agentic harness, produces outrageously good results for non-technical users. It can effectively replace nearly every standard productivity app out there - both classic Microsoft Office-style ones and also web apps. It can build any report you ask for - and export it however you like. To me this seems like the future of knowledge work.
The bifurcation is real and seems to be, if anything, speeding up dramatically. I don’t think there’s ever been a time in history when a tiny team could outcompete a company one thousand times its size so easily.
...
Read the original on martinalderson.com »
Termux is an Android terminal application and Linux environment.
Note that this repository is for the app itself (the user interface and the terminal emulation). For the packages installable inside the app, see termux/termux-packages.
Quick how-to about Termux package management is available at Package Management. It also has info on how to fix repository is under maintenance or down errors when running apt or pkg commands.
We are looking for Termux Android application maintainers.
NOTICE: Termux may be unstable on Android 12+. Android OS will kill any (phantom) processes greater than 32 (the limit is for all apps combined) and also kill any processes using excessive CPU. You may get a [Process completed (signal 9) - press Enter] message in the terminal without actually exiting the shell process yourself. Check the related issue #2366, the issue tracker, the phantom, cached and empty processes docs, and this TLDR comment on how to disable trimming of phantom and excessive-CPU-usage processes. A proper docs page will be added later. An option to disable the killing should be available in Android 12L or 13, so upgrade at your own risk if you are on Android 11, especially if you are not rooted.
The core Termux app comes with the following optional plugin apps.
NOTICE: It is highly recommended that you update to v0.118.0 or higher ASAP for various bug fixes, including a critical world-readable vulnerability reported here. See below for information regarding Termux on Google Play.
Termux can be obtained through the various sources listed below; only Android >= 7 has full support for the app and packages.
Support for both app and packages was dropped for Android 5 and 6 on 2020-01-01 at v0.83, however it was re-added just for the app without any support for package updates on 2022-05-24 via the GitHub sources. Check here for the details.
The APK files of different sources are signed with different signature keys. The Termux app and all its plugins use the same sharedUserId com.termux, so all their APKs installed on a device must have been signed with the same signature key to work together, and they must therefore all be installed from the same source. Do not attempt to mix them together, i.e. do not try to install an app or plugin from F-Droid and another one from a different source like GitHub. Android Package Manager will also normally not allow installation of APKs with different signatures and you will get errors on installation like App not installed, Failed to install due to an unknown error, INSTALL_FAILED_UPDATE_INCOMPATIBLE, INSTALL_FAILED_SHARED_USER_INCOMPATIBLE, signatures do not match previously installed version, etc. This restriction can be bypassed with root or with custom ROMs.
If you wish to install from a different source, then you must first uninstall any and all existing Termux or Termux plugin APKs from your device, then install all the new APKs from the same new source. Check the Uninstallation section for details. You may also want to consider Backing up Termux before the uninstallation so that you can restore it after re-installing Termux from the different source.
In the following paragraphs, “bootstrap” refers to the minimal packages that are shipped with the termux-app itself to start a working shell environment. Its zips are built and released here.
The Termux application can be obtained from F-Droid here.
You do not need to download the F-Droid app (via the Download F-Droid link) to install Termux. You can download the Termux APK directly from the site by clicking the Download APK link at the bottom of each version section.
It usually takes a few days (or even a week or more) for updates to be available on F-Droid once an update has been released on GitHub. The F-Droid releases are built and published by F-Droid once they detect a new GitHub release. The Termux maintainers do not have any control over the building and publishing of the Termux apps on F-Droid. Moreover, the Termux maintainers also do not have access to the APK signing keys of F-Droid releases, so we cannot release an APK ourselves on GitHub that would be compatible with F-Droid releases.
The F-Droid app may often not notify you of updates, and you will manually have to do a pull-down swipe action in the Updates tab of the app for it to check for updates. Make sure battery optimizations are disabled for the app; check https://dontkillmyapp.com/ for details on how to do that.
Only a universal APK is released, which will work on all supported architectures. The APK and bootstrap installation size will be ~180MB. F-Droid does not support architecture specific APKs.
The Termux application can be obtained on GitHub either from GitHub Releases for versions >= 0.118.0 or from GitHub Build Action workflows. For Android >= 7, only install apt-android-7 variants. For Android 5 and 6, only install apt-android-5 variants.
The APKs for GitHub Releases will be listed under Assets drop-down of a release. These are automatically attached when a new version is released.
The APKs for GitHub Build action workflows will be listed under Artifacts section of a workflow run. These are created for each commit/push done to the repository and can be used by users who don’t want to wait for releases and want to try out the latest features immediately or want to test their pull requests. Note that for action workflows, you need to be logged into a GitHub account for the Artifacts links to be enabled/clickable. If you are using the GitHub app, then make sure to open workflow link in a browser like Chrome or Firefox that has your GitHub account logged in since the in-app browser may not be logged in.
The APKs for both of these are debuggable and are compatible with each other but they are not compatible with other sources.
Both universal and architecture specific APKs are released. The APK and bootstrap installation size will be ~180MB if using universal and ~120MB if using architecture specific. Check here for details.
Security warning: APK files on GitHub are signed with a test key that has been shared with the community. This IS NOT an official developer key and everyone can use it to generate releases for their own testing. Be very careful when using Termux GitHub builds obtained anywhere except https://github.com/termux/termux-app. Anyone is able to use it to forge a malicious Termux update installable over the GitHub build. Think twice about installing Termux builds distributed via Telegram or other social media. If your device gets caught by malware, we will not be able to help you.
The test key shall not be used to impersonate @termux and can’t be used for this anyway. This key is not trusted by us and it is quite easy to detect its use in user generated content.
There is currently a build of Termux available on Google Play for Android 11+ devices, with extensive adjustments in order to pass policy requirements there. This is under development and has missing functionality and bugs (see here for status updates) compared to the stable F-Droid build, which is why most users who can should still use F-Droid or GitHub build as mentioned above.
Currently, Google Play will try to update installations away from F-Droid ones. Updating will still fail, as sharedUserId has been removed. A planned 0.118.1 F-Droid release will fix this by setting a higher version code than the one used for the Play Store app. Meanwhile, to prevent Google Play from attempting to download and then failing to install the Google Play releases over existing installations, you can open the Termux app pages on Google Play, click on the 3-dot options button in the top right, and disable the Enable auto update toggle. However, the Termux app updates will still show in the Play Store app updates list.
If you want to help out with testing the Google Play build (or cannot install Termux from other sources), be aware that it’s built from a separate repository (https://github.com/termux-play-store/) - be sure to report issues there, as any issues encountered might very well be specific to that repository.
Uninstallation may be required if a user doesn’t want Termux installed in their device anymore or is switching to a different install source. You may also want to consider Backing up Termux before the uninstallation.
To uninstall Termux completely, you must uninstall any and all existing Termux or its plugin app APKs listed in Termux App and Plugins.
Go to Android Settings -> Applications and then look for those apps. You can also use the search feature, if it’s available on your device, and search for termux in the applications list.
Even if you think you have not installed any of the plugins, it’s strongly suggested to go through the application list in Android settings and double-check.
All community links are available here.
The main ones are the following.
You can help debug problems of the Termux app and its plugins by setting the appropriate logcat Log Level in Termux app settings -> Debugging -> Log Level (requires Termux app version >= 0.118.0). The Log Level defaults to Normal, and log level Verbose currently logs additional information. It’s best to revert the log level to Normal after you have finished debugging, since private data may otherwise be passed to logcat during normal operation and, moreover, additional logging increases execution time.
The plugin apps do not execute the commands themselves but send execution intents to Termux app, which has its own log level which can be set in Termux app settings -> Termux -> Debugging -> Log Level. So you must set log level for both Termux and the respective plugin app settings to get all the info.
Once log levels have been set, you can run the logcat command in the Termux app terminal to view the logs in real time (Ctrl+C to stop) or use logcat -d > logcat.txt to take a dump of the log. You can also view the logs from a PC over ADB. For more information, check the official Android logcat guide here.
Moreover, users can also automatically generate Termux file stat info and a logcat dump with the terminal’s long-hold options menu More -> Report Issue option and selecting YES in the prompt shown to add debug info. This can be helpful for reporting and debugging other issues. If the report generated is too large, then the Save To File option in the context menu (3 dots on top right) of ReportActivity can be used and the file viewed/shared instead.
Users must post complete report (optionally without sensitive info) when reporting issues. Issues opened with (partial) screenshots of error reports instead of text will likely be automatically closed/deleted.
The termux-shared library was added in v0.109. It defines the shared constants and utils of the Termux app and its plugins. It was created to allow for the removal of all hardcoded paths in the Termux app. Some of the Termux plugins are using this as well, and the rest will in the future. If you are contributing code that is using a constant or a util that may be shared, then define it in the termux-shared library if it currently doesn’t exist, and reference it from there. Update the relevant changelogs as well. Pull requests using hardcoded values will/should not be accepted. Termux app and plugin specific classes must be added under the com.termux.shared.termux package and general classes outside it. The termux-shared LICENSE must also be checked and updated if necessary when contributing code. The licenses of any external library or code must be honoured.
The main Termux constants are defined by TermuxConstants class. It also contains information on how to fork Termux or build it with your own package name. Changing the package name will require building the bootstrap zip packages and other packages with the new $PREFIX, check Building Packages for more info.
Check Termux Libraries for how to import termux libraries in plugin apps and Forking and Local Development for how to update termux libraries for plugins.
The versionName in build.gradle files of Termux and its plugin apps must follow the semantic version 2.0.0 spec in the format major.minor.patch(-prerelease)(+buildmetadata). When bumping versionName in build.gradle files and when creating a tag for new releases on GitHub, make sure to include the patch number as well, like v0.1.0 instead of just v0.1. The build.gradle files and attach_debug_apks_to_release workflow validates the version as well and the build/attachment will fail if versionName does not follow the spec.
Commit messages must use the Conventional Commits spec so that changelogs as per the Keep a Changelog spec can automatically be generated by the create-conventional-changelog script; check its repo for further details on the spec. The first letter of the type and description must be capital and the description should be in the present tense. The space after the colon : is necessary. For a breaking change, add an exclamation mark ! before the colon :, so that it is highlighted in the changelog automatically.
Only the types listed below must be used exactly as they are used in the changelog headings. For example, Added: Add foo, Added|Fixed: Add foo and fix bar, Changed!: Change baz as a breaking change, etc. You can optionally add a scope as well, like Fixed(terminal): Fix some bug. Do not use anything else as type, like add instead of Added, etc.
* Changed for changes in existing functionality.
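As a sketch, the expected header shape can be captured in a regular expression like the one below. The full Keep a Changelog type list is assumed here, since only one type survives in the excerpt above; the repo’s actual tooling is the create-conventional-changelog script already mentioned.

import re

TYPES = r"(Added|Changed|Deprecated|Removed|Fixed|Security)"  # assumed full list
HEADER = re.compile(rf"^{TYPES}(\|{TYPES})*(\([a-z-]+\))?!?: [A-Z].+")

for msg in ["Added: Add foo",
            "Added|Fixed: Add foo and fix bar",
            "Changed!: Change baz as a breaking change",
            "Fixed(terminal): Fix some bug",
            "add: Lowercase type is invalid"]:
    print(f"{'ok ' if HEADER.match(msg) else 'BAD'} {msg}")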
* Check TermuxConstants javadocs for instructions on what changes to make in the app to change package name.
* You also need to recompile bootstrap zip for the new package name. Check building bootstrap, here and here.
* Currently, not all plugins use TermuxConstants from termux-shared library and have hardcoded com.termux values and will need to be manually patched.
* If forking termux plugins, check Forking and Local Development for info on how to use termux libraries for plugins.
...
Read the original on github.com »
A whistleblower trapped inside a “pig butchering” scam compound gave WIRED a vast trove of its internal materials—including 4,200 pages of messages that lay out its operations in unprecedented detail.

Just before 8am one day last April, an office manager who went by the name Amani sent out a motivational message to his colleagues and subordinates. “Every day brings a new opportunity—a chance to connect, to inspire, and to make a difference,” he wrote in his 500-word post to an office-wide WhatsApp group. “Talk to that next customer like you’re bringing them something valuable—because you are.”

Amani wasn’t rallying a typical corporate sales team. He and his underlings worked inside a “pig butchering” compound, a criminal operation built to carry out scams—promising romance and riches from crypto investments—that often defraud victims out of hundreds of thousands or even millions of dollars at a time.

The workers Amani was addressing were eight hours into their 15-hour night shift in a high-rise building in the Golden Triangle special economic zone in Northern Laos. Like their marks, most of them were victims, too: forced laborers trapped in the compound, held in debt bondage with no passports. They struggled to meet scam revenue quotas to avoid fines that deepened their debt. Anyone who broke rules or attempted to escape faced far worse consequences: beatings, torture, even death.

The bizarre reality of daily life in a Southeast Asian scam compound—the tactics, the tone, the mix of cruelty and upbeat corporate prattle—is revealed at an unprecedented level of resolution in a leak of documents to WIRED from a whistleblower inside one such sprawling fraud operation. The facility, known as the Boshang compound, is one of dozens of scam operations across Southeast Asia that have enslaved hundreds of thousands of people. Often lured from the poorest regions of Asia and Africa with fake job offers, these conscripts have become engines of the most lucrative form of cybercrime in the world, coerced into stealing tens of billions of dollars.

Last June, one of those forced laborers, an Indian man named Mohammad Muzahir, contacted WIRED while he was still captive inside the scam compound that had trapped him. Over the following weeks, Muzahir, who initially identified himself only as “Red Bull,” shared with WIRED a trove of information about the scam operation. His leaks included internal documents, scam scripts, training guides, operational flowcharts, and photographs and videos from inside the compound.

Of all Muzahir’s leaks, the most revealing is a collection of screen recordings in which he scrolled through three months’ worth of the compound’s internal WhatsApp group chats. Those videos, which WIRED converted into 4,200 pages of screenshots, capture hour-by-hour conversations between the compound’s workers and their bosses—and the nightmare workplace culture of a pig butchering organization.

“It’s a slave colony that’s trying to pretend it’s a company,” says Erin West, a former Santa Clara County, California, prosecutor who leads an anti-scam organization called Operation Shamrock and who reviewed the chat logs obtained by WIRED. Another researcher who reviewed the leaked chat logs, Jacob Sims of Harvard University’s Asia Center, also remarked on their “Orwellian veneer of legitimacy.”

“It’s terrifying, because it’s manipulation and coercion,” says Sims, who studies Southeast Asian scam compounds.
“Combining those two things together motivates people the most. And it’s one of the key reasons why these compounds are so profitable.”

In another chat message, sent within hours of Amani’s saccharine pep talk, a higher-level boss weighed in: “Don’t resist the company’s rules and regulations,” he wrote. “Otherwise you can’t survive here.” The staffers responded with 26 emoji reactions, all thumbs-ups and salutes.

[Photo: Scam compound whistleblower Mohammad Muzahir, photographed in India after returning home from his ordeal as a forced laborer in the Golden Triangle.]

In total, according to WIRED’s analysis of the group chat, more than 30 of the compound’s workers successfully defrauded at least one victim in the 11 weeks of records available, totaling around $2.2 million in stolen funds. Yet the bosses in the chat frequently voiced their disappointment in the group’s performance, berated the staff for lack of effort, and imposed fine after fine.

Rather than explicit imprisonment, the compound relied on a system of indentured servitude and debt to control its workers. As Muzahir described it, he was paid a base salary of 3,500 Chinese yuan a month (about $500), which in theory entailed 75 hours a week of night shifts, including breaks to eat. Although his passport had been taken from him, he was told that if he could pay off his “contract” with a $5,400 payment, it would be returned to him and he would be allowed to leave.

In reality, the WhatsApp chats reveal how even that meager salary was almost entirely chipped away with fines. One message warns that anyone who fails to start a “first chat”—an introductory conversation with a scam victim—on any given day will be fined 50 yuan, and the failure will be announced to the group. Filing a false progress report results in a fine of 1,000 yuan. Falling asleep in the office, or “watching unrelated video, chatting with friends, and any activity that is not related to the job” are each punishable with a 200 yuan fine, as is any “disturbance” in the dormitory, where workers sleep five or six to a room in bunk beds.

One message notes a fine of 500 yuan for a worker who slept late, and another fined 200 yuan for not being in the dorm at “check-in time” following his shift. Resist a fine by not signing a form that admits to the misbehavior, and the fine is doubled.

[Image: An org chart for part of the Boshang scam compound, assembled from leaked messages and Muzahir’s knowledge of the operation.]

Muzahir himself described being fined so much that he was virtually broke. The food in the office cafeteria was also frequently denied as a punishment, the messages showed, with workers’ ID badges that granted access to the canteen sometimes being taken away for seven days for small infractions like tardiness. Even the freedom to bring in snacks and drinks—other than betel nuts, a stimulant—could be rescinded if staff underperformed. Time off was also withheld, with staff sometimes forced to work seven nights a week, Muzahir says.

Yet those punishments could be avoided, the bosses frequently promised, if they successfully scammed someone—or “opened a customer,” as the bosses euphemistically described scamming a new victim. (Scamming the same victim multiple times was called a “recharge.”) In theory, workers were entitled to a commission, over and above their salary, for any scams they pulled off.
Muzahir says he successfully perpetrated two scams during his months in the compound—both of which left him racked with regret, he says—and he was never paid after either of them.Bosses nonetheless used workers’ illusory hope of paying off their debt—or even going home rich—as a motivator. “I understand—when penalties or fines come your way, it’s easy to feel disheartened. But I urge you not to see it as a punishment, but as a lesson and an investment in your own growth,” wrote Amani. “Don’t fear the fine. Let it fuel your fire.”The more senior boss, who went by the name Da Hai, spelled out the carrot-and-stick approach more clearly. “The company’s incentives are much higher than the fines, so as long as you work hard to open new customers you will receive a generous reward!” he wrote.One of the bosses’ tactics was to play teams off one another, reprimanding underperforming workers while pointing to the success of other scammers in the compound. Each room of the office appears to have had a Chinese ceremonial drum, played when a worker successfully scammed a victim for a six-figure sum. “Do you know why the next office is beating drums?” wrote a higher-level boss called Alang.A victim had paid “480k,” a boss who goes by the name Libo answers.“It doesn’t matter, because he belongs to others,” Alang responds. “The important thing is, which one of you can play the drum?”A Chinese ceremonial drum stands in the office, ready to be struck when workers pulled off scams of a hundred thousand dollars or more.Beyond these manipulative tactics, the messages occasionally offer glimpses of a far harsher reality—as does the personal experience and testimony of Muzahir himself. Muzahir describes hearing stories of people who were tortured and says he was himself threatened by Amani with beating and electrocution if he didn’t find new “clients.” Sometimes coworkers disappeared without explanation.Eventually Muzahir came up with a plan to trick his captors into letting him leave. When the bosses caught on, he was held in a room, beaten, slapped and kicked, denied food and water, and made to drink a solution with a white powder dissolved in it, which seems to have been intended to make him more cooperative with their interrogation.Occasional messages in the chat logs hint that these cruel punishments lurked underneath the compound’s motivational messages. At one point, the boss Alang mentions a girl who “sneaked away from the company and went to work in a brothel,” and another person in the group mentions that the “company” still holds her passport. Among the captive workers, Muzahir says, rumor had it that the girl was in fact sold into prostitution, a practice documented in other accounts from scam compound survivors.At another point, while chastising the group for underperformance, the boss Da Hai hints at the large sum of money workers needed to produce if they ever hoped to leave the compound. “You continue to violate the company’s regulations,” he writes to the group. “If you continue like this, please prepare your compensation and get out of here.”Such references to paying “compensation” for release are in fact “coded words for ransom and debt bondage,” says Harvard’s Sims. The nation of Laos, Sims points out, is a signatory to the Palermo Protocol, which classifies anyone held in debt and forced to work without freedom of movement a victim of human trafficking. 
“There is no gray area here.”A Day in the Life of a ScammerThe leaked WhatsApp chats include a message from a boss who went by the name Terry laying out a strict work schedule for those under his supervision. “Obey and respect the working time,” the message says. Each shift would start at around 11:30 pm Beijing time—10:30 pm in Laos—with people told to arrive a few minutes early. Before the day ended at 2 pm Beijing time, there would be two break periods, one of which was set aside for meals. By 5 pm everyone was required to be back in their dormitories and “sleep or keep silence, no disturbing the others.” If the rules weren’t followed, fines would be issued and ID badges could be taken away.The reason for this nocturnal schedule was to sync with the waking hours of victims in the US—almost entirely Indian-American men. (It’s a common practice to pair scammers with victims of their own ethnicity, to avoid language and culture barriers.)In grim contrast to their actual lives, all staffers were required to post an imaginary daily schedule for their fake personas—the wealthy, attractive women they’d pretend to be during scams. In hour-by-hour breakdowns, they describe mornings spent meditating, practicing yoga, taking walks, and “setting positive intentions” for the day. Other activities include a “relaxed” lunch with their team, dinner with loved ones, and time at the gym—when in reality they were spending entire nights in front of a screen in a fluorescent-lit office space.Many of the staffers writing the schedules were nonetheless admonished for not sticking to the script while scamming. “The purpose of editing a daily plan is to let everyone know clearly what you are going to share with your clients today when you start working,” one boss complained. “I find that many people just do it to get the job done and don’t apply your plan to your clients.”An example of the schedules workers were required to post daily—not for themselves, but for the wealthy female personas they adopted in their chats with victims.The announcement of a successful scam posted to the compound’s WhatsApp group. This one celebrates a $338,000 theft.During each day’s work, the forced scammers were also required—under the threat of more fines—to report their scamming efforts back to the bosses in detail. The WhatsApp logs are filled with lengthy messages from every team member that offer those reports in identical message templates, listing their “team,” their name, and their recent online activity with the fake profiles. They would report how many active social media accounts they were operating, if any of their accounts were suspended, how many chats they’d started, how many were ongoing, any successful scams, and their target for the month. The internal chats also show scammers sharing with bosses and colleagues screenshots of their victim chats on Facebook Messenger, Instagram, Snapchat, and other chat apps, while asking questions about potential victims.Bosses frequently gave pointed feedback about how workers were managing the meta-narrative of their scams. “When sharing travel topics, you need to know how to share details,” one chat says. Another message from a boss admonishes workers not to mention the car their persona drives if they can’t provide a convincing photo of it.Managers would keep a close eye on the activity. 
On multiple occasions, bosses ask the forced workers to connect their WhatsApp accounts to the managers’ computers so they could monitor the conversations themselves.The 25 scripts and guides Muzahir shared with WIRED, too, offer a window into the tactics and training of the compound’s workers. Many of the guidance documents pertain to the nitty gritty of carrying out cryptocurrency investment scams, including how to build a friendship that can segue into an investment proposition, how to explain what cryptocurrency is, and what to do once a target agrees to make an investment.One document lists “100 chat topics,” geared toward building the emotional intimacy required for a romance scam (“What was your dream when you were little?” “What was the last time I cried for?”). Another suggests providing an update about having gotten into a car accident. “On my way to work in the morning, my car was hit by a car following at a traffic light, which almost delayed my meeting in the morning. Thank you for your concern. I am fine.”Multiple documents guide scammers to pretend they are currently making an investment, then introduce the idea that banks are resistant to letting their customers convert their money into cryptocurrency. “If we transfer or withdraw funds, they will have one less customer,” one proposed scam script says. “If everyone does this, then the bank will be in crisis and there will be a situation of capital rupture. I can understand their motives, but as a bank customer, I should not be hindered from transferring assets reasonably and legally. This is what makes me angry.”The documents also display a technique that researchers say is often used in Southeast Asian investment and romance scams: Attackers intentionally mention the concept of scams—even directly talking about the threat of investment scams—as a way of inoculating themselves against suspicion. The idea is that if a person is willing to talk openly about scams and isn’t avoiding the subject or acting strange about it, then they couldn’t be a scammer themself.A flow chart of the compound’s operations that Muzahir created and shared with WIRED in his first conversation with a reporter.That strategy goes so far as to include mentally preparing a victim for the anti-fraud warnings from their bank or even law enforcement that they may have to ignore in order to transfer large amounts of fiat currency into cryptocurrency. “I was going to transfer funds to my coinbase today, but I was deliberately delayed and obstructed by the bank staff,” one script reads, referring to the popular crypto wallet service Coinbase. “I also received an anti-fraud call from the FBI today, which wasted a lot of my time.”The materials Muzahir provided from the Boshang compound also document the key role generative AI tools play in its deceptions. Muzahir described to WIRED how the compound workers are trained in using tools like ChatGPT and Deepseek to come up with responses in chats with victims and craft natural-sounding turns of phrase. But even more crucial was the compound’s use of deepfake AI software to allow scammers to convincingly video chat with victims at their request using an AI-generated face, impersonating an individual whose photos they’ve stolen for a fake persona.The internal chat logs Muzahir captured describe a dedicated “AI room” where a female model conducts face-swapped calls on request with an endless parade of victims. 
One WhatsApp message from a boss to the group chat notes that “Sana (our model who helps us to call) is not available tonight. she is not feeling well. Therefore, don’t promise your customers to call them. Maybe she will come at work in the morning. Plan your work accordin[g]ly.”Other chats about the AI room relate to scheduling challenges given demand for face-swapped calls and the fact that a single model can only do one deepfake call at a time. One chat, for example, notes: “If there is a ‘busy’ sign on her door, change it to ‘free’ when you come out, so as to avoid crowding and frequent door openings.”The scripts Muzahir shared also include tips for delaying a video chat with a victim—perhaps until the scammer is prepared to use deepfake tools. “When we meet, it will not be awkward but rather we will look forward to it,” says one script about what to say when a victim asks to video chat. It continues, “We are strengthening our relationship every day. You have also seen my photos. When we meet, can you recognize me?”As dystopian as the Golden Triangle compound described in the leaked documents may be, its work environment appears to have been relatively lax compared to other compounds in countries like Cambodia or Myanmar. In those facilities, Operation Shamrock’s Erin West says, she has heard firsthand stories of workers being beaten simply for missing their quota of scams or being forced to work 18-hour shifts while standing, with none of the pretense of voluntary work in a corporate environment.The relative leniency of Muzahir’s compound, says Harvard’s Sims, likely stems from scam operations’ sense of total control in Laos’ Golden Triangle region—a zone of the country controlled largely by Chinese business interests that has become a host to crimes ranging from narcotics and organ sales to illegal wildlife trafficking. Even human trafficking victims who escape from a compound there, Sims points out, can be tracked down relatively easily thanks to Chinese organized crime’s influence over local law enforcement. “These guys don’t have to be held in a cell,” Sims says. “The whole place is a closed circuit.”Nonetheless, the Boshang compound that held Muzahir appears to have moved in November from the Golden Triangle to Cambodia, a country that’s become by some measures an even safer base for scammers to operate from. Based on messages from his former coworkers, Muzahir says he’s determined that the operation and its captive workers are now based in the town of Chrey Thom, what Sims and West both describe as a growing hot spot for scam operations.The move may have been precipitated, Sims speculates, by police raids on compounds across the region around that time. Many of those raids appear to have been part of a “performative crackdown,” as Sims puts it. (One such raid in June targeted the building where Muzahir’s compound had previously been located, but Muzahir says the workers who were rounded up by police were quickly released again and returned to work.)Nonetheless, the nuisance of even those superficial disruptions may have persuaded the operation’s bosses to relocate to Cambodia. In that country, even the family of the country’s prime minister, Hun Manet, has been linked to a corporate conglomerate that oversees a subsidiary with documented ties to the burgeoning scam industry. 
“It’s been a very hospitable environment to do this work,” West says.One of Muzahir’s old bosses also confirmed to him in a private text exchange that the compound is still “recruiting” new workers—victims trapped in a system of modern slavery hidden under a thin facade of a willing workplace.“This is a place to work, not to enjoy,” that same boss had written in the group chat during Muzahir’s time in the compound, in a rare moment when the mask of a normal office environment seemed to slip. “You can only enjoy life when you leave here.”Let us know what you think about this article. Submit a letter to the editor at mail@wired.com.
...
Read the original on www.wired.com »
So… Here we are again.
Today, after a minor disaster with my Obsidian vault, I decided to restore from Time Machine, and… I realized that it had silently broken across both my Tahoe machines. I use a Synology NAS as a Time Machine target, exporting the share over SMB, and that has worked flawlessly for years. This came as a surprise because I could have sworn it was working fine a couple of months ago–but no, it wasn’t.
For clarity: it just stopped doing backups, silently. No error messages, no notifications, nothing. Just no backups for around two months. On my laptop, I only noticed because I was trying to restore a file and the latest backup was from December. On my desktop, at least, I had a Thunderbolt external drive as a secondary backup.
After some research, I found out that the issue stems from Apple’s unilateral decision to change their SMB defaults (apparently without notifying anyone), and came across a few possible fixes.
What Seems To Be Working Now
I found this gist, which I am reproducing here for posterity. It seems to be working for me, but it entails editing the nsmb.conf file on the Mac itself–which is not exactly ideal, since I’m pretty sure Apple will break this again in the future.
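In practice, that means opening /etc/nsmb.conf as root (nano is just an example; any editor will do):

    sudo nano /etc/nsmb.conf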
…and adding the following lines (the file should be empty):
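Going by the explanation below, the lines boil down to the signing override (a sketch; the exact gist may include more):

    # /etc/nsmb.conf
    [default]
    signing_required=no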
The explanation here is that macOS Tahoe changed the default away from signing_required=no to a stricter signing policy, and NAS devices with relaxed SMB settings cannot handle this without explicit configuration.
Another common pitfall is encoding issues in machine names, so you should remove non-ASCII characters from the .sparsebundle name (that wasn’t an issue for me, but YMMV).
On the Synology side, the recommendation was to go to Control Panel > File Services > SMB > Advanced and set:
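As best I can tell, the gist’s recommendation amounts to something like this (DSM labels vary across versions, and the signing value is my assumption based on the explanation above, so treat these as a starting point):

- Maximum SMB protocol: SMB3
- Minimum SMB protocol: SMB2
- Enable server signing: Force (or at least Auto)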
That doesn’t quite match my DSM UI, but it’s close enough, and my settings now look like this:
My SMB settings, as of DSM 7.3.2-86009-1
Since I’m tired of Apple breaking Time Machine every few years, and of the lack of transparency around this (it’s not Synology’s fault), I have decided to implement a more robust solution that doesn’t depend on Synology’s SMB implementation.
I already have a Proxmox server with ZFS as the backend storage that has an LXC container running Samba for general file sharing, so I decided to look into that as a possible Time Machine target.
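A plain Samba share only needs a few vfs_fruit options to advertise itself as a Time Machine target. A minimal sketch, assuming Samba 4.8 or later (the path, user, and size cap are placeholders):

    # smb.conf share sketch; auto-discovery in Finder additionally
    # requires Avahi advertising the share via _adisk._tcp
    [timemachine]
    path = /tank/timemachine
    valid users = tm
    read only = no
    vfs objects = catia fruit streams_xattr
    fruit:time machine = yes
    fruit:time machine max size = 1T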
As it happens, mbentley/timemachine is a Docker image specifically designed for this purpose, and it seems to be well-maintained, so I’m testing it like this:
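Something along these lines (a sketch based on my reading of the image’s README; the credentials, size limit, and host path are placeholders, and the variable names are worth double-checking against the docs):

    # sketch: expose a Time Machine SMB share from the Docker host
    # TM_USERNAME/PASSWORD are placeholder credentials;
    # /tank/timemachine is an assumed dataset on the host;
    # VOLUME_SIZE_LIMIT caps the backup volume
    docker run -d --restart unless-stopped \
      --name timemachine \
      --network host \
      -e TM_USERNAME=timemachine \
      -e PASSWORD=changeme \
      -e VOLUME_SIZE_LIMIT=1T \
      -v /tank/timemachine:/opt/timemachine \
      mbentley/timemachine:smb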
Right now the first option (the reconfigured Synology share) seems to be working, but I will probably switch to the Docker solution in the near future, since it gives me more control over the SMB implementation and avoids relying on Synology’s software.
But if anyone from Apple is reading this: please, stop breaking Time Machine every few years. It’s a critical piece of infrastructure for many users, and the lack of communication around these changes is frustrating.
The Third Way: Borg Backup
I have been using Borg for some time now on Fedora, and I am considering using it for my Macs as well. Vorta seems like a decent GUI for it; I just haven’t tried it yet.
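The whole workflow fits in a few lines, which is part of the appeal (repo location and paths below are illustrative, not my actual setup):

    # illustrative: an encrypted repo on the NAS, one archive per run
    borg init --encryption=repokey ssh://backup@nas.local/volume1/borg/mac
    borg create --stats --compression zstd \
        ssh://backup@nas.local/volume1/borg/mac::'{hostname}-{now}' \
        ~/Documents ~/Projects
    # thin out old archives on a schedule
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
        ssh://backup@nas.local/volume1/borg/mac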
Plus, I’m annoyed enough already: earlier this morning I tried to set up a new iOS device, and the infamous “Restore in Progress: An estimated 100 MB will be downloaded…” bug (which has bitten me repeatedly over the last six years) is still there.
The usual fix is hitting Reset Network Settings, doing a full hardware reboot, and reconnecting to Wi-Fi… but this time it took three attempts.
Come on, Apple, get your act together. Hire people who care about the OS experience, not just Liquid Glass.
...
Read the original on taoofmac.com »