10 interesting stories served every morning and every evening.
For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the gphotos-sync tool stopped
working in March
2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up
Immich, a self-hostable photo manager.
Here is the end result: a few (live) photos from NixCon
2025:
I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini
X600), which consumes less than 10 W of power in idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024:
I installed Proxmox, an Open Source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server.
I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM.
For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough.
I (declaratively) installed
NixOS on that VM as described in this blog post:
Afterwards, I enabled Immich, with this exact configuration:
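The configuration itself didn't survive into this excerpt, so here is a minimal sketch of what enabling the NixOS Immich module looks like; services.immich.enable (and the openFirewall option mentioned below) are real module options, while the surrounding file layout is illustrative:

{ config, pkgs, ... }:
{
  # Enable the Immich module from nixpkgs; it listens on
  # localhost:2283 by default, and the firewall stays closed.
  services.immich.enable = true;
}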
At this point, Immich is available on localhost, but not over the network, because NixOS enables a firewall by default. I could enable the
services.immich.openFirewall option, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access — instead, I use tailscale serve to forward traffic to localhost:2283:
photos# tailscale serve --bg http://localhost:2283
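To double-check what is being forwarded, tailscale serve also has a status subcommand:

photos# tailscale serve status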
Because I have Tailscale’s MagicDNS
and TLS certificate provisioning
enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone.
At first, I tried importing my photos using the official Immich CLI:
% nix run nixpkgs#immich-cli -- login https://photos.example.ts.net secret
% nix run nixpkgs#immich-cli -- upload --recursive /home/michael/lib/photo/gphotos-takeout
Unfortunately, the upload was not running reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that the upload can fail with a timeout.
The other issue: only after the upload was done did I realize that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files:
Unfortunately, these files are not considered by immich-cli.
Luckily, there is a great third-party tool called
immich-go, which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives.
I ran immich-go as follows and it worked beautifully:
% immich-go \
    upload \
    from-google-photos \
    --server=https://photos.example.ts.net \
    --api-key=secret \
    ~/Downloads/takeout-*.zip
My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right.
I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?!
If anyone knows, please send an explanation (or a link!) and I will update the article.
I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background. These notifications are not required for background upload to work, as an Immich
developer confirmed on
Reddit. Open
Settings → Apps → Immich → Notifications and un-tick the permission checkbox:
Immich’s documentation on
backups contains some good recommendations. The Immich developers recommend backing up the entire contents of UPLOAD_LOCATION, which is /var/lib/immich on NixOS. The
backups subdirectory contains SQL dumps, whereas the 3 directories upload,
library and profile contain all user-uploaded data.
Hence, I have set up a systemd timer that runs rsync to copy /var/lib/immich
onto my PC, which is enrolled in a 3-2-1 backup
scheme.
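The timer itself isn't shown in this excerpt; as a sketch of the idea, a declarative pull-style job on the PC side could look like the following, where the hostname and the destination path /backup/immich are placeholders and the real setup may differ:

# Hypothetical sketch: a systemd service + timer on the backup PC
# that pulls /var/lib/immich from the photos VM over Tailscale.
systemd.services.backup-immich = {
  serviceConfig.Type = "oneshot";
  script = ''
    ${pkgs.rsync}/bin/rsync -a --delete \
      root@photos.example.ts.net:/var/lib/immich/ /backup/immich/
  '';
};
systemd.timers.backup-immich = {
  # A timer activates the service of the same name by default.
  wantedBy = [ "timers.target" ];
  timerConfig.OnCalendar = "daily";
};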
Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP.
To share images, I still upload them to Google Photos (depending on who I share them with).
The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente.
I got the impression that Immich is more popular in my bubble, and Ente made the impression on me that its scope is far larger than what I am looking for:
Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy).
I don’t need an end-to-end encrypted platform. I already have encryption on the transit layer (Tailscale) and disk layer (LUKS), no need for more complexity.
Immich is a delightful app! It’s very fast and generally seems to work well.
The initial import is smooth, but only if you use the right tool. Ideally, the official immich-cli could be improved. Or maybe immich-go could be made the official one.
I think the auto backup is too hard to configure on an iPhone, so that could also be improved.
But aside from these initial stumbling blocks, I have no complaints.
...
Read the original on michael.stapelberg.ch »
...
Read the original on grapheneos.social »
The controversy highlights a wider trend in which more of what people see online is pre-processed by AI before reaching them. Smartphone makers like Samsung and Google have long used AI to “enhance” images. Samsung previously admitted to using AI to sharpen moon photos, while Google’s Pixel “Best Take” feature stitches together facial expressions from multiple shots to create a single “perfect” group picture.
...
Read the original on www.ynetnews.com »
...
Read the original on social.growyourown.services »
NanoKVM is a hardware KVM switch developed by the Chinese company Sipeed. Released last year, it enables remote control of a computer or server using a virtual keyboard, mouse, and monitor. Thanks to its compact size and low price, it quickly gained attention online, especially when the company promised to release its code as open-source. However, as we’ll see, the device has some serious security issues. But first, let’s start with the basics.
As mentioned, NanoKVM is a KVM switch designed for remotely controlling and managing computers or servers. It features an HDMI port, three USB-C ports, an Ethernet port for network connectivity, and a special serial interface. The package also includes a small accessory for managing the power of an external computer.
Using it is quite simple. First, you connect the device to the internet via an Ethernet cable. Once online, you can access it through a standard web browser (though JavaScript JIT must be enabled). The device supports Tailscale VPN, but with some effort (read: hacking), it can also be configured to work with your own VPN, such as a WireGuard or OpenVPN server. Once set up, you can control it from anywhere in the world via your browser.
The device is connected to the target computer using an HDMI cable, capturing the video output that would normally be displayed on a monitor. This allows you to view the computer's screen directly in your browser, essentially acting as a virtual monitor.
Through the USB connection, NanoKVM can also emulate a keyboard, mouse, CD-ROM, USB drive, and even a USB network adapter. This means you can remotely control the computer as if you were physically sitting in front of it - but all through a web interface.
While it functions similarly to remote management tools like RDP or VNC, it has one key difference: there’s no need to install any software on the target computer. Simply plug in the device, and you’re ready to manage it remotely. NanoKVM even allows you to enter the BIOS, and with the additional accessory for power management, you can remotely turn the computer on, off, or reset it.
This makes it incredibly useful - you can power on a machine, access the BIOS, change settings, mount a virtual bootable CD, and install an operating system from scratch, just as if you were physically there. Even if the computer is on the other side of the world.
NanoKVM is also quite affordable. The fully-featured version, which includes all ports, a built-in mini screen, and a case, costs just over €60, while the stripped-down version is around €30. By comparison, a similar RaspberryPi-based device, PiKVM, costs around €400. However, PiKVM is significantly more powerful and reliable and, with a KVM splitter, can manage multiple devices simultaneously.
As mentioned earlier, the announcement of the device caused quite a stir online - not just because of its low price, but also due to its compact size and minimal power consumption. In fact, it can be powered directly from the target computer via a USB cable. So you have only one USB cable: in one direction it powers the NanoKVM, and in the other direction it simulates a keyboard, mouse, and other USB devices on the computer you want to manage.
The device is built on the open-source RISC-V processor architecture, and the manufacturer eventually did release the device’s software under an open-source license at the end of last year. (To be fair, one part of the code remains closed, but the community has already found a suitable open-source replacement, and the manufacturer has promised to open this portion soon.)
However, the real issue is security.
Understandably, the company was eager to release the device as soon as possible. In fact, an early version had a minor hardware design flaw - due to an incorrect circuit cable, the device sometimes failed to detect incoming HDMI signals. As a result, the company recalled and replaced all affected units free of charge. Software development also progressed rapidly, but in such cases, the primary focus is typically on getting basic functionality working, with security taking a backseat.
So, it’s not surprising that the developers made some serious missteps - rushed development often leads to stupid mistakes. But some of the security flaws I discovered in my quick (and by no means exhaustive) review are genuinely concerning.
One of the first security analyses revealed numerous vulnerabilities - and some rather bizarre discoveries. For instance, a security researcher even found an image of a cat embedded in the firmware. While the Sipeed developers acknowledged these issues and relatively quickly fixed at least some of them, many remain unresolved.
After purchasing the device myself, I ran a quick security audit and found several alarming flaws. The device initially came with a default password, and SSH access was enabled using this preset password. I reported this to the manufacturer, and to their credit, they fixed it relatively quickly. However, many other issues persist.
The user interface is riddled with security flaws - there's no CSRF protection, no way to invalidate sessions, and more. Worse yet, the encryption key used for password protection (when logging in via a browser) is hardcoded and identical across all devices. This is a major security oversight, as it allows an attacker to easily decrypt passwords. More problematic: this needed to be explained to the developers. Multiple times.
Another concern is the device's reliance on Chinese DNS servers. And configuring your own (custom) DNS settings is quite complicated. Additionally, the device communicates with Sipeed's servers in China - downloading not only updates but also the closed-source component mentioned earlier. To download this closed-source component, it needs to verify an identification key, which is stored on the device in plain text. Alarmingly, the device does not verify the integrity of software updates, includes a strange version of the WireGuard VPN application (which does not work on some networks), and runs a heavily stripped-down version of Linux that lacks systemd and apt. And these are just a few of the issues.
Were these problems simply oversights? Possibly. But what additionally raised red flags was the presence of tcpdump and aircrack - tools commonly used for network packet analysis and wireless security testing. While these are useful for debugging and development, they are also hacking tools that can be dangerously exploited. I can understand why developers might use them during testing, but they have absolutely no place on a production version of the device.
And then I discovered something even more alarming - a tiny built-in microphone that isn’t clearly mentioned in the official documentation. It’s a miniature SMD component, measuring just 2 x 1 mm, yet capable of recording surprisingly high-quality audio.
What’s even more concerning is that all the necessary recording tools are already installed on the device! By simply connecting via SSH (remember, the device initially used default passwords!), I was able to start recording audio using the amixer and arecord tools. Once recorded, the audio file could be easily copied to another computer. With a little extra effort, it would even be possible to stream the audio over a network, allowing an attacker to eavesdrop in real time.
Physically removing the microphone is possible, but it’s not exactly straightforward. As seen in the image, disassembling the device is tricky, and due to the microphone’s tiny size, you’d need a microscope or magnifying glass to properly desolder it.
To summarize: the device is riddled with security flaws, originally shipped with default passwords, communicates with servers in China, comes preinstalled with hacking tools, and even includes a built-in microphone - fully equipped for recording audio - without clear mention of it in the documentation. Could it get any worse?
I am pretty sure these issues stem from extreme negligence and rushed development rather than malicious intent. However, that doesn’t make them any less concerning.
That said, these findings don’t mean the device is entirely unusable.
Since the device is open-source, it’s entirely possible to install custom software on it. In fact, one user has already begun porting his own Linux distribution - starting with Debian and later switching to Ubuntu. With a bit of luck, this work could soon lead to official Ubuntu Linux support for the device.
This custom Linux version already runs the manufacturer’s modified KVM code, and within a few months, we’ll likely have a fully independent and significantly more secure software alternative. The only minor inconvenience is that installing it requires physically opening the device, removing the built-in SD card, and flashing the new software onto it. However, in reality, this process isn’t too complicated.
And while you're at it, you might also want to remove the microphone… or, if you prefer, connect a speaker. In my test, I used an 8-ohm, 0.5W speaker, which produced surprisingly good sound - essentially turning the NanoKVM into a tiny music player. Actually, the idea is not so bad, because PiKVM also added two-way audio support for their devices at the end of last year.
All this of course raises an interesting question: How many similar devices with hidden functionalities might be lurking in your home, just waiting to be discovered? And not just those of Chinese origin. Are you absolutely sure none of them have built-in miniature microphones or cameras?
You can start with your iPhone - last year Apple agreed to pay $95 million to settle a lawsuit alleging that its voice assistant Siri recorded private conversations. The data was shared with third parties and used for targeted ads. "Unintentionally", of course! Yes, the very same Apple that cares about your privacy so much.
And Google is doing the same. They are facing a similar lawsuit over their voice assistant, but the litigation likely won't be settled until this fall. So no, small Chinese startup companies are not the only problem. And if you are worried about Chinese companies' obligations towards the Chinese government, let's not forget that U.S. companies also have obligations to cooperate with the U.S. government. While Apple publicly claims they do not cooperate with the FBI and other U.S. agencies (because they care about your privacy so much), some media revealed that Apple was holding a series of secretive Global Police Summits at its Cupertino headquarters, where they taught police how to use their products for surveillance and policing work. And as one of the police officers pointed out, he has "never been part of an engagement that was so collaborative." Yep.
If you want to test the built-in microphone yourself, simply connect to the device via SSH and run the following command:
* arecord -Dhw:0,0 -d 3 -r 48000 -f S16_LE -t wav test.wav > /dev/null (this will capture the sound to a file named test.wav)
Now, speak or sing (perhaps the Chinese national anthem?) near the device, then press Ctrl + C, copy the test.wav file to your computer, and listen to the recording.
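Copying the file off the device can be done with scp; a sketch, where the device address, user, and remote path are placeholders:

scp root@nanokvm:/root/test.wav .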
...
Read the original on telefoncek.si »
The Core Project is a highly modular system with community-built extensions.
It starts with a recent Linux kernel, vmlinuz, and our root filesystem and start-up scripts packaged with a basic set of kernel modules in core.gz. Core (11MB) is simply the kernel + core.gz - this is the foundation for user created desktops, servers, or appliances. TinyCore is Core + Xvesa.tcz + Xprogs.tcz + aterm.tcz + fltk-1.3.tcz + flwm.tcz + wbar.tcz
TinyCore becomes simply an example of what the Core Project can produce, a 16MB FLTK/FLWM desktop.
CorePlus offers a simple way to get started using the Core philosophy, with its included community-packaged extensions enabling easy embedded, frugal, or pendrive installation of the user's choice of supported desktop, while maintaining the Core principle of mounted extensions with full package management.
It is not a complete desktop nor is all hardware completely supported. It represents only the core needed to boot into a very minimal X desktop typically with wired internet access.
The user has complete control over which applications and/or additional hardware to have supported, be it for a desktop, a netbook, an appliance, or a server. This is selectable by installing additional applications from online repositories, or by easily compiling most anything you desire using the tools provided.
Our goal is the creation of a nomadic ultra-small graphical desktop operating system capable of booting from CD-ROM, pendrive, or frugally from a hard drive. The desktop boots extremely fast and is able to support additional applications and hardware of the user's choice. While Tiny Core always resides in RAM, additional application extensions can either reside in RAM, be mounted from a persistent storage device, or be installed onto a persistent storage device.
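For the pendrive route, one common approach is writing the ISO to a USB stick directly; this is a sketch that assumes the image is hybrid-bootable (the project also documents dedicated USB install tools), and the ISO name and device node are placeholders, so double-check the target device before writing:

sudo dd if=TinyCore-current.iso of=/dev/sdX bs=4M status=progress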
We invite interested users and developers to explore Tiny Core. Within our forums we have an open development model. We encourage shared knowledge. We promote community involvement and community-built application extensions. Anyone can contribute to our project by packaging their favorite application or hardware support to run in Tiny Core. The Tiny Core Linux Team currently consists of eight members who peruse the forums to assist, from answering questions to helping package new extensions.
Join us here and on IRC Freenode #tinycorelinux.
...
Read the original on www.tinycorelinux.net »
Sam Altman's Dirty DRAM Deal, Or: How the AI Bubble, Panic, and Unpreparedness Stole Christmas

Written by Tom of Moore's Law Is Dead

At the beginning of November, I ordered a 32GB DDR5 kit for pairing with a Minisforum BD790i X3D motherboard, and three weeks later those very same sticks of DDR5 are now listed for a staggering $330 - a 156% increase in price from less than a month ago! At this rate, it seems likely that by Christmas, that DDR5 kit alone could be worth more than the entire Zen 4 X3D platform I planned to pair it with! How could this happen, and more specifically - how could this happen THIS quickly? Well, buckle up! I am about to tell you the story of Sam Altman's Dirty DRAM Deal, or: How the AI bubble, panic, and unpreparedness stole Christmas…

But before I dive in, let me make it clear that my RAM kit's 156% jump in price isn't a fluke or some extreme example of what's going on right now. Nope, and in fact, I'd like to provide two more examples of how impossible it is becoming to get ahold of RAM - these were provided by a couple of our sources within the industry:

* One source that works at a US retailer stated that a RAM manufacturer called them to inquire if they might buy RAM from them to stock up for their other customers. This would be like Corsair asking a Best Buy if they had any RAM around.
* Another source that works at a prebuilt PC company was recently given an estimate for when they would receive RAM orders if they placed them now…and they were told December…of 2026.

So what happened? Well, it all comes down to three perfectly synergistic events:

1. Two unprecedented RAM deals that took everyone by surprise.
2. The secrecy and size of the deals triggered full-scale panic buying from everyone else.
3. The market had almost zero safety stock left due to tariffs, worry about RAM prices over the summer, and stalled equipment transfers.

Below, we're going to walk through each of these factors - and then I'm going to warn you about which hardware categories will be hit the hardest, which products are already being cancelled, and what you should buy before the shelves turn into a repeat of 2021-2022…because this is doomed to turn into much more than just RAM scarcity…

Part I: OpenAI signed deals with Samsung and SK Hynix for 40% of the world's DRAM supply. Now, did OpenAI's competition suspect some big RAM deals could be signed in late 2025? Yes. Ok, but did they think it would be deals this huge and with multiple companies? NO! In fact, if you go back and read reporting on Sam Altman's now infamous trip to South Korea on October 1st, even just mere hours before the massive deals with Samsung and SK Hynix were announced - most reporting simply mentioned vague reports about Sam talking to Samsung, SK Hynix, TSMC, and Foxconn. But the reporting at the time was soft, almost dismissive - "exploring ties," "seeking cooperation," "probing for partnerships." Nobody hinted that OpenAI was about to swallow up to 40% of global DRAM output - even on the morning before it happened! Nobody saw this coming - this is clear in the lack of reporting about the deals before they were announced, and every MLID source who works in DRAM manufacturing and distribution insists this took everyone in the industry by surprise.

To be clear - the shock wasn't that OpenAI made a big deal, no, it was that they made two massive deals this big, at the same time, with Samsung and SK Hynix simultaneously! In fact, according to our sources - both companies had no idea how big each other's deal was, nor how close to simultaneous they were.
And this secrecy mattered. It mattered a lot.

Had Samsung known SK Hynix was about to commit a similar chunk of supply - or vice-versa - the pricing and terms would have likely been different. It's entirely conceivable they wouldn't have both agreed to supply such a substantial part of global supply if they had known more…but at the end of the day - OpenAI did succeed in keeping the circles tight, locking down the NDAs, and leveraging the fact that these companies assumed the other wasn't giving up this much wafer volume simultaneously…in order to make a surgical strike on the global RAM supply chain…and it's worked so far…

Part II — Instant Panic: How did we miss this?

Imagine you're running a hyperscaler, or maybe you're a major OEM, or perhaps pretend that you are simply one of OpenAI's chief competitors: On October 1st of 2025, you would have woken up to the news that OpenAI had just cornered the memory market more aggressively than any company in the last decade, and you hadn't heard even a murmur that this was coming beforehand! Well, you would probably make some follow-up calls to colleagues in the industry, and then also quickly hear rumors that it wasn't just you - even the two largest suppliers didn't see each other's simultaneous cooperation with OpenAI coming! You wouldn't go: "Well, that's an interesting coincidence", no, you would say: "WHAT ELSE IS GOING ON THAT WE DON'T KNOW ABOUT?"

Again - it's not the size of the deals that's solely the issue here, no, it's also the secrecy of them. On October 1st, Silicon Valley executives and procurement managers panicked over concerns like these:

* What other deals don't we know about? Is this just the first of many?
* None of our DRAM suppliers warned us ahead of time! We have to assume they also won't in the future, and that it's possible even more of the global DRAM supply could be bought up without us getting a single warning!
* We know OpenAI's competitors are already panic-buying! If we don't move we might be locked out of the market until 2028!

OpenAI's competitors, OEMs, and cloud providers scrambled to secure whatever inventory remained out of self-defense - self-defense in a world that was entirely unprepared, due to the accelerant I'll now explain in Part III…

Normally, the DRAM market has buffers: warehouses of emergency stock, excess wafer starts, older DRAM manufacturing machinery being sold off to budget brands while the big brands upgrade their production lines…but not in 2025. In 2025 those would-be buffers were depleted for three separate reasons:

1. Tariff chaos. Companies had deliberately reduced how much DRAM they ordered for their safety stock over the summer of 2025 because tariffs were changing almost weekly. Every RAM purchase risked being made at the wrong moment - and so fewer purchases were made.
2. Prices had been falling all summer. Because of the hesitancy to purchase as much safety stock as usual, RAM prices were also genuinely falling over time. And, obviously, when memory is getting cheaper month over month, the last thing you feel is pressure to buy a commodity that could be cheaper the next month…so everyone waited.
3. Secondary RAM manufacturing had stalled. Budget brands normally buy older DRAM fabrication equipment from mega-producers like Samsung when Samsung upgrades their DRAM lines to the latest and greatest equipment. This allows the DRAM market to grow more than it would otherwise, because it means any upgrading of the fanciest production lines is still a net gain to the market.
However, Korean memory firms have been terrified that reselling old equipment to China-adjacent OEMs might trigger U.S. retaliation…and so those machines have been sitting idle in warehouses since early spring.

Yep, there was no cushion. OpenAI hit the market at the exact moment it was least prepared. And now it's time for the biggest twist of all - a twist that really should be getting discussed by far more people, in this writer's opinion: OpenAI isn't even bothering to buy finished memory modules! No, their deals are unprecedentedly only for raw wafers - uncut, unfinished, and not even allocated to a specific DRAM standard yet. It's not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM! Right now it seems like these wafers will just be stockpiled in warehouses - like a kid who hides the toybox because they're afraid nobody wants to play with them, and thus selfishly feels nobody but them should get the toys!

And let's just say it: Here is the uncomfortable truth Sam Altman is always loath to admit in interviews: OpenAI is worried about losing its lead. The last 18 months have seen competitors catching up fast - Anthropic, Meta, xAI, and specifically Google's Gemini 3 has gotten a ton of praise just in the past week. Everyone's chasing training capacity. Everyone needs memory. DRAM is the lifeblood of scaling inference and training throughput. Cutting supply to your rivals is not a conspiracy theory. It's a business tactic as old as business itself. And so, when you consider how secretive OpenAI was about their deals with Samsung and SK Hynix, but additionally how unready they were to immediately utilize their warehouses of DRAM wafers - it sure seems like a primary goal of these deals was to cut off supply to rivals, and not just an attempt to protect OpenAI's own supply…

Part V — What will be cancelled? What should you buy now?

Alright, now that we are done explaining the cause, let's get to the fallout - because even if the RAM shortage miraculously improves immediately behind the scenes - even if the AI bubble instantly popped or 10 companies started tooling up for more DRAM capacity this second (and many are, to be fair) - at a minimum the next six to nine months are already screwed. (See above: DRAM manufacturers are quoting 13-month lead times for DDR5!) This is not a temporary blip. This could be a once-in-a-generation shock. So what gets hit first? What gets hit hardest? Well, below is an E through S-Tier ranking of which products are "the most screwed":

S-Tier (Already Screwed - Too Late to Buy)

* RAM itself, obviously. RAM prices have "exploded". The detonation is in the past.
* SSDs. These tend to follow DRAM pricing with a lag.
* RADEON GPUs. AMD doesn't bundle RAM in their BOM kits to AIBs the way Nvidia does. In fact, the RX 9070 GRE 16GB this channel leaked months ago is almost certainly cancelled, according to our sources.
* XBOX. Microsoft didn't plan. Prices may rise and/or supply may dwindle in 2026.
* Nvidia GPUs. Nvidia maintains large memory inventories for its board partners, giving them a buffer. But high-capacity GPUs (like a hypothetical 24GB 5080 SUPER) are on ice for now because stores were never sufficiently built up. In fact, Nvidia is quietly telling partners that their SUPER refresh "might" launch Q3 2026 - although most partners think it's just a placeholder for when Nvidia expects new capacity to come online, and thus SUPER may never launch.

C-Tier (Think about buying soon)

* Laptops and phones. These companies negotiate immense long-term contracts, so they're not hit immediately. But once their stockpiles run dry, watch out!

D-Tier (Consider buying soon, but there's no rush)

* PlayStation. Sony planned better than almost anyone else. They bought aggressively during the summer price trough, which is why they can afford a Black Friday discount while everyone else is raising prices.
* Anything without RAM. Specifically, CPUs that do not come with coolers could see prices drop over time, since there could be a dip in demand for CPUs if nobody has the RAM to feed them in systems.

???-Tier

* Steam Machine. Valve keeps things quiet, but the big unknown is whether they pre-bought RAM months ago before announcing their much-hyped Steam Machine. If they did already stockpile an ample supply of DDR5 - then Steam Machine should launch fine, but supply could dry up temporarily at some point while they wait for prices to drop. However, if they didn't plan ahead - expect a high launch price and very little resupply…it might even need to be cancelled, or there might need to be a variant offered without RAM included (BYO RAM Edition!).

And that's it! This last bit was the most important part of the article in this writer's opinion - an attempt at helping you avoid getting burned. Well, actually, there is one other important reason for this article's existence I'll tack onto the end - a hope that other people start digging into what's going on at OpenAI. I mean seriously - do we even have a single reliable audit of their financials to back up them outrageously spending this much money… Heck, I've even heard from numerous sources that OpenAI is "buying up the manufacturing equipment as well" - and without mountains of concrete proof, and/or more input from additional sources on what that really means…I don't feel I can touch that hot potato without getting burned…but I hope someone else will…
...
Read the original on www.mooreslawisdead.com »
...
Read the original on haveibeenflocked.com »
Reddit user Dycus built a camera using the sensor from an optical mouse. After about 65 hours of work, Dycus had a low-resolution black-and-white camera with multiple shooting modes, housed in a nifty 3D-printed body.
PetaPixel has previously reported on similar projects that turn old optical computer mice into functional cameras, but Dycus’ project is unique in that he designed a full-blown camera.
Optical computer mice work by detecting movement with a photoelectric cell (or sensor) and a light. The light is emitted downward, striking a desk or mousepad, and then reflecting to the sensor. The sensor has a lens to help direct the reflected light, enabling the mouse to convert precise physical movement into an input for the computer's on-screen cursor. The way the reflected light changes in response to movement is translated into cursor movement values.
It’s a clever solution for a fundamental computer problem: how to control the cursor. For most computer users, that’s fine, and they can happily use their mouse and go about their day. But when Dycus came across a PCB from an old optical mouse, which they had saved because they knew it was possible to read images from an optical mouse sensor, the itch to build a mouse-based camera was too much to ignore.
The new optical mouse camera has a lot of neat features, including multiple shooting modes, numerous color palettes (the camera itself has 64 shades of gray), controllable exposure, and 32kB of on-camera storage to save up to 48 pictures. In addition to a standard single-shot mode, the camera also captures quad shots and “smear” shots, which are panoramas.
“The panorama ‘smear shot’ is definitely my favorite mode, it scans out one column at a time across the screen as you sweep the camera,” Dycus writes on Reddit. “It’s scaled 2x vertically but 1x horizontally, so you get extra ‘temporal resolution’ horizontally if you do the sweep well.”
The optical mouse camera can also record movements, like it would if it were integrated into an actual mouse, and convert motion into drawings on the camera’s screen.
Given that the camera isn’t even sniffing one megapixel territory — its standard photos are just 900 pixels versus the 1,000,000 required to hit 1MP — the image quality is not particularly impressive, but as Dycus notes and Game Boy Camera enthusiasts can attest, it’s not about the resolution, it’s about the fun factor.
“Despite the low resolution, it’s easily possible to take recognizable pictures of stuff,” Dycus says. “The ‘high’ color depth definitely helps. I’d liken it to the Game Boy Camera (which I also enjoy), which is much higher resolution but only has four colors.”
...
Read the original on petapixel.com »
Let’s say you’ve done a computation in Wolfram Language. And now you want to scale it up. Maybe 1000x or more. Well, today we’ve released an extremely streamlined way to do that. Just wrap the scaled-up computation in RemoteBatchSubmit and off it’ll go to our new Wolfram Compute Services system. Then—in a minute, an hour, a day, or whatever—it’ll let you know it’s finished, and you can get its results.
For decades I’ve often needed to do big, crunchy calculations (usually for science). With large volumes of data, millions of cases, rampant computational irreducibility, etc. I probably have more compute lying around my house than most people—these days about 200 cores worth. But many nights I’ll leave all of that compute running, all night—and I still want much more. Well, as of today, there’s an easy solution—for everyone: just seamlessly send your computation off to Wolfram Compute Services to be done, at basically any scale.
For nearly 20 years we’ve had built-in functions like ParallelMap and ParallelTable in Wolfram Language that make it immediate to parallelize subcomputations. But for this to really let you scale up, you have to have the compute. Which now—thanks to our new Wolfram Compute Services—everyone can immediately get.
The underlying tools that make Wolfram Compute Services possible have existed in the Wolfram Language for several years. But what Wolfram Compute Services now does is to pull everything together to provide an extremely streamlined all-in-one experience. For example, let’s say you’re working in a notebook and building up a computation. And finally you give the input that you want to scale up. Typically that input will have lots of dependencies on earlier parts of your computation. But you don’t have to worry about any of that. Just take the input you want to scale up, and feed it to RemoteBatchSubmit. Wolfram Compute Services will automatically take care of all the dependencies, etc.
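In code, the basic pattern looks roughly like this; a minimal sketch, where myLongComputation and data are hypothetical stand-ins for whatever was built up in the notebook:

job = RemoteBatchSubmit[myLongComputation[data]]  (* returns a job object immediately *)
job["JobStatus"]          (* check on the job later *)
job["EvaluationResult"]   (* retrieve the result once the job has finished *)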
And another thing: RemoteBatchSubmit, like every function in Wolfram Language, is dealing with symbolic expressions, which can represent anything—from numerical tables to images to graphs to user interfaces to videos, etc. So that means that the results you get can immediately be used, say in your Wolfram Notebook, without any importing, etc.
OK, so what kinds of machines can you run on? Well, Wolfram Compute Services gives you a bunch of options, suitable for different computations, and different budgets. There’s the most basic 1 core, 8 GB option—which you can use to just “get a computation off your own machine”. You can pick a machine with larger memory—currently up to about 1500 GB. Or you can pick a machine with more cores—currently up to 192. But if you’re looking for even larger scale parallelism Wolfram Compute Services can deal with that too. Because RemoteBatchMapSubmit can map a function across any number of elements, running on any number of cores, across multiple machines.
OK, so here’s a very simple example—that happens to come from some science I did a little while ago. Define a function that randomly adds nonoverlapping pentagons to a cluster:
For 20 pentagons I can run this quickly on my machine:
But what about for 500 pentagons? Well, the computational geometry gets difficult and it would take long enough that I wouldn’t want to tie up my own machine doing it. But now there’s another option: use Wolfram Compute Services!
And all I have to do is feed my computation to RemoteBatchSubmit:
Immediately, a job is created (with all necessary dependencies automatically handled). And the job is queued for execution. And then, a couple of minutes later, I get an email:
Not knowing how long it’s going to take, I go off and do something else. But a while later, I’m curious to check how my job is doing. So I click the link in the email and it takes me to a dashboard—and I can see that my job is successfully running:
I go off and do other things. Then, suddenly, I get an email:
It finished! And in the mail is a preview of the result. To get the result as an expression in a Wolfram Language session I just evaluate a line from the email:
And this is now a computable object that I can work with, say computing areas
One of the great strengths of Wolfram Compute Services is that it makes it easy to use large-scale parallelism. You want to run your computation in parallel on hundreds of cores? Well, just use Wolfram Compute Services!
Here’s an example that came up in some recent work of mine. I’m searching for a cellular automaton rule that generates a pattern with a “lifetime” of exactly 100 steps. Here I’m testing 10,000 random rules—which takes a couple of seconds, and doesn’t find anything:
To test 100,000 rules I can use Parallelize and run in parallel, say across the 16 cores in my laptop:
Still nothing. OK, so what about testing 100 million rules? Well, then it’s time for Wolfram Compute Services. The simplest thing to do is just to submit a job requesting a machine with lots of cores (here 192, the maximum currently offered):
A few minutes later I get mail telling me the job is starting. After a while I check on my job and it’s still running:
I go off and do other things. Then, after a couple of hours I get mail telling me my job is finished. And there’s a preview in the email that shows, yes, it found some things:
And here they are—rules plucked from the hundred million tests we did in the computational universe:
But what if we wanted to get this result in less than a couple of hours? Well, then we’d need even more parallelism. And, actually, Wolfram Compute Services lets us get that too—using RemoteBatchMapSubmit. You can think of RemoteBatchMapSubmit as a souped-up analog of ParallelMap—mapping a function across a list of any length, splitting up the necessary computations across cores that can be on different machines, and handling the data and communications involved in a scalable way.
Because RemoteBatchMapSubmit is a “pure map” we have to rearrange our computation a little—making it run 100,000 cases of selecting from 1000 random instances:
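The rearranged input isn’t reproduced in this excerpt, but the shape of the call is roughly the following sketch, where testBatch is a hypothetical function that tests 1000 random rules per case and returns any hits:

job = RemoteBatchMapSubmit[testBatch, Range[100000]]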
The system decided to distribute my 100,000 cases across 316 separate “child jobs”, here each running on its own core. How is the job doing? I can get a dynamic visualization of what’s happening:
And it doesn’t take many minutes before I’m getting mail that the job is finished:
And, yes, even though I only had to wait for 3 minutes to get this result, the total amount of computer time used—across all the cores—is about 8 hours.
Now I can retrieve all the results, combining all the separate pieces I generated:
And, yes, if I wanted to spend a little more, I could run a bigger search, increasing the 100,000 to a larger number; and Wolfram Compute Services would seamlessly scale up.
Like everything around Wolfram Language, Wolfram Compute Services is fully programmable. When you submit a job, there are lots of options you can set. We already saw the option which lets you choose the type of machine to use. Currently the choices range from Basic1x8 (1 core, 8 GB) through Basic4x16 (4 cores, 16 GB) to “parallel compute” Compute192x384 (192 cores, 384 GB) and “large memory” Memory192x1536 (192 cores, 1536 GB).
Different classes of machine cost different numbers of credits to run. And to make sure things don’t go out of control, you can set options for the maximum time in seconds and the maximum number of credits to use.
Then there’s notification. The default is to send one email when the job is starting, and one when it’s finished. There’s an option that lets you give a name to each job, so you can more easily tell which job a particular piece of email is about, or where the job is on the web dashboard. (If you don’t give a name to a job, it’ll be referred to by the UUID it’s been assigned.)
A notification option lets you say what notifications you want, and how you want to receive them. There can be notifications whenever the status of a job changes, or at specific time intervals, or when specific numbers of credits have been used. You can get notifications either by email, or by text message. And, yes, if you get notified that your job is going to run out of credits, you can always go to the Wolfram Account portal to top up your credits.
There are many properties of jobs that you can query. A central one is "EvaluationResult". But you can also get a whole association of related information:
If your job succeeds, it’s pretty likely "EvaluationResult" will be all you need. But if something goes wrong, you can easily drill down to study the details of what happened with the job, for example by looking at its logs.
If you want to know all the jobs you’ve initiated, you can always look at the web dashboard, but you can also get symbolic representations of the jobs from RemoteBatchJobs[]:
For any of these job objects, you can ask for properties, and you can for example also apply RemoteBatchJobAbort to abort them.
Once a job has completed, its result will be stored in Wolfram Compute Services—but only for a limited time (currently two weeks). Of course, once you’ve got the result, it’s very easy to store it permanently, for example by putting it into the Wolfram Cloud using CloudPut[expr]. (If you know you’re going to want to store the result permanently, you can also do the CloudPut right inside your RemoteBatchSubmit.)
Talking about programmatic uses of Wolfram Compute Services, here’s another example: let’s say you want to generate a compute-intensive report once a week. Well, then you can put together several very high-level Wolfram Language functions to deploy a scheduled task that will run in the Wolfram Cloud to initiate jobs for Wolfram Compute Services:
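The deployment code itself isn’t shown in this excerpt; as a sketch of that pattern, where generateReport and the deployment name are hypothetical:

CloudDeploy[
  ScheduledTask[
    RemoteBatchSubmit[generateReport[]],  (* kick off a batch job each run *)
    Quantity[1, "Weeks"]                  (* run the task weekly *)
  ],
  "weekly-report-task"
]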
And, yes, you can initiate a Wolfram Compute Services job from any Wolfram Language system, whether on the desktop or in the cloud.
Wolfram Compute Services is going to be very useful to many people. But actually it’s just part of a much larger constellation of capabilities aimed at broadening the ways Wolfram Language can be used.
Mathematica and the Wolfram Language started—back in 1988—as desktop systems. But even at the very beginning, there was a capability to run the notebook front end on one machine, and then have a “remote kernel” on another machine. (In those days we supported, among other things, communication via phone line!) In 2008 we introduced built-in parallel computation capabilities like and . Then in 2014 we introduced the Wolfram Cloud—both replicating the core functionality of Wolfram Notebooks on the web, and providing services such as instant APIs and scheduled tasks. Soon thereafter, we introduced the Enterprise Private Cloud—a private version of Wolfram Cloud. In 2021 we introduced Wolfram Application Server to deliver high-performance APIs (and it’s what we now use, for example, for Wolfram|Alpha). Along the way, in 2019, we introduced Wolfram Engine as a streamlined server and command-line deployment of Wolfram Language. Around Wolfram Engine we built WSTPServer to serve Wolfram Engine capabilities on local networks, and we introduced WolframScript to provide a deployment-agnostic way to run command-line-style Wolfram Language code. In 2020 we then introduced the first version of , to be used with cloud services such as AWS and Azure. But unlike with Wolfram Compute Services, this required “do it yourself” provisioning and licensing with the cloud services. And, finally, now, that’s what we’ve automated in Wolfram Compute Services.
OK, so what’s next? An important direction is the forthcoming Wolfram HPCKit—for organizations with their own large-scale compute facilities to set up their own back ends to RemoteBatchSubmit, etc. RemoteBatchSubmit is built in a very general way that allows different “batch computation providers” to be plugged in. Wolfram Compute Services is initially set up to support just one standard batch computation provider: Wolfram Compute Services itself. HPCKit will allow organizations to configure their own compute facilities (often with our help) to serve as batch computation providers, extending the streamlined experience of Wolfram Compute Services to on-premise or organizational compute facilities, and automating what is often a rather fiddly process of job submission (which, I must say, personally reminds me a lot of the mainframe job control systems I used in the 1970s).
Wolfram Compute Services is currently set up purely as a batch computation environment. But within the Wolfram System, we have the capability to support synchronous remote computation, and we’re planning to extend Wolfram Compute Services to offer this—allowing one, for example, to seamlessly run a remote kernel on a large or exotic remote machine.
But this is for the future. Today we’re launching the first version of Wolfram Compute Services. Which makes “supercomputer power” immediately available for any Wolfram Language computation. I think it’s going to be very useful to a broad range of users of Wolfram Language. I know I’m going to be using it a lot.
...
Read the original on writings.stephenwolfram.com »