10 interesting stories served every morning and every evening.




1 606 shares, 29 trendiness

Self-hosting my photos with Immich

For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the gphotos-sync tool stopped working in March 2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up Immich, a self-hostable photo manager.

Here is the end result: a few (live) photos from NixCon 2025:

I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini X600), which consumes less than 10 W of power at idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024:

I installed Proxmox, an Open Source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server.

I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM.

For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough.

I (declaratively) installed NixOS on that VM as described in this blog post:

Afterwards, I enabled Immich, with this exact configuration:
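As a rough illustration (not necessarily the author’s exact settings), enabling Immich via the nixpkgs services.immich module can look like this:

```nix
{
  # Sketch only: option names follow the nixpkgs services.immich module,
  # but the author's exact configuration is not reproduced here.
  services.immich = {
    enable = true;
    # Listen on localhost only; tailscale serve forwards traffic to this port.
    host = "127.0.0.1";
    port = 2283;
  };
}
```

With settings like these, Immich only listens on localhost, which is exactly what the firewall discussion below relies on.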

At this point, Immich is available on localhost, but not over the network, because NixOS enables a firewall by default. I could enable the services.immich.openFirewall option, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access. Instead, I use tailscale serve to forward traffic to localhost:2283:

photos# tailscale serve --bg http://localhost:2283

Because I have Tailscale’s MagicDNS and TLS certificate provisioning enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone.

At first, I tried importing my photos using the official Immich CLI:

% nix run nixpkgs#immich-cli -- login https://photos.example.ts.net secret

% nix run nixpkgs#immich-cli -- upload --recursive /home/michael/lib/photo/gphotos-takeout

Unfortunately, the upload was not running reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that it can fail with a timeout.

The other issue was that even after the upload was done, I realized that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files:

Unfortunately, these files are not considered by immich-cli.

Luckily, there is a great third-party tool called immich-go, which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives.

I ran immich-go as follows and it worked beautifully:

% immich-go \
    upload \
    from-google-photos \
    --server=https://photos.example.ts.net \
    --api-key=secret \
    ~/Downloads/takeout-*.zip

My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right.

I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?!

If anyone knows, please send an explanation (or a link!) and I will update the article.

I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background. These notifications are not required for background upload to work, as an Immich developer confirmed on Reddit. Open Settings → Apps → Immich → Notifications and un-tick the permission checkbox:

Immich’s documentation on backups contains some good recommendations. The Immich developers recommend backing up the entire contents of UPLOAD_LOCATION, which is /var/lib/immich on NixOS. The backups subdirectory contains SQL dumps, whereas the three directories upload, library and profile contain all user-uploaded data.

Hence, I have set up a systemd timer that runs rsync to copy /var/lib/immich onto my PC, which is enrolled in a 3-2-1 backup scheme.
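The copy step such a timer runs could be sketched as a single rsync invocation like the following (the destination path is an illustrative assumption, not from the post):

```shell
# Sketch of the nightly copy step; /mnt/backup/immich is an assumed destination.
# -a preserves permissions and timestamps, -H preserves hard links, and
# --delete keeps the destination an exact mirror of /var/lib/immich.
rsync -aH --delete /var/lib/immich/ /mnt/backup/immich/
```

Mirroring with --delete matters here because Immich moves files between its subdirectories; without it, the backup would accumulate stale copies.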

Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP.

To share images, I still upload them to Google Photos (depending on who I share them with).

The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente.

I got the impression that Immich is more popular in my bubble, and Ente gave me the impression that its scope is far larger than what I am looking for:

Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy).

I don’t need an end-to-end encrypted platform. I already have encryption at the transit layer (Tailscale) and disk layer (LUKS); no need for more complexity.

Immich is a delightful app! It’s very fast and generally seems to work well.

The initial import is smooth, but only if you use the right tool. Ideally, the official immich-cli could be improved, or maybe immich-go could be made the official one.

I think the auto backup is too hard to configure on an iPhone, so that could also be improved.

But aside from these initial stumbling blocks, I have no complaints.


...

Read the original on michael.stapelberg.ch »

2 388 shares, 53 trendiness

GrapheneOS (@GrapheneOS@grapheneos.social)


...

Read the original on grapheneos.social »

3 380 shares, 11 trendiness

YouTube secretly tests AI video retouching without creators’ consent

The controversy highlights a wider trend in which more of what people see online is pre-processed by AI before reaching them. Smartphone makers like Samsung and Google have long used AI to “enhance” images. Samsung previously admitted to using AI to sharpen moon photos, while Google’s Pixel “Best Take” feature stitches together facial expressions from multiple shots to create a single “perfect” group picture.

...

Read the original on www.ynetnews.com »

4 380 shares, 15 trendiness

Fedi.Tips 🎄 (@FediTips@social.growyourown.services)


...

Read the original on social.growyourown.services »

5 340 shares, 38 trendiness

Telefoncek.si • How I discovered a hidden microphone on a Chinese NanoKVM

NanoKVM is a hardware KVM switch developed by the Chinese company Sipeed. Released last year, it enables remote control of a computer or server using a virtual keyboard, mouse, and monitor. Thanks to its compact size and low price, it quickly gained attention online, especially when the company promised to release its code as open-source. However, as we’ll see, the device has some serious security issues. But first, let’s start with the basics.

As mentioned, NanoKVM is a KVM switch designed for remotely controlling and managing computers or servers. It features an HDMI port, three USB-C ports, an Ethernet port for network connectivity, and a special serial interface. The package also includes a small accessory for managing the power of an external computer.

Using it is quite simple. First, you connect the device to the internet via an Ethernet cable. Once online, you can access it through a standard web browser (though JavaScript JIT must be enabled). The device supports Tailscale VPN, but with some effort (read: hacking), it can also be configured to work with your own VPN, such as a WireGuard or OpenVPN server. Once set up, you can control it from anywhere in the world via your browser.

The device is connected to the target computer using an HDMI cable, capturing the video output that would normally be displayed on a monitor. This allows you to view the computer’s screen directly in your browser, with the NanoKVM essentially acting as a virtual monitor.

Through the USB connection, NanoKVM can also emulate a keyboard, mouse, CD-ROM, USB drive, and even a USB network adapter. This means you can remotely control the computer as if you were physically sitting in front of it - but all through a web interface.

While it functions similarly to remote management tools like RDP or VNC, it has one key difference: there’s no need to install any software on the target computer. Simply plug in the device, and you’re ready to manage it remotely. NanoKVM even allows you to enter the BIOS, and with the additional accessory for power management, you can remotely turn the computer on, off, or reset it.

This makes it incredibly useful - you can power on a machine, access the BIOS, change settings, mount a virtual bootable CD, and install an operating system from scratch, just as if you were physically there. Even if the computer is on the other side of the world.

NanoKVM is also quite affordable. The fully-featured version, which includes all ports, a built-in mini screen, and a case, costs just over €60, while the stripped-down version is around €30. By comparison, a similar Raspberry Pi-based device, PiKVM, costs around €400. However, PiKVM is significantly more powerful and reliable and, with a KVM splitter, can manage multiple devices simultaneously.

As mentioned earlier, the announcement of the device caused quite a stir online - not just because of its low price, but also due to its compact size and minimal power consumption. In fact, it can be powered directly from the target computer via a USB cable, which it also uses to simulate a keyboard, mouse, and other USB devices. So a single USB cable powers the NanoKVM in one direction while carrying the simulated keyboard, mouse and other USB devices to the managed computer in the other.

The device is built on the open-source RISC-V processor architecture, and the manufacturer eventually did release the device’s software under an open-source license at the end of last year. (To be fair, one part of the code remains closed, but the community has already found a suitable open-source replacement, and the manufacturer has promised to open this portion soon.)

However, the real issue is security.

Understandably, the company was eager to release the device as soon as possible. In fact, an early version had a minor hardware design flaw - due to an incorrect circuit cable, the device sometimes failed to detect incoming HDMI signals. As a result, the company recalled and replaced all affected units free of charge. Software development also progressed rapidly, but in such cases, the primary focus is typically on getting basic functionality working, with security taking a backseat.

So, it’s not surprising that the developers made some serious missteps - rushed development often leads to stupid mistakes. But some of the security flaws I discovered in my quick (and by no means exhaustive) review are genuinely concerning.

One of the first security analyses revealed numerous vulnerabilities - and some rather bizarre discoveries. For instance, a security researcher even found an image of a cat embedded in the firmware. While the Sipeed developers acknowledged these issues and relatively quickly fixed at least some of them, many remain unresolved.

After purchasing the device myself, I ran a quick security audit and found several alarming flaws. The device initially came with a default password, and SSH access was enabled using this preset password. I reported this to the manufacturer, and to their credit, they fixed it relatively quickly. However, many other issues persist.

The user interface is riddled with security flaws - there’s no CSRF protection, no way to invalidate sessions, and more. Worse yet, the encryption key used for password protection (when logging in via a browser) is hardcoded and identical across all devices. This is a major security oversight, as it allows an attacker to easily decrypt passwords. More problematically, this had to be explained to the developers. Multiple times.

Another concern is the device’s reliance on Chinese DNS servers, and configuring your own (custom) DNS settings is quite complicated. Additionally, the device communicates with Sipeed’s servers in China - downloading not only updates but also the closed-source component mentioned earlier. To download this closed-source component, it verifies an identification key, which is stored on the device in plain text. Alarmingly, the device does not verify the integrity of software updates, includes a strange version of the WireGuard VPN application (which does not work on some networks), and runs a heavily stripped-down version of Linux that lacks systemd and apt. And these are just a few of the issues.

Were these problems simply oversights? Possibly. But what additionally raised red flags was the presence of tcpdump and aircrack - tools commonly used for network packet analysis and wireless security testing. While these are useful for debugging and development, they are also hacking tools that can be dangerously exploited. I can understand why developers might use them during testing, but they have absolutely no place on a production version of the device.

And then I discovered something even more alarming - a tiny built-in microphone that isn’t clearly mentioned in the official documentation. It’s a miniature SMD component, measuring just 2 x 1 mm, yet capable of recording surprisingly high-quality audio.

What’s even more concerning is that all the necessary recording tools are already installed on the device! By simply connecting via SSH (remember, the device initially used default passwords!), I was able to start recording audio using the amixer and arecord tools. Once recorded, the audio file could be easily copied to another computer. With a little extra effort, it would even be possible to stream the audio over a network, allowing an attacker to eavesdrop in real time.

Physically removing the microphone is possible, but it’s not exactly straightforward. As seen in the image, disassembling the device is tricky, and due to the microphone’s tiny size, you’d need a microscope or magnifying glass to properly desolder it.

To summarize: the device is riddled with security flaws, originally shipped with default passwords, communicates with servers in China, comes preinstalled with hacking tools, and even includes a built-in microphone - fully equipped for recording audio - without clear mention of it in the documentation. Could it get any worse?

I am pretty sure these issues stem from extreme negligence and rushed development rather than malicious intent. However, that doesn’t make them any less concerning.

That said, these findings don’t mean the device is entirely unusable.

Since the device is open-source, it’s entirely possible to install custom software on it. In fact, one user has already begun porting his own Linux distribution - starting with Debian and later switching to Ubuntu. With a bit of luck, this work could soon lead to official Ubuntu Linux support for the device.

This custom Linux version already runs the manufacturer’s modified KVM code, and within a few months, we’ll likely have a fully independent and significantly more secure software alternative. The only minor inconvenience is that installing it requires physically opening the device, removing the built-in SD card, and flashing the new software onto it. However, in reality, this process isn’t too complicated.

And while you’re at it, you might also want to remove the microphone… or, if you prefer, connect a speaker. In my test, I used an 8-ohm, 0.5 W speaker, which produced surprisingly good sound - essentially turning the NanoKVM into a tiny music player. Actually, the idea is not so bad, because PiKVM also added two-way audio support for their devices at the end of last year.

All this of course raises an interesting question: How many similar devices with hidden functionalities might be lurking in your home, just waiting to be discovered? And not just those of Chinese origin. Are you absolutely sure none of them have built-in miniature microphones or cameras?

You can start with your iPhone - last year Apple agreed to pay $95 million to settle a lawsuit alleging that its voice assistant Siri recorded private conversations. They shared the data with third parties and used it for targeted ads. “Unintentionally”, of course! Yes, that Apple, the one that cares about your privacy so much.

And Google is doing the same. They are facing a similar lawsuit over their voice assistant, but the litigation likely won’t be settled until this fall. So no, small Chinese startup companies are not the only problem. And if you are worried about Chinese companies’ obligations towards the Chinese government, let’s not forget that U.S. companies also have obligations to cooperate with the U.S. government. While Apple publicly claims they do not cooperate with the FBI and other U.S. agencies (because they care about your privacy so much), some media revealed that Apple was holding a series of secretive Global Police Summits at its Cupertino headquarters where they taught police how to use their products for surveillance and policing work. And as one of the police officers pointed out, he has “never been part of an engagement that was so collaborative.” Yep.

If you want to test the built-in microphone yourself, simply connect to the device via SSH and run the following commands:

* arecord -Dhw:0,0 -d 3 -r 48000 -f S16_LE -t wav test.wav &> /dev/null & (this will capture the sound to a file named test.wav)

Now, speak or sing (perhaps the Chinese national anthem?) near the device, then press Ctrl + C, copy the test.wav file to your computer, and listen to the recording.
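As for the real-time eavesdropping scenario mentioned above, a rough sketch of streaming the microphone over the network might look like this (the host name is a placeholder, and this assumes the device’s arecord plus aplay on the listening machine):

```shell
# Sketch only: run arecord on the NanoKVM over SSH, writing raw samples to
# stdout, and play them live on the local machine. "nanokvm" is a placeholder.
ssh root@nanokvm 'arecord -Dhw:0,0 -r 48000 -f S16_LE -t raw -' \
  | aplay -r 48000 -f S16_LE -t raw -
```

Using raw output on both ends avoids WAV header issues when piping an endless stream.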

...

Read the original on telefoncek.si »

6 319 shares, 32 trendiness

Tiny Core Linux, Micro Core Linux, 12MB Linux GUI Desktop, Live, Frugal, Extendable

The Core Project is a highly modular system with community-built extensions.

It starts with a recent Linux kernel, vmlinuz, and our root filesystem and start-up scripts packaged with a basic set of kernel modules in core.gz. Core (11MB) is simply the kernel + core.gz - this is the foundation for user-created desktops, servers, or appliances. TinyCore is Core + Xvesa.tcz + Xprogs.tcz + aterm.tcz + fltk-1.3.tcz + flwm.tcz + wbar.tcz

TinyCore thus becomes simply an example of what the Core Project can produce: a 16MB FLTK/FLWM desktop.

CorePlus offers a simple way to get started using the Core philosophy. Its included community-packaged extensions enable easy embedded, frugal, or pendrive installation of the user’s choice of supported desktop, while maintaining the Core principle of mounted extensions with full package management.

It is not a complete desktop, nor is all hardware completely supported. It represents only the core needed to boot into a very minimal X desktop, typically with wired internet access.

The user has complete control over which applications and/or additional hardware to support, be it for a desktop, a netbook, an appliance, or a server - selectable by installing additional applications from online repositories, or by easily compiling most anything you desire using the tools provided.

Our goal is the creation of a nomadic, ultra-small graphical desktop operating system capable of booting from cdrom, pendrive, or frugally from a hard drive. The desktop boots extremely fast and is able to support additional applications and hardware of the user’s choice. While Tiny Core always resides in RAM, additional application extensions can either reside in RAM, be mounted from a persistent storage device, or be installed onto a persistent storage device.

We invite interested users and developers to explore Tiny Core. Within our forums we have an open development model. We encourage shared knowledge. We promote community involvement and community-built application extensions. Anyone can contribute to our project by packaging their favorite application or hardware support to run in Tiny Core. The Tiny Core Linux Team currently consists of eight members who peruse the forums to assist, from answering questions to helping package new extensions.

Join us here and on IRC Freenode #tinycorelinux.

...

Read the original on www.tinycorelinux.net »

7 312 shares, 8 trendiness

Sam Altman’s Dirty DRAM Deal

Or: How the AI Bubble, Panic, and Unpreparedness Stole Christmas

Written by Tom of Moore’s Law Is Dead

At the beginning of November, I ordered a 32GB DDR5 kit for pairing with a Minisforum BD790i X3D motherboard, and three weeks later those very same sticks of DDR5 are now listed for a staggering $330, a 156% increase in price from less than a month ago! At this rate, it seems likely that by Christmas, that DDR5 kit alone could be worth more than the entire Zen 4 X3D platform I planned to pair it with! How could this happen, and more specifically, how could this happen THIS quickly? Well, buckle up! I am about to tell you the story of Sam Altman’s Dirty DRAM Deal, or: How the AI bubble, panic, and unpreparedness stole Christmas…

But before I dive in, let me make it clear that my RAM kit’s 156% jump in price isn’t a fluke or some extreme example of what’s going on right now. In fact, I’d like to provide two more examples of how impossible it is becoming to get ahold of RAM; these were provided by a couple of our sources within the industry:

* One source, who works at a US retailer, stated that a RAM manufacturer called them to ask whether they might buy RAM from the retailer to stock up for their other customers. This would be like Corsair asking a Best Buy if they had any RAM around.
* Another source, who works at a prebuilt PC company, was recently given an estimate for when they would receive RAM orders if they placed them now… and they were told December… of 2026.

So what happened?
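As a quick sanity check on those headline numbers: a 156% increase means the price was multiplied by 2.56, which implies the kit sold for roughly $129 a month earlier:

```shell
# A 156% increase multiplies the price by 2.56; invert to get the old price.
awk 'BEGIN { printf "%.2f\n", 330 / 2.56 }'   # prints 128.91
```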
Well, it all comes down to three perfectly synergistic events:

* Two unprecedented RAM deals that took everyone by surprise.
* The secrecy and size of the deals triggered full-scale panic buying from everyone else.
* The market had almost zero safety stock left due to tariffs, worry about RAM prices over the summer, and stalled equipment transfers.

Below, we’re going to walk through each of these factors, and then I’m going to warn you about which hardware categories will be hit the hardest, which products are already being cancelled, and what you should buy before the shelves turn into a repeat of 2021–2022. Because this is doomed to turn into much more than just RAM scarcity…

Part I — The Deals

It starts with OpenAI’s deals with Samsung and SK Hynix for 40% of the world’s DRAM supply. Now, did OpenAI’s competition suspect some big RAM deals could be signed in late 2025? Yes. Ok, but did they think it would be deals this huge, and with multiple companies? NO! In fact, if you go back and read reporting on Sam Altman’s now infamous trip to South Korea on October 1st, even mere hours before the massive deals with Samsung and SK Hynix were announced, most reporting simply mentioned vague reports about Sam talking to Samsung, SK Hynix, TSMC, and Foxconn. The reporting at the time was soft, almost dismissive: “exploring ties,” “seeking cooperation,” “probing for partnerships.” Nobody hinted that OpenAI was about to swallow up to 40% of global DRAM output, even on the morning before it happened! Nobody saw this coming. This is clear from the lack of reporting about the deals before they were announced, and every MLID source who works in DRAM manufacturing and distribution insists this took everyone in the industry by surprise.

To be clear, the shock wasn’t that OpenAI made a big deal; it was that they made two massive deals this big, at the same time, with Samsung and SK Hynix simultaneously! In fact, according to our sources, both companies had no idea how big each other’s deal was, nor how close to simultaneous they were. And this secrecy mattered. It mattered a lot.

Had Samsung known SK Hynix was about to commit a similar chunk of supply, or vice versa, the pricing and terms would likely have been different. It’s entirely conceivable they wouldn’t both have agreed to supply such a substantial part of global supply if they had known more. But at the end of the day, OpenAI succeeded in keeping the circles tight, locking down the NDAs, and leveraging the fact that each company assumed the other wasn’t giving up this much wafer volume simultaneously, in order to make a surgical strike on the global RAM supply chain. And it’s worked so far…

Part II — Instant Panic: How did we miss this?

Imagine you’re running a hyperscaler, or maybe you’re a major OEM, or perhaps pretend that you are simply one of OpenAI’s chief competitors. On October 1st of 2025, you would have woken up to the news that OpenAI had just cornered the memory market more aggressively than any company in the last decade, and you hadn’t heard even a murmur that this was coming beforehand! Well, you would probably make some follow-up calls to colleagues in the industry, and then quickly hear rumors that it wasn’t just you: even the two largest suppliers didn’t see each other’s simultaneous cooperation with OpenAI coming! You wouldn’t go: “Well, that’s an interesting coincidence.” No, you would say: “WHAT ELSE IS GOING ON THAT WE DON’T KNOW ABOUT?”

Again, it’s not solely the size of the deals that’s the issue here; it’s also the secrecy of them. On October 1st, Silicon Valley executives and procurement managers panicked over concerns like these:

* What other deals don’t we know about? Is this just the first of many?
* None of our DRAM suppliers warned us ahead of time! We have to assume they also won’t in the future, and that a huge share of global DRAM could be bought up without us getting a single warning!
* We know OpenAI’s competitors are already panic-buying! If we don’t move we might be locked out of the market until 2028!

OpenAI’s competitors, OEMs, and cloud providers scrambled to secure whatever inventory remained out of self-defense, and self-defense in a world that was entirely unprepared, due to the accelerant I’ll now explain in Part III…

Part III — Zero Safety Stock

Normally, the DRAM market has buffers: warehouses of emergency stock, excess wafer starts, older DRAM manufacturing machinery being sold off to budget brands while the big brands upgrade their production lines. But not in 2025. In 2025 those would-be buffers were depleted for three separate reasons:

* Tariff chaos. Companies had deliberately reduced how much DRAM they ordered for their safety stock over the summer of 2025 because tariffs were changing almost weekly. Every RAM purchase risked being made at the wrong moment, and so fewer purchases were made.
* Prices had been falling all summer. Because of the hesitancy to purchase as much safety stock as usual, RAM prices were also genuinely falling over time. And, obviously, when memory is getting cheaper month over month, the last thing you feel pressured to do is buy a commodity that could be cheaper the next month. So everyone waited.
* Secondary RAM manufacturing had stalled. Budget brands normally buy older DRAM fabrication equipment from mega-producers like Samsung when Samsung upgrades their DRAM lines to the latest and greatest equipment. This allows the DRAM market to grow more than it otherwise would, because upgrades to the fanciest production lines still add capacity to the overall market. However, Korean memory firms have been terrified that reselling old equipment to China-adjacent OEMs might trigger U.S. retaliation, and so those machines have been sitting idle in warehouses since early spring.

Yep, there was no cushion. OpenAI hit the market at the exact moment it was least prepared.

And now time for the biggest twist of all, one that in this writer’s opinion should be getting discussed by far more people: OpenAI isn’t even bothering to buy finished memory modules! No, their deals are, unprecedentedly, only for raw wafers: uncut, unfinished, and not even allocated to a specific DRAM standard yet. It’s not even clear if they have decided how or when they will finish them into RAM sticks or HBM! Right now it seems like these wafers will just be stockpiled in warehouses, like a kid who hides the toybox because they’re afraid nobody wants to play with them, and thus selfishly feels nobody but them should get the toys!

And let’s just say it. Here is the uncomfortable truth Sam Altman is always loath to admit in interviews: OpenAI is worried about losing its lead. The last 18 months have seen competitors catching up fast: Anthropic, Meta, xAI, and specifically Google’s Gemini 3 has gotten a ton of praise just in the past week. Everyone’s chasing training capacity. Everyone needs memory. DRAM is the lifeblood of scaling inference and training throughput. Cutting supply to your rivals is not a conspiracy theory. It’s a business tactic as old as business itself. And so, when you consider how secretive OpenAI was about their deals with Samsung and SK Hynix, and additionally how unready they were to immediately utilize their warehouses of DRAM wafers, it sure seems like a primary goal of these deals was to deny that supply to rivals, and not just to protect OpenAI’s own supply…

Part V — What will be cancelled? What should you buy now?

Alright, now that we are done explaining the why, let’s get to the what. Because even if the RAM shortage miraculously improves immediately behind the scenes, even if the AI bubble instantly popped or 10 companies started tooling up for more DRAM capacity this second (and many are, to be fair), at a minimum the next six to nine months are already screwed. See above: DRAM manufacturers are quoting 13-month lead times for DDR5! This is not a temporary blip. This could be a once-in-a-generation shock. So what gets hit first? What gets hit hardest? Well, below is an E-through-S-Tier ranking of which products are the most “screwed”:

S-Tier (Already Screwed — Too Late to Buy)

* RAM itself, obviously. RAM prices have “exploded”. The detonation is in the past.
* SSDs. These tend to follow DRAM pricing with a lag.
* RADEON GPUs. AMD doesn’t bundle RAM in their BOM kits to AIBs the way Nvidia does. In fact, the RX 9070 GRE 16GB this channel leaked months ago is almost certainly cancelled according to our sources.
* XBOX. Microsoft didn’t plan. Prices may rise and/or supply may dwindle in 2026.
* Nvidia GPUs. Nvidia maintains large memory inventories for its board partners, giving them a buffer. But high-capacity GPUs (like a hypothetical 24GB 5080 SUPER) are on ice for now because memory stores were never sufficiently built up. In fact, Nvidia is quietly telling partners that their SUPER refresh “might” launch Q3 2026, although most partners think that’s just a placeholder for when Nvidia expects new capacity to come online, and thus SUPER may never launch.

C-Tier (Think about buying soon)

* Laptops and phones. These companies negotiate immense long-term contracts, so they’re not hit immediately. But once their stockpiles run dry, watch out!

D-Tier (Consider buying soon, but there’s no rush)

* PlayStation. Sony planned better than almost anyone else. They bought aggressively during the summer price trough, which is why they can afford a Black Friday discount while everyone else is raising prices.
* Anything without RAM. Specifically, CPUs that do not come with coolers could see prices soften over time, since there could be a drop in demand for CPUs if nobody has the RAM to feed them in systems.

???-Tier — Steam Machine

* Valve keeps things quiet, but the big unknown is whether they pre-bought RAM months ago, before announcing their much-hyped Steam Machine. If they did already stockpile an ample supply of DDR5, then the Steam Machine should launch fine, though supply could dry up temporarily at some point while they wait for prices to drop. However, if they didn’t plan ahead, expect a high launch price and very little resupply. It might even need to be cancelled, or there might need to be a variant offered without RAM included (BYO RAM Edition!).

And that’s it! This last bit was the most important part of the article in this writer’s opinion: an attempt at helping you avoid getting burned. Well, actually, there is one other important reason for this article’s existence I’ll tack onto the end: a hope that other people start digging into what’s going on at OpenAI. I mean seriously, do we even have a single reliable audit of their financials to back up them outrageously spending this much money? Heck, I’ve even heard from numerous sources that OpenAI is “buying up the manufacturing equipment as well”, and without mountains of concrete proof, and/or more input from additional sources on what that really means, I don’t feel I can touch that hot potato without getting burned… but I hope someone else will…

...

Read the original on www.mooreslawisdead.com »

8 287 shares, 13 trendiness

License Plate Privacy Check

...

Read the original on haveibeenflocked.com »

9 227 shares, 15 trendiness

This Guy Built a Compact Camera Using an Optical Mouse

Reddit user Dycus built a cam­era us­ing the sen­sor from an op­ti­cal mouse. After about 65 hours of work, Dycus had a low-res­o­lu­tion black-and-white cam­era with mul­ti­ple shoot­ing modes, housed in a nifty 3D-printed body.

PetaPixel has pre­vi­ously re­ported on sim­i­lar pro­jects that turn old op­ti­cal com­puter mice into func­tional cam­eras, but Dycus’ pro­ject is unique in that he de­signed a full-blown cam­era.

Optical computer mice work by detecting movement with a photoelectric cell (or sensor) and a light. The light is emitted downward, striking a desk or mousepad, and then reflecting to the sensor. The sensor has a lens to help direct the reflected light, enabling the mouse to convert precise physical movement into an input for the computer's on-screen cursor. The way the reflected light changes in response to movement is translated into cursor movement values.
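To make the idea concrete, here is a toy sketch of the displacement estimate an optical mouse performs: compare two tiny grayscale frames and find the shift that best aligns them. Real sensors do this in dedicated silicon; the frame size, search range, and sum-of-squared-differences metric here are illustrative assumptions, not a description of any particular sensor.

```python
def estimate_shift(prev, curr, max_shift=2):
    """Return the (dx, dy) shift that minimizes the sum of squared
    differences between the overlapping regions of two equal-size frames."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = 0
            # compare only the region where both frames overlap under this shift
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    diff = prev[y][x] - curr[y + dy][x + dx]
                    err += diff * diff
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best

# A 5x5 frame with one bright spot, then the same spot moved right by 1 pixel:
frame_a = [[255 if (y, x) == (2, 2) else 0 for x in range(5)] for y in range(5)]
frame_b = [[255 if (y, x) == (2, 3) else 0 for x in range(5)] for y in range(5)]
print(estimate_shift(frame_a, frame_b))  # (1, 0)
```

Summing many such per-frame shifts is, in essence, what becomes the cursor movement values the article describes.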

It's a clever solution for a fundamental computer problem: how to control the cursor. For most computer users, that's fine, and they can happily use their mouse and go about their day. But when Dycus came across a PCB from an old optical mouse, which he had saved because he knew it was possible to read images from an optical mouse sensor, the itch to build a mouse-based camera was too much to ignore.

The new optical mouse camera has a lot of neat features, including multiple shooting modes, numerous color palettes (the camera itself has 64 shades of gray), controllable exposure, and 32 kB of on-camera storage to save up to 48 pictures. In addition to a standard single-shot mode, the camera also captures quad shots and "smear" shots, which are panoramas.


"The panorama 'smear shot' is definitely my favorite mode; it scans out one column at a time across the screen as you sweep the camera," Dycus writes on Reddit. "It's scaled 2x vertically but 1x horizontally, so you get extra 'temporal resolution' horizontally if you do the sweep well."
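The column-at-a-time idea can be sketched in a few lines: each captured frame contributes a single column, so sweeping the camera builds a wide image one column per frame. The frame contents and which column is sampled are invented for illustration, not taken from Dycus' firmware.

```python
def smear(frames, column):
    """Build a panorama by taking the same column from each frame.
    Returns a list of rows, one pixel per frame, left to right."""
    height = len(frames[0])
    return [[frame[row][column] for frame in frames] for row in range(height)]

# Three 2x3 "frames"; take the middle column (index 1) of each:
frames = [
    [[0, 10, 20], [30, 40, 50]],
    [[1, 11, 21], [31, 41, 51]],
    [[2, 12, 22], [32, 42, 52]],
]
print(smear(frames, 1))  # [[10, 11, 12], [40, 41, 42]]
```

This also shows why sweep speed matters: the horizontal axis of the result is really time, which is where the "temporal resolution" Dycus mentions comes from.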

The op­ti­cal mouse cam­era can also record move­ments, like it would if it were in­te­grated into an ac­tual mouse, and con­vert mo­tion into draw­ings on the cam­er­a’s screen.

Given that the cam­era is­n’t even sniff­ing one megapixel ter­ri­tory — its stan­dard pho­tos are just 900 pix­els ver­sus the 1,000,000 re­quired to hit 1MP — the im­age qual­ity is not par­tic­u­larly im­pres­sive, but as Dycus notes and Game Boy Camera en­thu­si­asts can at­test, it’s not about the res­o­lu­tion, it’s about the fun fac­tor.

"Despite the low resolution, it's easily possible to take recognizable pictures of stuff," Dycus says. "The 'high' color depth definitely helps. I'd liken it to the Game Boy Camera (which I also enjoy), which is much higher resolution but only has four colors."

...

Read the original on petapixel.com »

10 214 shares, 12 trendiness

Launching Wolfram Compute Services

Let’s say you’ve done a com­pu­ta­tion in Wolfram Language. And now you want to scale it up. Maybe 1000x or more. Well, to­day we’ve re­leased an ex­tremely stream­lined way to do that. Just wrap the scaled up com­pu­ta­tion in and off it’ll go to our new Wolfram Compute Services sys­tem. Then—in a minute, an hour, a day, or what­ever—it’ll let you know it’s fin­ished, and you can get its re­sults.

For decades I’ve of­ten needed to do big, crunchy cal­cu­la­tions (usually for sci­ence). With large vol­umes of data, mil­lions of cases, ram­pant com­pu­ta­tional ir­re­ducibil­ity, etc. I prob­a­bly have more com­pute ly­ing around my house than most peo­ple—these days about 200 cores worth. But many nights I’ll leave all of that com­pute run­ning, all night—and I still want much more. Well, as of to­day, there’s an easy so­lu­tion—for every­one: just seam­lessly send your com­pu­ta­tion off to Wolfram Compute Services to be done, at ba­si­cally any scale.

For nearly 20 years we’ve had built-in func­tions like and in Wolfram Language that make it im­me­di­ate to par­al­lelize sub­com­pu­ta­tions. But for this to re­ally let you scale up, you have to have the com­pute. Which now—thanks to our new Wolfram Compute Services—everyone can im­me­di­ately get.

The un­der­ly­ing tools that make Wolfram Compute Services pos­si­ble have ex­isted in the Wolfram Language for sev­eral years. But what Wolfram Compute Services now does is to pull every­thing to­gether to pro­vide an ex­tremely stream­lined all-in-one ex­pe­ri­ence. For ex­am­ple, let’s say you’re work­ing in a note­book and build­ing up a com­pu­ta­tion. And fi­nally you give the in­put that you want to scale up. Typically that in­put will have lots of de­pen­den­cies on ear­lier parts of your com­pu­ta­tion. But you don’t have to worry about any of that. Just take the in­put you want to scale up, and feed it to . Wolfram Compute Services will au­to­mat­i­cally take care of all the de­pen­den­cies, etc.

And an­other thing: , like every func­tion in Wolfram Language, is deal­ing with sym­bolic ex­pres­sions, which can rep­re­sent any­thing—from nu­mer­i­cal ta­bles to im­ages to graphs to user in­ter­faces to videos, etc. So that means that the re­sults you get can im­me­di­ately be used, say in your Wolfram Notebook, with­out any im­port­ing, etc.

OK, so what kinds of machines can you run on? Well, Wolfram Compute Services gives you a bunch of options, suitable for different computations, and different budgets. There's the most basic 1 core, 8 GB option—which you can use to just "get a computation off your own machine". You can pick a machine with larger memory—currently up to about 1500 GB. Or you can pick a machine with more cores—currently up to 192. But if you're looking for even larger scale parallelism Wolfram Compute Services can deal with that too. Because can map a function across any number of elements, running on any number of cores, across multiple machines.

OK, so here’s a very sim­ple ex­am­ple—that hap­pens to come from some sci­ence I did a lit­tle while ago. Define a func­tion that ran­domly adds nonover­lap­ping pen­tagons to a clus­ter:

For 20 pen­tagons I can run this quickly on my ma­chine:

But what about for 500 pen­tagons? Well, the com­pu­ta­tional geom­e­try gets dif­fi­cult and it would take long enough that I would­n’t want to tie up my own ma­chine do­ing it. But now there’s an­other op­tion: use Wolfram Compute Services!
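The Wolfram Language code itself isn't reproduced above, but the shape of the computation can be sketched in Python, with circles standing in for pentagons (a simplifying assumption): repeatedly propose a random placement tangent to the existing cluster and reject it if it overlaps anything already placed. The all-pairs overlap check makes each new shape more expensive than the last, which is one reason going from 20 to 500 shapes gets so much harder.

```python
import math
import random

def grow_cluster(n, radius=1.0, seed=0):
    """Grow a cluster of n nonoverlapping circles by rejection sampling.
    Circles stand in for the article's pentagons; real computational
    geometry for polygons is considerably more involved."""
    rng = random.Random(seed)
    placed = [(0.0, 0.0)]                     # seed shape at the origin
    while len(placed) < n:
        # propose a new circle tangent to a randomly chosen existing one
        cx, cy = rng.choice(placed)
        angle = rng.uniform(0, 2 * math.pi)
        x = cx + 2 * radius * math.cos(angle)
        y = cy + 2 * radius * math.sin(angle)
        # reject if it overlaps any placed circle (O(n) check per proposal)
        if all(math.hypot(x - px, y - py) >= 2 * radius - 1e-9
               for px, py in placed):
            placed.append((x, y))
    return placed

cluster = grow_cluster(20)
print(len(cluster))  # 20
```

Each accepted shape costs a scan over everything already placed, plus an unbounded number of rejected proposals, so runtime grows much faster than linearly in the cluster size.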

And all I have to do is feed my com­pu­ta­tion to :

Immediately, a job is cre­ated (with all nec­es­sary de­pen­den­cies au­to­mat­i­cally han­dled). And the job is queued for ex­e­cu­tion. And then, a cou­ple of min­utes later, I get an email:

Not know­ing how long it’s go­ing to take, I go off and do some­thing else. But a while later, I’m cu­ri­ous to check how my job is do­ing. So I click the link in the email and it takes me to a dash­board—and I can see that my job is suc­cess­fully run­ning:

I go off and do other things. Then, sud­denly, I get an email:

It fin­ished! And in the mail is a pre­view of the re­sult. To get the re­sult as an ex­pres­sion in a Wolfram Language ses­sion I just eval­u­ate a line from the email:

And this is now a com­putable ob­ject that I can work with, say com­put­ing ar­eas

One of the great strengths of Wolfram Compute Services is that it makes it easy to use large-scale par­al­lelism. You want to run your com­pu­ta­tion in par­al­lel on hun­dreds of cores? Well, just use Wolfram Compute Services!

Here's an example that came up in some recent work of mine. I'm searching for a cellular automaton rule that generates a pattern with a "lifetime" of exactly 100 steps. Here I'm testing 10,000 random rules—which takes a couple of seconds, and doesn't find anything:
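The original search runs in Wolfram Language, which isn't shown here. As a rough stand-in, the idea can be sketched with elementary (2-color, range-1) cellular automata in Python, where "lifetime" is taken to mean the number of steps before a pattern started from a single 1 cell dies out entirely; that definition and the rule space are simplifying assumptions, not the article's exact setup.

```python
import random

def lifetime(rule, max_steps=120, width=260):
    """Steps until an elementary CA started from a single 1 cell dies out
    (all cells 0), or None if it survives past max_steps."""
    table = [(rule >> i) & 1 for i in range(8)]   # Wolfram rule numbering
    cells = [0] * width
    cells[width // 2] = 1
    for step in range(1, max_steps + 1):
        cells = [table[(cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % width]]
                 for i in range(width)]
        if not any(cells):
            return step
    return None

rng = random.Random(0)
# only even rule numbers can die out (neighborhood 000 must map to 0)
candidates = [rng.randrange(0, 256, 2) for _ in range(100)]
dying = [r for r in candidates if lifetime(r) is not None]
print(len(dying), "of", len(candidates), "sampled rules die out")
```

A search for a specific lifetime is then just `lifetime(r) == 100` over a much larger sample, which is exactly the kind of embarrassingly parallel test-many-cases workload the article goes on to distribute.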

To test 100,000 rules I can use and run in par­al­lel, say across the 16 cores in my lap­top:

Still noth­ing. OK, so what about test­ing 100 mil­lion rules? Well, then it’s time for Wolfram Compute Services. The sim­plest thing to do is just to sub­mit a job re­quest­ing a ma­chine with lots of cores (here 192, the max­i­mum cur­rently of­fered):

A few min­utes later I get mail telling me the job is start­ing. After a while I check on my job and it’s still run­ning:

I go off and do other things. Then, af­ter a cou­ple of hours I get mail telling me my job is fin­ished. And there’s a pre­view in the email that shows, yes, it found some things:

And here they are—rules plucked from the hun­dred mil­lion tests we did in the com­pu­ta­tional uni­verse:

But what if we wanted to get this re­sult in less than a cou­ple of hours? Well, then we’d need even more par­al­lelism. And, ac­tu­ally, Wolfram Compute Services lets us get that too—us­ing . You can think of as a souped up ana­log of —mapping a func­tion across a list of any length, split­ting up the nec­es­sary com­pu­ta­tions across cores that can be on dif­fer­ent ma­chines, and han­dling the data and com­mu­ni­ca­tions in­volved in a scal­able way.

Because is a pure we have to re­arrange our com­pu­ta­tion a lit­tle—mak­ing it run 100,000 cases of se­lect­ing from 1000 ran­dom in­stances:

The system decided to distribute my 100,000 cases across 316 separate "child jobs", here each running on its own core. How is the job doing? I can get a dynamic visualization of what's happening:
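The work-splitting the dashboard describes is easy to make concrete: 100,000 cases over 316 child jobs means each child gets roughly the square root of the total. The even-split helper below is an assumption about how such a scheduler might divide the list, not Wolfram's actual algorithm.

```python
def split_evenly(n_items, n_workers):
    """Return per-worker chunk sizes that differ by at most one."""
    base, extra = divmod(n_items, n_workers)
    return [base + 1] * extra + [base] * (n_workers - extra)

chunks = split_evenly(100_000, 316)
print(len(chunks), max(chunks), min(chunks), sum(chunks))  # 316 317 316 100000
```

Keeping chunks within one item of each other matters for jobs like this: the slowest child determines the wall-clock time, so an uneven split wastes the cores that finish early.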

And it does­n’t take many min­utes be­fore I’m get­ting mail that the job is fin­ished:

And, yes, even though I only had to wait for 3 min­utes to get this re­sult, the to­tal amount of com­puter time used—across all the cores—is about 8 hours.

Now I can re­trieve all the re­sults, us­ing to com­bine all the sep­a­rate pieces I gen­er­ated:

And, yes, if I wanted to spend a lit­tle more, I could run a big­ger search, in­creas­ing the 100,000 to a larger num­ber; and Wolfram Compute Services would seam­lessly scale up.

Like everything around Wolfram Language, Wolfram Compute Services is fully programmable. When you submit a job, there are lots of options you can set. We already saw the option which lets you choose the type of machine to use. Currently the choices range from Basic1x8 (1 core, 8 GB) through Basic4x16 (4 cores, 16 GB) to "parallel compute" Compute192x384 (192 cores, 384 GB) and "large memory" Memory192x1536 (192 cores, 1536 GB).

Different classes of ma­chine cost dif­fer­ent num­bers of cred­its to run. And to make sure things don’t go out of con­trol, you can set the op­tions (maximum time in sec­onds) and (maximum num­ber of cred­its to use).

Then there’s no­ti­fi­ca­tion. The de­fault is to send one email when the job is start­ing, and one when it’s fin­ished. There’s an op­tion that lets you give a name to each job, so you can more eas­ily tell which job a par­tic­u­lar piece of email is about, or where the job is on the web dash­board. (If you don’t give a name to a job, it’ll be re­ferred to by the UUID it’s been as­signed.)

The op­tion lets you say what no­ti­fi­ca­tions you want, and how you want to re­ceive them. There can be no­ti­fi­ca­tions when­ever the sta­tus of a job changes, or at spe­cific time in­ter­vals, or when spe­cific num­bers of cred­its have been used. You can get no­ti­fi­ca­tions ei­ther by email, or by text mes­sage. And, yes, if you get no­ti­fied that your job is go­ing to run out of cred­its, you can al­ways go to the Wolfram Account por­tal to top up your cred­its.

There are many prop­er­ties of jobs that you can query. A cen­tral one is . But, for ex­am­ple, gives you a whole as­so­ci­a­tion of re­lated in­for­ma­tion:

If your job suc­ceeds, it’s pretty likely will be all you need. But if some­thing goes wrong, you can eas­ily drill down to study the de­tails of what hap­pened with the job, for ex­am­ple by look­ing at .

If you want to know all the jobs you’ve ini­ti­ated, you can al­ways look at the web dash­board, but you can also get sym­bolic rep­re­sen­ta­tions of the jobs from:

For any of these job ob­jects, you can ask for prop­er­ties, and you can for ex­am­ple also ap­ply to abort them.

Once a job has com­pleted, its re­sult will be stored in Wolfram Compute Services—but only for a lim­ited time (currently two weeks). Of course, once you’ve got the re­sult, it’s very easy to store it per­ma­nently, for ex­am­ple, by putting it into the Wolfram Cloud us­ing [expr]. (If you know you’re go­ing to want to store the re­sult per­ma­nently, you can also do the right in­side your .)

Talking about pro­gram­matic uses of Wolfram Compute Services, here’s an­other ex­am­ple: let’s say you want to gen­er­ate a com­pute-in­ten­sive re­port once a week. Well, then you can put to­gether sev­eral very high-level Wolfram Language func­tions to de­ploy a sched­uled task that will run in the Wolfram Cloud to ini­ti­ate jobs for Wolfram Compute Services:
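The Wolfram Cloud scheduled-task code isn't shown above, but the "run once a week" logic such a task needs can be sketched in plain Python: compute the next occurrence of a weekly slot (Monday 06:00 here is an arbitrary choice) at which a scheduler would fire off the batch submission.

```python
from datetime import datetime, timedelta

def next_weekly_run(now, weekday=0, hour=6):
    """Next datetime falling on `weekday` (Mon=0) at `hour`:00 after `now`."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    days_ahead = (weekday - now.weekday()) % 7
    candidate += timedelta(days=days_ahead)
    if candidate <= now:              # this week's slot already passed
        candidate += timedelta(days=7)
    return candidate

# From a Wednesday noon, the next Monday-06:00 slot:
print(next_weekly_run(datetime(2025, 11, 26, 12, 0)))  # 2025-12-01 06:00:00
```

In the Wolfram setup this bookkeeping is handled for you by the cloud's scheduled tasks; the sketch just shows what "once a week" resolves to underneath.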

And, yes, you can ini­ti­ate a Wolfram Compute Services job from any Wolfram Language sys­tem, whether on the desk­top or in the cloud.

Wolfram Compute Services is go­ing to be very use­ful to many peo­ple. But ac­tu­ally it’s just part of a much larger con­stel­la­tion of ca­pa­bil­i­ties aimed at broad­en­ing the ways Wolfram Language can be used.

Mathematica and the Wolfram Language started—back in 1988—as desktop systems. But even at the very beginning, there was a capability to run the notebook front end on one machine, and then have a "remote kernel" on another machine. (In those days we supported, among other things, communication via phone line!) In 2008 we introduced built-in parallel computation capabilities like and . Then in 2014 we introduced the Wolfram Cloud—both replicating the core functionality of Wolfram Notebooks on the web, and providing services such as instant APIs and scheduled tasks. Soon thereafter, we introduced the Enterprise Private Cloud—a private version of Wolfram Cloud. In 2021 we introduced Wolfram Application Server to deliver high-performance APIs (and it's what we now use, for example, for Wolfram|Alpha). Along the way, in 2019, we introduced Wolfram Engine as a streamlined server and command-line deployment of Wolfram Language. Around Wolfram Engine we built WSTPServer to serve Wolfram Engine capabilities on local networks, and we introduced WolframScript to provide a deployment-agnostic way to run command-line-style Wolfram Language code. In 2020 we then introduced the first version of , to be used with cloud services such as AWS and Azure. But unlike with Wolfram Compute Services, this required "do it yourself" provisioning and licensing with the cloud services. And, finally, now, that's what we've automated in Wolfram Compute Services.

OK, so what's next? An important direction is the forthcoming Wolfram HPCKit—for organizations with their own large-scale compute facilities to set up their own back ends to , etc. is built in a very general way that allows different "batch computation providers" to be plugged in. Wolfram Compute Services is initially set up to support just one standard batch computation provider: . HPCKit will allow organizations to configure their own compute facilities (often with our help) to serve as batch computation providers, extending the streamlined experience of Wolfram Compute Services to on-premise or organizational compute facilities, and automating what is often a rather fiddly process of job submission (which, I must say, personally reminds me a lot of the mainframe job control systems I used in the 1970s).

Wolfram Compute Services is cur­rently set up purely as a batch com­pu­ta­tion en­vi­ron­ment. But within the Wolfram System, we have the ca­pa­bil­ity to sup­port syn­chro­nous re­mote com­pu­ta­tion, and we’re plan­ning to ex­tend Wolfram Compute Services to of­fer this—al­low­ing one, for ex­am­ple, to seam­lessly run a re­mote ker­nel on a large or ex­otic re­mote ma­chine.

But this is for the future. Today we're launching the first version of Wolfram Compute Services. Which makes "supercomputer power" immediately available for any Wolfram Language computation. I think it's going to be very useful to a broad range of users of Wolfram Language. I know I'm going to be using it a lot.

...

Read the original on writings.stephenwolfram.com »
