10 interesting stories served every morning and every evening.




1 529 shares, 36 trendiness

Self-hosting my photos with Immich

For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the gphotos-sync tool stopped working in March 2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up Immich, a self-hostable photo manager.

Here is the end result: a few (live) photos from NixCon 2025:

I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini X600), which consumes less than 10 W of power at idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024:

I installed Proxmox, an Open Source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server.

I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM. For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough.

I (declaratively) installed NixOS on that VM as described in this blog post:

Afterwards, I enabled Immich with this exact configuration:
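The configuration snippet itself is not reproduced in this excerpt. A minimal sketch using the NixOS services.immich module (option names as defined by nixpkgs; the values here are assumptions, not necessarily the post’s exact settings) looks roughly like this:

{
  services.immich = {
    enable = true;
    # Listen on localhost only; remote access goes through `tailscale serve` below.
    host = "127.0.0.1";
    port = 2283;
    # Keep the firewall closed; the VM is reachable via Tailscale only.
    openFirewall = false;
  };
}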

At this point, Immich is available on localhost, but not over the network, because NixOS enables a firewall by default. I could enable the services.immich.openFirewall option, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access — instead, I use tailscale serve to forward traffic to localhost:2283:

photos# tailscale serve --bg http://localhost:2283

Because I have Tailscale’s MagicDNS and TLS certificate provisioning enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone.

At first, I tried importing my photos using the official Immich CLI:

% nix run nixpkgs#immich-cli -- login https://photos.example.ts.net secret
% nix run nixpkgs#immich-cli -- upload --recursive /home/michael/lib/photo/gphotos-takeout

Unfortunately, the upload was not running reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that the upload can fail with a timeout.

The other issue was that, even after the upload was done, I realized that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files:

Unfortunately, these files are not considered by immich-cli.

Luckily, there is a great third-party tool called immich-go, which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives.

I ran immich-go as follows and it worked beautifully:

% immich-go \
    upload \
    from-google-photos \
    --server=https://photos.example.ts.net \
    --api-key=secret \
    ~/Downloads/takeout-*.zip

My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right.

I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?!

If anyone knows, please send an explanation (or a link!) and I will update the article.

I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background. These notifications are not required for background upload to work, as an Immich developer confirmed on Reddit. Open Settings → Apps → Immich → Notifications and un-tick the permission checkbox:

Immich’s documentation on backups contains some good recommendations. The Immich developers recommend backing up the entire contents of UPLOAD_LOCATION, which is /var/lib/immich on NixOS. The backups subdirectory contains SQL dumps, whereas the three directories upload, library and profile contain all user-uploaded data.

Hence, I have set up a systemd timer that runs rsync to copy /var/lib/immich onto my PC, which is enrolled in a 3-2-1 backup scheme.
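The timer definition is not shown in the excerpt. A minimal sketch of such a declarative setup (assuming the VM pushes to the PC over SSH; the hostname and destination path are placeholders) could look like this:

{ pkgs, ... }:
{
  systemd.services.immich-backup = {
    description = "rsync /var/lib/immich to my PC";
    serviceConfig.Type = "oneshot";
    # pc.example.ts.net and the destination path are placeholders.
    script = ''
      ${pkgs.rsync}/bin/rsync -a --delete /var/lib/immich/ pc.example.ts.net:immich-backup/
    '';
  };
  systemd.timers.immich-backup = {
    # Run once a day; the schedule is a choice, not taken from the post.
    wantedBy = [ "timers.target" ];
    timerConfig.OnCalendar = "daily";
  };
}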

Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP.

To share images, I still upload them to Google Photos (depending on who I share them with).

The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente.

I got the impression that Immich is more popular in my bubble, and Ente struck me as having a far larger scope than what I am looking for:

Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy).

I don’t need an end-to-end encrypted platform. I already have encryption on the transit layer (Tailscale) and disk layer (LUKS), no need for more complexity.

Immich is a delightful app! It’s very fast and generally seems to work well.

The initial import is smooth, but only if you use the right tool. Ideally, the official immich-cli could be improved. Or maybe immich-go could be made the official one.

I think the auto backup is too hard to configure on an iPhone, so that could also be improved.

But aside from these initial stumbling blocks, I have no complaints.

...

Read the original on michael.stapelberg.ch »

2 506 shares, 20 trendiness

the frontier of vision AI


Gemini 3 Pro represents a generational leap from simple recognition to true visual and spatial reasoning. It is our most capable multimodal model ever, delivering state-of-the-art performance across document, spatial, screen and video understanding. This model sets new highs on vision benchmarks such as MMMU Pro and Video MMMU for complex visual reasoning, as well as use-case-specific benchmarks across document, spatial, screen and long video understanding.

Real-world documents are messy, unstructured, and difficult to parse — often filled with interleaved images, illegible handwritten text, nested tables, complex mathematical notation and non-linear layouts. Gemini 3 Pro represents a major leap forward in this domain, excelling across the entire document processing pipeline — from highly accurate Optical Character Recognition (OCR) to complex visual reasoning.

To truly understand a document, a model must accurately detect and recognize text, tables, math formulas, figures and charts regardless of noise or format.

A fundamental capability is “derendering” — the ability to reverse-engineer a visual document back into structured code (HTML, LaTeX, Markdown) that would recreate it. As illustrated below, Gemini 3 demonstrates accurate perception across diverse modalities, including converting an 18th-century merchant log into a complex table, or transforming a raw image with mathematical annotation into precise LaTeX code.

Example 2: Reconstructing equations from an image

Example 3: Reconstructing Florence Nightingale’s original Polar Area Diagram into an interactive chart (with a toggle!)

Users can rely on Gemini 3 to perform complex, multi-step reasoning across tables and charts — even in long reports. In fact, the model notably outperforms the human baseline on the CharXiv Reasoning benchmark (80.5%).

To illustrate this, imagine a user analyzing the 62-page U.S. Census Bureau “Income in the United States: 2022” report with the following prompt: “Compare the 2021–2022 percent change in the Gini index for ‘Money Income’ versus ‘Post-Tax Income’, and what caused the divergence in the post-tax measure, and in terms of ‘Money Income’, does it show the lowest quintile’s share rising or falling?”

Swipe through the images below to see the model’s step-by-step reasoning.

Visual Extraction: To answer the Gini index comparison question, Gemini located and cross-referenced this info in Figure 3, where “Money Income” decreased by 1.2 percent, and in Table B-3, where “Post-Tax Income” increased by 3.2 percent.

Causal Logic: Crucially, Gemini 3 does not stop at the numbers; it correlates this gap with the text’s policy analysis, correctly identifying the lapse of ARPA policies and the end of stimulus payments as the main causes.

Numerical Comparison: To determine whether the lowest quintile’s share was rising or falling, Gemini 3 looked at Table A-3, compared the numbers 2.9 and 3.0, and concluded that “the share of aggregate household income held by the lowest quintile was rising.”

Gemini 3 Pro is our strongest spatial understanding model so far. Combined with its strong reasoning, this enables the model to make sense of the physical world.

Pointing capability: Gemini 3 has the ability to point at specific locations in images by outputting pixel-precise coordinates. Sequences of 2D points can be strung together to perform complex tasks, such as estimating human poses or reflecting trajectories over time.

Open vocabulary references: Gemini 3 identifies objects and their intent using an open vocabulary. The most direct application is robotics: the user can ask a robot to generate spatially grounded plans like, “Given this messy table, come up with a plan on how to sort the trash.” This also extends to AR/XR devices, where the user can request an AI assistant to “Point to the screw according to the user manual.”

Gemini 3.0 Pro’s spatial understanding really shines in its understanding of desktop and mobile OS screens. This reliability helps make computer use agents robust enough to automate repetitive tasks. UI understanding capabilities can also enable tasks like QA testing, user onboarding and UX analytics. The following computer use demo shows the model perceiving and clicking with high precision.

Gemini 3 Pro takes a massive leap forward in how AI understands video, the most complex data format we interact with. It is dense, dynamic, multimodal and rich with context.

1. High frame rate understanding: We have optimized the model to be much stronger at understanding fast-paced actions when sampling at more than 1 frame per second. Gemini 3 Pro can capture rapid details — vital for tasks like analyzing golf swing mechanics.

By processing video at 10 FPS — 10x the default speed — Gemini 3 Pro catches every swing and shift in weight, unlocking deep insights into player mechanics.

2. Video reasoning with “thinking” mode: We upgraded “thinking” mode to go beyond object recognition toward true video reasoning. The model can now better trace complex cause-and-effect relationships over time. Instead of just identifying what is happening, it understands why it is happening.

3. Turning long videos into action: Gemini 3 Pro bridges the gap between video and code. It can extract knowledge from long-form content and immediately translate it into functioning apps or structured code.

Here are a few ways we think various fields will benefit from Gemini 3’s capabilities.

Gemini 3.0 Pro’s enhanced vision capabilities drive significant gains in the education field, particularly for diagram-heavy questions central to math and science. It successfully tackles the full spectrum of multimodal reasoning problems found from middle school through post-secondary curriculums. This includes visual reasoning puzzles (like Math Kangaroo) and complex chemistry and physics diagrams.

Gemini 3’s visual intelligence also powers the generative capabilities of Nano Banana Pro. By combining advanced reasoning with precise generation, the model, for example, can help users identify exactly where they went wrong in a homework problem.

Prompt: “Here is a photo of my homework attempt. Please check my steps and tell me where I went wrong. Instead of explaining in text, show me visually on my image.” (Note: Student work is shown in blue; model corrections are shown in red). [See prompt in Google AI Studio]

Gemini 3 Pro stands as our most capable general model for medical and biomedical imagery understanding, achieving state-of-the-art performance across major public benchmarks in MedXpertQA-MM (a difficult expert-level medical reasoning exam), VQA-RAD (radiology imagery Q&A) and MicroVQA (multimodal reasoning benchmarks for microscopy-based biological research).

Gemini 3 Pro’s enhanced document understanding helps professionals in finance and law tackle highly complex workflows. Finance platforms can seamlessly analyze dense reports filled with charts and tables, while legal platforms benefit from the model’s sophisticated document reasoning.

Gemini 3 Pro improves the way it processes visual inputs by preserving the native aspect ratio of images. This drives significant quality improvements across the board.

Additionally, developers gain granular control over performance and cost via the new media_resolution parameter. This allows you to tune visual token usage to balance fidelity against consumption:

High resolution: Maximizes fidelity for tasks requiring fine detail, such as dense OCR or complex document understanding.

Low resolution: Optimizes for cost and latency on simpler tasks, such as general scene recognition or long-context tasks.

For specific recommendations, refer to our Gemini 3.0 Documentation Guide. We are excited to see what you build with these new capabilities. To get started, check out our developer documentation or play with the model in Google AI Studio today.

...

Read the original on blog.google »

3 351 shares, 19 trendiness

Fedi.Tips 🎄 (@FediTips@social.growyourown.services)


...

Read the original on social.growyourown.services »

4 351 shares, 14 trendiness

YouTube secretly tests AI video retouching without creators’ consent

The controversy highlights a wider trend in which more of what people see online is pre-processed by AI before reaching them. Smartphone makers like Samsung and Google have long used AI to “enhance” images. Samsung previously admitted to using AI to sharpen moon photos, while Google’s Pixel “Best Take” feature stitches together facial expressions from multiple shots to create a single “perfect” group picture.

...

Read the original on www.ynetnews.com »

5 301 shares, 12 trendiness

Patterns for Defensive Programming in Rust

Whenever I see the comment // this should never happen in code, I try to find out the exact conditions under which it could happen. And in 90% of cases, I find a way to do just that. More often than not, the developer just hasn’t considered all edge cases or future code changes.

In fact, the reason why I like this comment so much is that it often marks the exact spot where strong guarantees fall apart. Often, violations of implicit invariants that aren’t enforced by the compiler are the root cause.

Yes, the compiler prevents memory safety issues, and the standard library is best-in-class. But even the standard library has its warts, and bugs in business logic can still happen.

All we can work with are hard-learned patterns for writing more defensive Rust code, learned throughout years of shipping Rust to production. I’m not talking about design patterns here, but rather small idioms, which are rarely documented, but make a big difference in the overall code quality.

if !matching_users.is_empty() {
    let existing_user = &matching_users[0];
    // ...
}

What if you refactor it and forget to keep the is_empty() check? The problem is that the vector indexing is decoupled from checking the length. So matching_users[0] can panic at runtime if the vector is empty.

Checking the length and indexing are two separate operations, which can be changed independently. That’s our first implicit invariant that’s not enforced by the compiler.

If we use slice pattern matching instead, we’ll only get access to the element if the correct match arm is executed.

match matching_users.as_slice() {
    [] => todo!("What to do if no users found!?"),
    [existing_user] => {
        // Safe! The compiler guarantees exactly one element.
        // No need to index into the vector;
        // we can directly use `existing_user` here.
        Ok(existing_user)
    }
    _ => Err(RepositoryError::DuplicateUsers),
}

Note how this automatically uncovered one more edge case: what if the list is empty? We hadn’t explicitly considered this case before. The compiler-enforced pattern matching requires us to think about all possible states! This is a common pattern in all robust Rust code: putting the compiler in charge of enforcing invariants.

When initializing an object with many fields, it’s tempting to use ..Default::default() to fill in the rest. In practice, this is a common source of bugs. You might forget to explicitly set a new field later when you add it to the struct (thus using the default value instead, which might not be what you want), or you might not be aware of all the fields that are being set to default values.

Instead of this:

let foo = Foo {
    field1: value1,
    field2: value2,
    ..Default::default() // Implicitly sets all other fields
};

do this:

let foo = Foo {
    field1: value1,
    field2: value2,
    field3: value3, // Explicitly set all fields
    field4: value4,
};

Yes, it’s slightly more verbose, but what you gain is that the compiler will force you to handle all fields explicitly. Now when you add a new field to Foo, the compiler will remind you to set it here as well and reflect on which value makes sense.

If you still prefer to use Default but don’t want to lose compiler checks, you can also destructure the default instance:

let Foo { field1, field2, field3, field4 } = Foo::default();

This way, you get all the default values assigned to local variables and you can still override what you need:

let foo = Foo {
    field1: value1, // Override what you need
    field2: value2, // Override what you need
    field3,         // Use default value
    field4,         // Use default value
};

This pattern gives you the best of both worlds:

You get default values without duplicating default logic

The compiler will complain when new fields are added to the struct

It’s clear which fields use defaults and which have custom values

Completely destructuring a struct into its components can also be a defensive strategy for API adherence. For example, let’s say you’re building a pizza ordering system and have an order type like this:

struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    ordered_at: std::time::SystemTime,
}

For your order tracking system, you want to compare orders based on what’s actually on the pizza: the size, toppings, and crust_type. The ordered_at timestamp shouldn’t affect whether two orders are considered the same.

Here’s the problem with the obvious approach:

impl PartialEq for PizzaOrder {
    fn eq(&self, other: &Self) -> bool {
        self.size == other.size
            && self.toppings == other.toppings
            && self.crust_type == other.crust_type
        // Oops! What happens when we add extra_cheese or delivery_address later?
    }
}

Now imagine your team adds a field for customization options:

struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    extra_cheese: bool, // new customization option
    ordered_at: std::time::SystemTime,
}

Your PartialEq implementation still compiles, but is it correct? Should extra_cheese be part of the equality check? Probably yes - a pizza with extra cheese is a different order! But you’ll never know because the compiler won’t remind you to think about it.

impl PartialEq for PizzaOrder {
    fn eq(&self, other: &Self) -> bool {
        let Self {
            size,
            toppings,
            crust_type,
            ordered_at: _,
        } = self;

        let Self {
            size: other_size,
            toppings: other_toppings,
            crust_type: other_crust,
            ordered_at: _,
        } = other;

        size == other_size && toppings == other_toppings && crust_type == other_crust
    }
}

Now when someone adds the extra_cheese field, this code won’t compile anymore. The compiler forces you to decide: should extra_cheese be included in the comparison or explicitly ignored with extra_cheese: _?

This pattern works for any trait implementation where you need to handle struct fields: Hash, Debug, Clone, etc. It’s especially valuable in codebases where structs evolve frequently as requirements change.
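For instance, the same destructuring trick could be applied to a Hash implementation that stays consistent with the equality above (a sketch, assuming PizzaSize, Topping and CrustType implement Hash):

impl std::hash::Hash for PizzaOrder {
    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
        // Destructure so that adding a field to PizzaOrder breaks this impl
        // until we decide whether the new field participates in hashing.
        let Self {
            size,
            toppings,
            crust_type,
            ordered_at: _, // ignored, just like in PartialEq
        } = self;
        size.hash(state);
        toppings.hash(state);
        crust_type.hash(state);
    }
}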

Code Smell: From Impls That Are Really TryFrom

Sometimes there’s no conversion that will work 100% of the time. That’s fine. When that’s the case, resist the temptation to offer a From implementation out of habit; use TryFrom instead.

Here’s an example of TryFrom in disguise:

impl From<&DetectorStartupErrorReport> for DetectorStartupErrorSubject {
    fn from(report: &DetectorStartupErrorReport) -> Self {
        let postfix = report
            .get_identifier()
            .or_else(get_binary_name)
            .unwrap_or_else(|| UNKNOWN_DETECTOR_SUBJECT.to_string());

        Self(StreamSubject::from(
            format!("apps.errors.detectors.startup.{postfix}").as_str(),
        ))
    }
}

The unwrap_or_else is a hint that this conversion can fail in some way. We set a default value instead, but is it really the right thing to do for all callers? This should be a TryFrom implementation instead, making the fallible nature explicit. We fail fast instead of continuing with potentially flawed business logic.
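A sketch of what the fallible version could look like (MissingDetectorIdentifier is a made-up error type for illustration; the real code might report the failure differently):

impl TryFrom<&DetectorStartupErrorReport> for DetectorStartupErrorSubject {
    type Error = MissingDetectorIdentifier;

    fn try_from(report: &DetectorStartupErrorReport) -> Result<Self, Self::Error> {
        // No silent fallback: if the identifier cannot be determined,
        // the caller has to decide what to do about it.
        let postfix = report
            .get_identifier()
            .or_else(get_binary_name)
            .ok_or(MissingDetectorIdentifier)?;

        Ok(Self(StreamSubject::from(
            format!("apps.errors.detectors.startup.{postfix}").as_str(),
        )))
    }
}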

It’s tempting to use match in combination with a catch-all pattern like _ => {}, but this can haunt you later. The problem is that you might forget to handle a new case that was added later.

match self {
    Self::Variant1 => { /* … */ }
    Self::Variant2 => { /* … */ }
    _ => { /* catch-all */ }
}

match self {
    Self::Variant1 => { /* … */ }
    Self::Variant2 => { /* … */ }
    Self::Variant3 => { /* … */ }
    Self::Variant4 => { /* … */ }
}

By spelling out all variants explicitly, the compiler will warn you when a new variant is added, forcing you to handle it. Another case of putting the compiler to work.

If the code for two variants is the same, you can group them:

match self {

Self::Variant1 => { /* … */ }

...

Read the original on corrode.dev »

6 299 shares, 16 trendiness

Brendan Gregg's Blog


I’ve re­signed from Intel and ac­cepted a new op­por­tu­nity. If you are an Intel em­ployee, you might have seen my fairly long email that sum­ma­rized what I did in my 3.5 years. Much of this is pub­lic:

It’s still early days for AI flame graphs. Right now when I browse CPU per­for­mance case stud­ies on the Internet, I’ll of­ten see a CPU flame graph as part of the analy­sis. We’re a long way from that kind of adop­tion for GPUs (and it does­n’t help that our open source ver­sion is Intel only), but I think as GPU code be­comes more com­plex, with more lay­ers, the need for AI flame graphs will keep in­creas­ing.

I also sup­ported cloud com­put­ing, par­tic­i­pat­ing in 110 cus­tomer meet­ings, and cre­ated a com­pany-wide strat­egy to win back the cloud with 33 spe­cific rec­om­men­da­tions, in col­lab­o­ra­tion with oth­ers across 6 or­ga­ni­za­tions. It is some of my best work and fea­tures a vi­sual map of in­ter­ac­tions be­tween all 19 rel­e­vant teams, de­scribed by Intel long-timers as the first time they have ever seen such a cross-com­pany map. (This strat­egy, sum­ma­rized in a slide deck, is in­ter­nal only.)

I al­ways wish I did more, in any job, but I’m glad to have con­tributed this much es­pe­cially given the con­text: I over­lapped with Intel’s tough­est 3 years in his­tory, and I had a hir­ing freeze for my first 15 months.

My fond memories from Intel include meeting Linus at an Intel event who said “everyone is using fleme graphs these days” (Finnish accent), meeting Pat Gelsinger who knew about my work and introduced me to everyone at an exec all hands, surfing lessons at an Intel Australia and HP offsite (mp4), and meeting Harshad Sane (Intel cloud support engineer) who helped me when I was at Netflix and now has joined Netflix himself — we’ve swapped ends of the meeting table. I also enjoyed meeting Intel’s hardware fellows and senior fellows who were happy to help me understand processor internals. (Unrelated to Intel, but if you’re a Who fan like me, I recently met some other people as well!)

My next few years at Intel would have fo­cused on ex­e­cu­tion of those 33 rec­om­men­da­tions, which Intel can con­tinue to do in my ab­sence. Most of my rec­om­men­da­tions aren’t easy, how­ever, and re­quire ac­cept­ing change, ELT/CEO ap­proval, and mul­ti­ple quar­ters of in­vest­ment. I won’t be there to push them, but other em­ploy­ees can (my CloudTeams strat­egy is in the in­box of var­i­ous ELT, and in a shared folder with all my pre­sen­ta­tions, code, and weekly sta­tus re­ports). This work will hope­fully live on and keep mak­ing Intel stronger. Good luck.

...

Read the original on www.brendangregg.com »

7 294 shares, 11 trendiness

Sam Altman’s Dirty DRAM Deal

Use tab to nav­i­gate through the menu items. Or: How the AI Bubble, Panic, and Unpreparedness Stole ChristmasWritten by Tom of Moore’s Law Is DeadAt the be­gin­ning of November, I or­dered a 32GB DDR5 kit for pair­ing with a Minisforum BD790i X3D moth­er­board, and three weeks later those very same sticks of DDR5 are now listed for a stag­ger­ing $330– a 156% in­crease in price from less than a month ago! At this rate, it seems likely that by Christmas, that DDR5 kit alone could be worth more than the en­tire Zen 4 X3D plat­form I planned to pair it with! How could this hap­pen, and more specif­i­cally — how could this hap­pen THIS quickly? Well, buckle up! I am about to tell you the story of Sam Altman’s Dirty DRAM Deal, or: How the AI bub­ble, panic, and un­pre­pared­ness stole Christmas…But be­fore I dive in, let me make it clear that my RAM kit’s 156% jump in price is­n’t a fluke or some ex­treme ex­am­ple of what’s go­ing on right now. Nope, and in fact, I’d like to pro­vide two more ex­am­ples of how how im­pos­si­ble it is be­com­ing to get ahold of RAM - these were pro­vided by a cou­ple of our sources within the in­dus­try:One source that works at a US Retailer, stated that a RAM Manufacturer called them in or­der to in­quire if they might buy RAM from  to stock up for their other cus­tomers. This would be like Corsair ask­ing a Best Buy if they had any RAM around.An­other source that works at a Prebuilt PC com­pany, was re­cently given an es­ti­mate for when they would re­ceive RAM or­ders if they placed them now…and they were told December…of 2026So what hap­pened?  Well, it all comes down to three per­fectly syn­er­gis­tic events:two un­prece­dented RAM deals that took every­one by sur­prise.The se­crecy and size of the deals trig­gered full-scale panic buy­ing from every­one else.The mar­ket had al­most zero safety stock left due to tar­iffs, worry about RAM prices over the sum­mer, and stalled equip­ment trans­fers.Be­low, we’re go­ing to walk through each of these fac­tors — and then I’m go­ing to warn you about which hard­ware cat­e­gories will be hit the hard­est, which prod­ucts are al­ready be­ing can­celled, and what you should buy  before the shelves turn into a re­peat of 2021–2022…because this is doomed to turn into much more than just RAM scarcity…deals with Samsung and SK Hynix for 40% of the worlds DRAM sup­ply.  Now, did OpenAI’s com­pe­ti­tion sus­pect some big RAM deals could be signed in late 2025? Yes. Ok, but did they think it would be deals this huge and with mul­ti­ple com­pa­nies? NO!  In fact, if you go back and read re­port­ing on Sam Altman’s now in­fa­mous trip to South Korea on October 1st, even just mere hours be­fore the mas­sive deals with Samsung and SK Hynix were  — most re­port­ing sim­ply men­tioned vague re­ports about Sam talk­ing to Samsung, SK Hynix, TSMC, and Foxconn. But the re­port­ing at the time was soft, al­most dis­mis­sive — exploring ties,” seeking co­op­er­a­tion,” probing for part­ner­ships.” Nobody hinted that OpenAI was about to swal­low up to 40% of global DRAM out­put — even on morn­ing be­fore it hap­pened! 
Nobody saw this com­ing - this is clear in the lack of re­port­ing about the deals be­fore they were an­nounced, and every MLID Source who works in DRAM man­u­fac­tur­ing and dis­tri­b­u­tion in­sist this took every­one in the in­dus­try by sur­prise.To be clear - the shock was­n’t that OpenAI made a big deal, no, it was that they made two mas­sive deals this big, at the same time, with Samsung and SK Hynix si­mul­ta­ne­ously! In fact, ac­cord­ing to our sources - both com­pa­nies had no idea how big each oth­er’s deal was, nor how close to si­mul­ta­ne­ous they were. And this se­crecy mat­tered. It mat­tered a lot.Had Samsung known SK Hynix was about to com­mit a sim­i­lar chunk of sup­ply — or vice-versa — the pric­ing and terms would have likely been dif­fer­ent. It’s en­tirely con­ceiv­able they would­n’t have both agreed to sup­ply such a sub­stan­tial part of global sup­ply if they had known more…but at the end of the day - OpenAI did suc­ceed in keep­ing the cir­cles tight, lock­ing down the NDAs, and lever­ag­ing the fact that these com­pa­nies as­sumed the other was­n’t giv­ing up this much wafer vol­ume si­mul­ta­ne­ously…in or­der to make a sur­gi­cal strike on the global RAM sup­ply chain…and it’s worked so far…Part II — Instant Panic: How did we miss this?Imag­ine you’re run­ning a hy­per scaler, or maybe you’re a ma­jor OEM, or per­haps pre­tend that you are sim­ply one of OpenAI’s chief com­peti­tors: On October 1st of 2025, you would have woken up to the news that OpenAI had just cor­nered the mem­ory mar­ket more ag­gres­sively than any com­pany in the last decade, and you had­n’t heard even a mur­mur that this was com­ing be­fore­hand! Well, you would prob­a­bly make some fol­low-up calls to col­leagues in the in­dus­try, and then also quickly hear ru­mors that it was­n’t just you - also the two largest sup­pli­ers did­n’t even see each oth­er’s si­mul­ta­ne­ous co­op­er­a­tion with OpenAI com­ing ! You would­n’t go: Well, that’s an in­ter­est­ing co­in­ci­dence”, no, you would say: WHAT ELSE IS GOING ON THAT WE DON’T KNOW ABOUT?”Again — it’s not the size of the deals that’s solely the is­sue here, no, it’s also the of them. On October 1st sil­i­con val­ley ex­ec­u­tives and pro­cure­ment man­agers pan­icked over con­cerns like these:What other deals don’t we know about? Is this just the first of many?None of our DRAM sup­pli­ers warned us ahead of time! We have to as­sume they also won’t in the fu­ture, and that it’s pos­si­ble  of global DRAM could be bought up with­out us get­ting a sin­gle warn­ing!We know OpenAI’s com­peti­tors are al­ready panic-buy­ing!  If we don’t move we might be locked out of the mar­ket un­til 2028!OpenAI’s com­peti­tors, OEMs, and cloud providers scram­bled to se­cure what­ever in­ven­tory re­mained out of self-de­fense, and self-de­fense in a world that was en­tirely  due to the ac­cel­er­ant I’ll now ex­plain in Part III…Normally, the DRAM mar­ket has buffers: ware­houses of emer­gency stock, ex­cess wafer starts, older DRAM man­u­fac­tur­ing ma­chin­ery be­ing sold off to bud­get brands while the big brands up­grade their pro­duc­tion lines…but not in 2025, in 2025 those would-be buffers were de­pleted for three sep­a­rate rea­sons:Tar­iff Chaos. Companies had de­lib­er­ately re­duced how much DRAM they or­dered for their safety stock over the sum­mer of 2025 be­cause tar­iffs were chang­ing al­most weekly. Every RAM pur­chase risked be­ing made at the wrong mo­ment — and so fewer pur­chases were made.Prices had been falling all sum­mer. 
Because of the hes­i­tancy to pur­chase as much safety stock as usual, RAM prices were also gen­uinely falling over time.  And, ob­vi­ously when mem­ory is get­ting cheaper month over month, the thing you’d feel is pres­sured to buy a com­mod­ity that could be cheaper the next month…so every­one waited.Sec­ondary RAM Manufacturing Had Stalled. Budget brands nor­mally buy older DRAM fab­ri­ca­tion equip­ment from mega-pro­duc­ers like Samsung when Samsung up­grades their DRAM lines to the lat­est and great­est equip­ment.  This al­lows the DRAM mar­ket to more than it would oth­er­wise be­cause it makes any up­grad­ing of the fan­ci­est pro­duc­tion lines to still be change to the mar­ket. However, Korean mem­ory firms have been ter­ri­fied that re­selling old equip­ment to China-adjacent OEMs might trig­ger U.S. re­tal­i­a­tion…and so those ma­chines have been sit­ting idle in ware­houses since early spring.Yep, there was no cush­ion. OpenAI hit the mar­ket at the ex­act mo­ment it was least pre­pared. And now time for the biggest twist of all, a twist that’s ac­tu­ally , and there­fore should be get­ting dis­cussed by far more peo­ple in this writer’s opin­ion: OpenAI is­n’t even both­er­ing to buy fin­ished mem­ory mod­ules! No, their deals are un­prece­dent­edly only for raw wafers — un­cut, un­fin­ished, and not even al­lo­cated to a spe­cific DRAM stan­dard yet. It’s not even clear if they have de­cided yet on how or when they will fin­ish them into RAM sticks or HBM!  Right now it seems like these wafers will just be stock­piled in ware­houses — like a kid who hides the toy­box be­cause they’re afraid no­body wants to play with them, and thus self­ishly feels no­body but them should get the toys!And let’s just say it: Here is the un­com­fort­able truth Sam Altman is al­ways loath to ad­mit in in­ter­views: OpenAI is wor­ried about los­ing its lead. The last 18 months have seen com­peti­tors catch­ing up fast — Anthropic, Meta, xAI, and specif­i­cally Google’s Gemini 3 has got­ten a ton of praise just in the past week. Everyone’s chas­ing train­ing ca­pac­ity. Everyone needs mem­ory. DRAM is the lifeblood of scal­ing in­fer­ence and train­ing through­put. Cutting sup­ply to your ri­vals is not a con­spir­acy the­ory. It’s a busi­ness tac­tic as old as busi­ness it­self.  And so, when you con­sider how se­cre­tive OpenAI was about their deals with Samsung and SK Hynix, but ad­di­tion­ally how un­ready they were to im­me­di­ately uti­lize their ware­houses of DRAM wafers — it sure seems like a pri­mary goal of these deals was to , and not just an at­tempt to pro­tect OpenAI’s own sup­ply…Part V — What will be can­celled? What should you buy now?Al­right, now that we are done ex­plain­ing the , let’s get to the  – be­cause even if the RAM short­age mirac­u­lously im­proves im­me­di­ately be­hind the scenes — even if the AI Bubble in­stantly popped or 10 com­pa­nies started tool­ing up for more DRAM ca­pac­ity this sec­ond (and many are, to be fair), at a min­i­mum the next six to nine months are al­ready screwed  See above: DRAM man­u­fac­tures are quot­ing 13-Month lead times for DDR5!  This is not a tem­po­rary blip. This could be a once-in-a-gen­er­a­tion shock. So what gets hit first? What gets hit hard­est? Well, be­low is an E through S-Tier rank­ing of which prod­ucts are the most screwed”:S-Tier (Already Screwed — Too Late to Buy) -RAM it­self, ob­vi­ously. RAM prices have exploded”. The det­o­na­tion is in the past.SSDs. 
These tends to fol­low DRAM pric­ing with a lag.RADEON GPUs. AMD does­n’t bun­dle RAM in their BOM kits to AIBs the way Nvidia does. In fact, the RX 9070 GRE 16GB this chan­nel leaked months ago is al­most cer­tainly can­celled ac­cord­ing to our sourcesXBOX. Microsoft did­n’t plan. Prices may rise and/​or sup­ply may dwin­dle in 2026.Nvidia GPUs. Nvidia main­tains large mem­ory in­ven­to­ries for its board part­ners, giv­ing them a buffer. But high-ca­pac­ity GPUs (like a hy­po­thet­i­cal 24GB 5080 SUPER) are on ice for now be­cause  stores were never suf­fi­ciently built up. In fact, Nvidia is qui­etly telling part­ners that their SUPER re­fresh might” launch Q3 2026 — al­though most part­ners think it’s just a place­holder for when Nvidia ex­pects new ca­pac­ity to come on­line, and thus SUPER may never launch.C-Tier (Think about buy­ing soon)Lap­tops and phones. These com­pa­nies ne­go­ti­ate im­mense long-term con­tracts, so they’re not hit im­me­di­ately. But once their stock­piles run dry, watch out!D-Tier (Consider buy­ing soon, but there’s no rush)PlaySta­tion. Sony planned bet­ter than al­most any­one else. They bought ag­gres­sively dur­ing the sum­mer price trough, which is why they can af­ford a Black Friday dis­count while every­one else is rais­ing prices.Any­thing with­out RAM. Specifically CPUs that do not come with cool­ers could see price  over time since there could be a  in de­mand for CPUs if no­body has the RAM to feed them in sys­tems.???-Tier —Steam Machine. Valve keeps things quiet, but the big un­known is whether they pre-bought RAM months ago be­fore an­nounc­ing their much-hyped Steam Machine. If they did al­ready stock­pile an am­ple sup­ply of DDR5 - then Steam Machine should launch fine, but sup­ply could dry up tem­porar­ily at some point while they wait for prices to drop. However, if they did­n’t plan ahead - ex­pect a high launch price and very lit­tle re­sup­ply…it might even need to be can­celled or there might need to be a vari­ant of­fered with­out RAM in­cluded (BYO RAM Edition!).And that’s it! This last bit was the most im­por­tant part of the ar­ti­cle in this writer’s opin­ion — an at­tempt at help­ing you avoid get­ting burned. Well, ac­tu­ally, there is one other im­por­tant rea­son for this ar­ti­cle’s ex­is­tence I’ll tack onto the end — a hope that other peo­ple start dig­ging into what’s go­ing on at OpenAI.  I mean se­ri­ously — do we even have a sin­gle re­li­able au­dit of their fi­nan­cials to back up them out­ra­geously spend­ing this much money…  Heck, I’ve even heard from nu­mer­ous sources that OpenAI is buying up the man­u­fac­tur­ing equip­ment as well” — and with­out moun­tains of con­crete proof, and/​or more in­put from ad­di­tional sources on what that re­ally means…I don’t feel I can touch that hot potato with­out get­ting burned…but I hope some­one else will…

...

Read the original on www.mooreslawisdead.com »

8 250 shares, 17 trendiness

License Plate Privacy Check

...

Read the original on haveibeenflocked.com »

9 245 shares, 8 trendiness

A $20 over-the-counter drug in Europe requires a prescription and $800 in the U.S.

A mon­th’s sup­ply of Miebo, Bausch & Lomb’s pre­scrip­tion dry eye drug, costs $800 or more in the U. S. be­fore in­sur­ance. But the same drug — sold as EvoTears — has been avail­able over-the-counter (OTC) in Europe since 2015 for about $20. I or­dered it on­line from an over­seas phar­macy for $32 in­clud­ing ship­ping, and it was de­liv­ered in a week.

This is, of course, both shock­ing and un­sur­pris­ing. A 2021 RAND study found U. S. pre­scrip­tion drug prices are, on av­er­age, more than 2.5 times higher than in 32 other de­vel­oped na­tions. Miebo ex­em­pli­fies how some phar­ma­ceu­ti­cal com­pa­nies ex­ploit reg­u­la­tory loop­holes and patent pro­tec­tions, pri­or­i­tiz­ing prof­its over pa­tients, erod­ing trust in health care. But there is a way to fix this loop­hole.

In December 2019, Bausch & Lomb, for­merly a di­vi­sion of Valeant, ac­quired the ex­clu­sive li­cense for the com­mer­cial­iza­tion and de­vel­op­ment in the United States and Canada for NOV03, now called Miebo in the U. S. Rather than get­ting an ap­proval for an OTC drug, like it is in Europe, Bausch se­cured U.S. Food and Drug Administration ap­proval as a pre­scrip­tion med­ica­tion, sub­se­quently pric­ing it at a high level. Currently, ac­cord­ing to GoodRx, a monthly sup­ply of Miebo will cost $830.27 at Walgreens, and it’s listed at $818.38 on Amazon Pharmacy.

The strat­egy has paid off: Miebo’s 2024 sales — its first full year — hit $172 mil­lion, sur­pass­ing the com­pa­ny’s pro­jec­tions of $95 mil­lion. The com­pany now fore­casts sales to ex­ceed $500 mil­lion an­nu­ally. At European prices, those sales would be less than $20 mil­lion. Emboldened with Miebo’s early suc­cess, Bausch & Lomb raised the price an­other 4% in 2025, ac­cord­ing to the drug price track­ing firm 46brooklyn.

Bausch & Lomb has a track record of pri­or­i­tiz­ing prof­its over pa­tients. As Valeant, its busi­ness model was sim­ple: buy, gut, gouge, re­peat. In 2015, it raised prices for Nitropress and Isuprel by over 200% and 500%, re­spec­tively, trig­ger­ing a 2016 con­gres­sional hear­ing. Despite promises of re­form, lit­tle has changed. When he was at Allergan, Bausch & Lomb’s cur­rent CEO, Brent Saunders, pledged responsible pric­ing” but tried to ex­tend patent pro­tec­tion for Allergan’s drug Restasis (another dry eye drug) through a du­bi­ous deal with the Mohawk Indian tribe, later re­jected by courts.

Now at Bausch & Lomb, Saunders over­saw Miebo’s launch, claim­ing ear­lier this year in an in­vestor call, We are once again an in­no­va­tion com­pany.” But find­ing a way to get an ex­ist­ing European OTC drug to be a pre­scrip­tion drug in the U. S. with a new name and a 40-fold price in­crease is not true in­no­va­tion — it’s a price-goug­ing strat­egy.

Bausch & Lomb could have pur­sued OTC ap­proval in the U. S., lever­ag­ing its ex­per­tise in OTC eye drops and lo­tions. However, I could not find in tran­scripts or pre­sen­ta­tions any ev­i­dence that Baush & Lomb se­ri­ously pur­sued this. Prescription sta­tus, how­ever, en­sures much higher prices, pro­tected by patents and lim­ited com­pe­ti­tion. Even in­sured pa­tients feel the rip­ple ef­fects: Coupons may re­duce out-of-pocket costs, but in­sur­ers pay hun­dreds per pre­scrip­tion, dri­ving up pre­mi­ums and the over­all cost of health care for every­one.

In re­sponse to ques­tions from STAT about why Miebo is an ex­pen­sive pre­scrip­tion drug, a rep­re­sen­ta­tive said in a state­ment, The FDA de­ter­mined that MIEBO acts at the cel­lu­lar and mol­e­c­u­lar level of the eye, which meant it had to go through the same rig­or­ous process as any new phar­ma­ceu­ti­cal — a full New Drug Application. Unlike in Europe, where all med­ical de­vice eye drops are pre­scrip­tion-free and cleared through a highly pre­dictable and fast path­way, we were re­quired to de­sign, en­roll and com­plete ex­ten­sive clin­i­cal tri­als in­volv­ing thou­sands of pa­tients, and pro­vide de­tailed safety and ef­fi­cacy data sub­mis­sions. Those stud­ies took years and sig­nif­i­cant in­vest­ment, but they en­sure that MIEBO meets the high­est reg­u­la­tory stan­dards for safety and ef­fec­tive­ness.”

Bausch & Lomb’s care­fully worded re­sponse ex­pertly side­steps the real is­sue. The FDAs test for OTC sta­tus is­n’t a drug’s mech­a­nism of ac­tion — it’s whether pa­tients can use it safely with­out a doc­tor. Miebo’s track record as an OTC prod­uct in Europe for nearly a decade shows it meets that stan­dard. Bausch & Lomb pro­vides no ev­i­dence, or even as­ser­tion, that it ever tried for OTC ap­proval in the U. S. Instead, it pur­sued the pre­scrip­tion route — not be­cause of reg­u­la­tory ne­ces­sity, but as a busi­ness strat­egy to se­cure patents and com­mand an $800 price. In do­ing so, B&L is weaponiz­ing a reg­u­la­tory loop­hole against American pa­tients, pri­or­i­tiz­ing profit over ac­cess, and leav­ing their significant in­vest­ment” as the cost of mo­nop­oly, not med­ical ne­ces­sity.

Even if you ac­cept Bausch & Lomb’s self-serv­ing ra­tio­nale, the an­swer is not to al­low the loop­hole to per­sist, but to close it. The FDA could re­quire any drug ap­proved as OTC in­ter­na­tion­ally be con­sid­ered for OTC sta­tus in the United States be­fore green­light­ing it as a pre­scrip­tion prod­uct — and man­date retroac­tive re­view of cases like Miebo.

The FDAs OTC mono­graph process, which as­sesses the safety and ef­fi­cacy of non­pre­scrip­tion drugs, makes this fea­si­ble, though it may need to be ad­justed slightly. Those changes might in­volve in­cor­po­rat­ing a mech­a­nism to make sure that over­seas OTC sta­tus trig­gers a re­view of U. S. pre­scrip­tion drugs con­tain­ing the same ac­tive in­gre­di­ents or for­mu­la­tions for po­ten­tial OTC des­ig­na­tion; de­vel­op­ing cri­te­ria to as­sess equiv­a­lency in safety and ef­fi­cacy stan­dards be­tween U.S. OTC re­quire­ments and those of other coun­tries; and es­tab­lish­ing a retroac­tive re­view path­way within the mono­graph process to han­dle ex­ist­ing pre­scrip­tion drugs al­ready mar­keted OTC in­ter­na­tion­ally.

EvoTears thrives abroad with­out safety con­cerns, coun­ter­ing in­dus­try claims of stricter U. S. stan­dards. This re­form would de­ter com­pa­nies from repack­ag­ing OTC drugs as high-cost pre­scrip­tions, fos­ter­ing com­pe­ti­tion and low­er­ing prices.

While this tac­tic is­n’t wide­spread, it joins loop­holes like late-listed patents, picket fence patents, or pay-for-de­lay generic deals that un­der­mine trust in an in­dus­try whose em­ploy­ees largely aim to save lives.

Miebo also shows how global ref­er­ence pric­ing could save bil­lions. Aligning with European prices could cut con­sumer costs while re­duc­ing doc­tor vis­its, phar­macy time, and ad­min­is­tra­tive bur­dens. For pa­tients who skip doses to af­ford gro­ceries, lower prices would mean bet­ter ac­cess and health. Reforms like the 2022 Inflation Reduction Act’s Medicare price ne­go­ti­a­tions set a prece­dent, but tar­geted rules are ur­gently needed.

Unexplained dif­fer­ences in drug prices be­tween the U. S. and other wealthy coun­tries erode the pub­lic’s trust in health care. Companies like Bausch & Lomb ex­ploit sys­temic gaps, leav­ing pa­tients and pay­ers to foot ex­or­bi­tant bills. An OTC eval­u­a­tion rule, with retroac­tive re­views, is a prac­ti­cal first step, sig­nal­ing that pa­tient ac­cess takes prece­dence over cor­po­rate greed.

Let’s end the price-goug­ing prac­tices of out­liers and build a health care sys­tem that puts pa­tients first. Just as tar­get­ing crim­i­nal out­liers fos­ters a law-abid­ing so­ci­ety, hold­ing bad phar­ma­ceu­ti­cal ac­tors ac­count­able is cru­cial for restor­ing trust and in­tegrity to our health care sys­tem. While broader ap­proaches to mak­ing health care more fair, ac­ces­si­ble, and af­ford­able are needed, some­times the way to save bil­lions is to start by sav­ing hun­dreds of mil­lions.

David Maris is a six-time No. 1 ranked phar­ma­ceu­ti­cal an­a­lyst with more than two decades cov­er­ing the in­dus­try. He cur­rently runs Phalanx Investment Partners, a fam­ily of­fice; is a part­ner in Wall Street Beats; and is co-au­thor of the re­cently pub­lished book The Fax Club Experiment.” He is cur­rently work­ing on his next book about health care in America.

...

Read the original on www.statnews.com »

10 198 shares, 20 trendiness

Launching Wolfram Compute Services

Let’s say you’ve done a com­pu­ta­tion in Wolfram Language. And now you want to scale it up. Maybe 1000x or more. Well, to­day we’ve re­leased an ex­tremely stream­lined way to do that. Just wrap the scaled up com­pu­ta­tion in and off it’ll go to our new Wolfram Compute Services sys­tem. Then—in a minute, an hour, a day, or what­ever—it’ll let you know it’s fin­ished, and you can get its re­sults.

For decades I’ve of­ten needed to do big, crunchy cal­cu­la­tions (usually for sci­ence). With large vol­umes of data, mil­lions of cases, ram­pant com­pu­ta­tional ir­re­ducibil­ity, etc. I prob­a­bly have more com­pute ly­ing around my house than most peo­ple—these days about 200 cores worth. But many nights I’ll leave all of that com­pute run­ning, all night—and I still want much more. Well, as of to­day, there’s an easy so­lu­tion—for every­one: just seam­lessly send your com­pu­ta­tion off to Wolfram Compute Services to be done, at ba­si­cally any scale.

For nearly 20 years we’ve had built-in func­tions like and in Wolfram Language that make it im­me­di­ate to par­al­lelize sub­com­pu­ta­tions. But for this to re­ally let you scale up, you have to have the com­pute. Which now—thanks to our new Wolfram Compute Services—everyone can im­me­di­ately get.

The un­der­ly­ing tools that make Wolfram Compute Services pos­si­ble have ex­isted in the Wolfram Language for sev­eral years. But what Wolfram Compute Services now does is to pull every­thing to­gether to pro­vide an ex­tremely stream­lined all-in-one ex­pe­ri­ence. For ex­am­ple, let’s say you’re work­ing in a note­book and build­ing up a com­pu­ta­tion. And fi­nally you give the in­put that you want to scale up. Typically that in­put will have lots of de­pen­den­cies on ear­lier parts of your com­pu­ta­tion. But you don’t have to worry about any of that. Just take the in­put you want to scale up, and feed it to . Wolfram Compute Services will au­to­mat­i­cally take care of all the de­pen­den­cies, etc.

And an­other thing: , like every func­tion in Wolfram Language, is deal­ing with sym­bolic ex­pres­sions, which can rep­re­sent any­thing—from nu­mer­i­cal ta­bles to im­ages to graphs to user in­ter­faces to videos, etc. So that means that the re­sults you get can im­me­di­ately be used, say in your Wolfram Notebook, with­out any im­port­ing, etc.

OK, so what kinds of ma­chines can you run on? Well, Wolfram Compute Services gives you a bunch of op­tions, suit­able for dif­fer­ent com­pu­ta­tions, and dif­fer­ent bud­gets. There’s the most ba­sic 1 core, 8 GB op­tion—which you can use to just get a com­pu­ta­tion off your own ma­chine”. You can pick a ma­chine with larger mem­ory—cur­rently up to about 1500 GB. Or you can pick a ma­chine with more cores—cur­rently up to 192. But if you’re look­ing for even larger scale par­al­lelism Wolfram Compute Services can deal with that too. Because can map a func­tion across any num­ber of el­e­ments, run­ning on any num­ber of cores, across mul­ti­ple ma­chines.

OK, so here’s a very sim­ple ex­am­ple—that hap­pens to come from some sci­ence I did a lit­tle while ago. Define a func­tion that ran­domly adds nonover­lap­ping pen­tagons to a clus­ter:

For 20 pen­tagons I can run this quickly on my ma­chine:

But what about for 500 pen­tagons? Well, the com­pu­ta­tional geom­e­try gets dif­fi­cult and it would take long enough that I would­n’t want to tie up my own ma­chine do­ing it. But now there’s an­other op­tion: use Wolfram Compute Services!

And all I have to do is feed my com­pu­ta­tion to :

Immediately, a job is cre­ated (with all nec­es­sary de­pen­den­cies au­to­mat­i­cally han­dled). And the job is queued for ex­e­cu­tion. And then, a cou­ple of min­utes later, I get an email:

Not know­ing how long it’s go­ing to take, I go off and do some­thing else. But a while later, I’m cu­ri­ous to check how my job is do­ing. So I click the link in the email and it takes me to a dash­board—and I can see that my job is suc­cess­fully run­ning:

I go off and do other things. Then, sud­denly, I get an email:

It fin­ished! And in the mail is a pre­view of the re­sult. To get the re­sult as an ex­pres­sion in a Wolfram Language ses­sion I just eval­u­ate a line from the email:

And this is now a com­putable ob­ject that I can work with, say com­put­ing ar­eas

One of the great strengths of Wolfram Compute Services is that it makes it easy to use large-scale par­al­lelism. You want to run your com­pu­ta­tion in par­al­lel on hun­dreds of cores? Well, just use Wolfram Compute Services!

Here’s an ex­am­ple that came up in some re­cent work of mine. I’m search­ing for a cel­lu­lar au­toma­ton rule that gen­er­ates a pat­tern with a lifetime” of ex­actly 100 steps. Here I’m test­ing 10,000 ran­dom rules—which takes a cou­ple of sec­onds, and does­n’t find any­thing:

To test 100,000 rules I can use and run in par­al­lel, say across the 16 cores in my lap­top:

Still noth­ing. OK, so what about test­ing 100 mil­lion rules? Well, then it’s time for Wolfram Compute Services. The sim­plest thing to do is just to sub­mit a job re­quest­ing a ma­chine with lots of cores (here 192, the max­i­mum cur­rently of­fered):

A few min­utes later I get mail telling me the job is start­ing. After a while I check on my job and it’s still run­ning:

I go off and do other things. Then, af­ter a cou­ple of hours I get mail telling me my job is fin­ished. And there’s a pre­view in the email that shows, yes, it found some things:

And here they are—rules plucked from the hun­dred mil­lion tests we did in the com­pu­ta­tional uni­verse:

But what if we wanted to get this re­sult in less than a cou­ple of hours? Well, then we’d need even more par­al­lelism. And, ac­tu­ally, Wolfram Compute Services lets us get that too—us­ing . You can think of as a souped up ana­log of —mapping a func­tion across a list of any length, split­ting up the nec­es­sary com­pu­ta­tions across cores that can be on dif­fer­ent ma­chines, and han­dling the data and com­mu­ni­ca­tions in­volved in a scal­able way.

Because is a pure we have to re­arrange our com­pu­ta­tion a lit­tle—mak­ing it run 100,000 cases of se­lect­ing from 1000 ran­dom in­stances:

The sys­tem de­cided to dis­trib­ute my 100,000 cases across 316 sep­a­rate child jobs”, here each run­ning on its own core. How is the job do­ing? I can get a dy­namic vi­su­al­iza­tion of what’s hap­pen­ing:

And it does­n’t take many min­utes be­fore I’m get­ting mail that the job is fin­ished:

And, yes, even though I only had to wait for 3 min­utes to get this re­sult, the to­tal amount of com­puter time used—across all the cores—is about 8 hours.

Now I can re­trieve all the re­sults, us­ing to com­bine all the sep­a­rate pieces I gen­er­ated:

And, yes, if I wanted to spend a lit­tle more, I could run a big­ger search, in­creas­ing the 100,000 to a larger num­ber; and Wolfram Compute Services would seam­lessly scale up.

Like every­thing around Wolfram Language, Wolfram Compute Services is fully pro­gram­ma­ble. When you sub­mit a job, there are lots of op­tions you can set. We al­ready saw the op­tion which lets you choose the type of ma­chine to use. Currently the choices range from Basic1x8 (1 core, 8 GB) through Basic4x16 (4 cores, 16 GB) to parallel com­pute” Compute192x384 (192 cores, 384 GB) and large mem­ory” Memory192x1536 (192 cores, 1536 GB).

Different classes of ma­chine cost dif­fer­ent num­bers of cred­its to run. And to make sure things don’t go out of con­trol, you can set the op­tions (maximum time in sec­onds) and (maximum num­ber of cred­its to use).

Then there’s no­ti­fi­ca­tion. The de­fault is to send one email when the job is start­ing, and one when it’s fin­ished. There’s an op­tion that lets you give a name to each job, so you can more eas­ily tell which job a par­tic­u­lar piece of email is about, or where the job is on the web dash­board. (If you don’t give a name to a job, it’ll be re­ferred to by the UUID it’s been as­signed.)

The op­tion lets you say what no­ti­fi­ca­tions you want, and how you want to re­ceive them. There can be no­ti­fi­ca­tions when­ever the sta­tus of a job changes, or at spe­cific time in­ter­vals, or when spe­cific num­bers of cred­its have been used. You can get no­ti­fi­ca­tions ei­ther by email, or by text mes­sage. And, yes, if you get no­ti­fied that your job is go­ing to run out of cred­its, you can al­ways go to the Wolfram Account por­tal to top up your cred­its.

There are many prop­er­ties of jobs that you can query. A cen­tral one is . But, for ex­am­ple, gives you a whole as­so­ci­a­tion of re­lated in­for­ma­tion:

If your job suc­ceeds, it’s pretty likely will be all you need. But if some­thing goes wrong, you can eas­ily drill down to study the de­tails of what hap­pened with the job, for ex­am­ple by look­ing at .

If you want to know all the jobs you’ve ini­ti­ated, you can al­ways look at the web dash­board, but you can also get sym­bolic rep­re­sen­ta­tions of the jobs from:

For any of these job ob­jects, you can ask for prop­er­ties, and you can for ex­am­ple also ap­ply to abort them.

Once a job has com­pleted, its re­sult will be stored in Wolfram Compute Services—but only for a lim­ited time (currently two weeks). Of course, once you’ve got the re­sult, it’s very easy to store it per­ma­nently, for ex­am­ple, by putting it into the Wolfram Cloud us­ing [expr]. (If you know you’re go­ing to want to store the re­sult per­ma­nently, you can also do the right in­side your .)

Talking about pro­gram­matic uses of Wolfram Compute Services, here’s an­other ex­am­ple: let’s say you want to gen­er­ate a com­pute-in­ten­sive re­port once a week. Well, then you can put to­gether sev­eral very high-level Wolfram Language func­tions to de­ploy a sched­uled task that will run in the Wolfram Cloud to ini­ti­ate jobs for Wolfram Compute Services:

And, yes, you can ini­ti­ate a Wolfram Compute Services job from any Wolfram Language sys­tem, whether on the desk­top or in the cloud.

Wolfram Compute Services is go­ing to be very use­ful to many peo­ple. But ac­tu­ally it’s just part of a much larger con­stel­la­tion of ca­pa­bil­i­ties aimed at broad­en­ing the ways Wolfram Language can be used.

Mathematica and the Wolfram Language started—back in 1988—as desk­top sys­tems. But even at the very be­gin­ning, there was a ca­pa­bil­ity to run the note­book front end on one ma­chine, and then have a remote ker­nel” on an­other ma­chine. (In those days we sup­ported, among other things, com­mu­ni­ca­tion via phone line!) In 2008 we in­tro­duced built-in par­al­lel com­pu­ta­tion ca­pa­bil­i­ties like and . Then in 2014 we in­tro­duced the Wolfram Cloud—both repli­cat­ing the core func­tion­al­ity of Wolfram Notebooks on the web, and pro­vid­ing ser­vices such as in­stant APIs and sched­uled tasks. Soon there­after, we in­tro­duced the Enterprise Private Cloud—a pri­vate ver­sion of Wolfram Cloud. In 2021 we in­tro­duced Wolfram Application Server to de­liver high-per­for­mance APIs (and it’s what we now use, for ex­am­ple, for Wolfram|Alpha). Along the way, in 2019, we in­tro­duced Wolfram Engine as a stream­lined server and com­mand-line de­ploy­ment of Wolfram Language. Around Wolfram Engine we built WSTPServer to serve Wolfram Engine ca­pa­bil­i­ties on lo­cal net­works, and we in­tro­duced WolframScript to pro­vide a de­ploy­ment-ag­nos­tic way to run com­mand-line-style Wolfram Language code. In 2020 we then in­tro­duced the first ver­sion of , to be used with cloud ser­vices such as AWS and Azure. But un­like with Wolfram Compute Services, this re­quired do it your­self” pro­vi­sion­ing and li­cens­ing with the cloud ser­vices. And, fi­nally, now, that’s what we’ve au­to­mated in Wolfram Compute Services.

OK, so what’s next? An im­por­tant di­rec­tion is the forth­com­ing Wolfram HPCKit—for or­ga­ni­za­tions with their own large-scale com­pute fa­cil­i­ties to set up their own back ends to , etc. is built in a very gen­eral way, that al­lows dif­fer­ent batch com­pu­ta­tion providers” to be plugged in. Wolfram Compute Services is ini­tially set up to sup­port just one stan­dard batch com­pu­ta­tion provider: . HPCKit will al­low or­ga­ni­za­tions to con­fig­ure their own com­pute fa­cil­i­ties (often with our help) to serve as batch com­pu­ta­tion providers, ex­tend­ing the stream­lined ex­pe­ri­ence of Wolfram Compute Services to on-premise or or­ga­ni­za­tional com­pute fa­cil­i­ties, and au­tomat­ing what is of­ten a rather fid­dly job process of sub­mis­sion (which, I must say, per­son­ally re­minds me a lot of the main­frame job con­trol sys­tems I used in the 1970s).

Wolfram Compute Services is cur­rently set up purely as a batch com­pu­ta­tion en­vi­ron­ment. But within the Wolfram System, we have the ca­pa­bil­ity to sup­port syn­chro­nous re­mote com­pu­ta­tion, and we’re plan­ning to ex­tend Wolfram Compute Services to of­fer this—al­low­ing one, for ex­am­ple, to seam­lessly run a re­mote ker­nel on a large or ex­otic re­mote ma­chine.

But this is for the fu­ture. Today we’re launch­ing the first ver­sion of Wolfram Compute Services. Which makes supercomputer power” im­me­di­ately avail­able for any Wolfram Language com­pu­ta­tion. I think it’s go­ing to be very use­ful to a broad range of users of Wolfram Language. I know I’m go­ing to be us­ing it a lot.

...

Read the original on writings.stephenwolfram.com »
