10 interesting stories served every morning and every evening.




1 464 shares, 25 trendiness

the frontier of vision AI


Gemini 3 Pro represents a generational leap from simple recognition to true visual and spatial reasoning. It is our most capable multimodal model ever, delivering state-of-the-art performance across document, spatial, screen and video understanding. This model sets new highs on vision benchmarks such as MMMU Pro and Video MMMU for complex visual reasoning, as well as use-case-specific benchmarks across document, spatial, screen and long video understanding.

Real-world documents are messy, unstructured, and difficult to parse — often filled with interleaved images, illegible handwritten text, nested tables, complex mathematical notation and non-linear layouts. Gemini 3 Pro represents a major leap forward in this domain, excelling across the entire document processing pipeline — from highly accurate Optical Character Recognition (OCR) to complex visual reasoning.

To truly understand a document, a model must accurately detect and recognize text, tables, math formulas, figures and charts regardless of noise or format. A fundamental capability is “derendering” — the ability to reverse-engineer a visual document back into structured code (HTML, LaTeX, Markdown) that would recreate it. As illustrated below, Gemini 3 demonstrates accurate perception across diverse modalities, including converting an 18th-century merchant log into a complex table, or transforming a raw image with mathematical annotation into precise LaTeX code.
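For developers, derendering is just a multimodal prompt. A minimal sketch with the google-genai Python SDK might look like this (the model id, file name and prompt are illustrative assumptions, not taken from this post):

    # Sketch: asking Gemini to de-render a scanned page into structured text.
    # Model id and input file are placeholders; adjust to your setup.
    from google import genai
    from google.genai import types

    client = genai.Client()  # reads the API key from the environment

    with open("merchant_log.png", "rb") as f:
        image = types.Part.from_bytes(data=f.read(), mime_type="image/png")

    response = client.models.generate_content(
        model="gemini-3-pro-preview",  # placeholder model id
        contents=[
            image,
            "Reconstruct this page as Markdown, preserving tables "
            "and converting any formulas to LaTeX.",
        ],
    )
    print(response.text)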

Example 2: Reconstructing equations from an image

Example 3: Reconstructing Florence Nightingale’s original Polar Area Diagram into an interactive chart (with a toggle!)

Users can rely on Gemini 3 to perform complex, multi-step reasoning across tables and charts — even in long reports. In fact, the model notably outperforms the human baseline on the CharXiv Reasoning benchmark (80.5%).

To illustrate this, imagine a user analyzing the 62-page U.S. Census Bureau “Income in the United States: 2022” report with the following prompt: “Compare the 2021–2022 percent change in the Gini index for ‘Money Income’ versus ‘Post-Tax Income’, and what caused the divergence in the post-tax measure, and in terms of ‘Money Income’, does it show the lowest quintile’s share rising or falling?” Swipe through the images below to see the model’s step-by-step reasoning.

Visual Extraction: To answer the Gini index comparison question, Gemini located and cross-referenced this information: Figure 3 shows “Money Income” decreased by 1.2 percent, and Table B-3 shows “Post-Tax Income” increased by 3.2 percent.

Causal Logic: Crucially, Gemini 3 does not stop at the numbers; it correlates this gap with the text’s policy analysis, correctly identifying the lapse of ARPA policies and the end of stimulus payments as the main causes.

Numerical Comparison: To determine whether the lowest quintile’s share was rising or falling, Gemini 3 looked at Table A-3, compared the numbers 2.9 and 3.0, and concluded that the share of aggregate household income held by the lowest quintile was rising.

Gemini 3 Pro is our strongest spatial understanding model so far. Combined with its strong reasoning, this enables the model to make sense of the physical world.

Pointing capability: Gemini 3 has the ability to point at specific locations in images by outputting pixel-precise coordinates. Sequences of 2D points can be strung together to perform complex tasks, such as estimating human poses or reflecting trajectories over time.

Open vocabulary references: Gemini 3 identifies objects and their intent using an open vocabulary. The most direct application is robotics: the user can ask a robot to generate spatially grounded plans like, “Given this messy table, come up with a plan on how to sort the trash.” This also extends to AR/XR devices, where the user can request an AI assistant to “Point to the screw according to the user manual.”
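As a rough illustration of how such point outputs can be consumed downstream (earlier Gemini releases documented points as [y, x] pairs normalized to a 0-1000 grid; treat that convention as an assumption and verify it against the current docs):

    # Sketch: converting normalized [y, x] points back to pixel coordinates.
    import json

    image_width, image_height = 1920, 1080  # example dimensions

    # A payload like this would come from a prompt such as:
    # 'Point to every screw. Answer as JSON: [{"point": [y, x], "label": ...}]'
    raw = '[{"point": [512, 250], "label": "screw"}]'  # example response

    for item in json.loads(raw):
        y, x = item["point"]
        px = (x / 1000 * image_width, y / 1000 * image_height)
        print(item["label"], "at pixel", px)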

Gemini 3.0 Pro’s spatial understanding really shines through its screen understanding of desktop and mobile OS screens. This reliability helps make computer use agents robust enough to automate repetitive tasks. UI understanding capabilities can also enable tasks like QA testing, user onboarding and UX analytics. The following computer use demo shows the model perceiving and clicking with high precision.

Gemini 3 Pro takes a massive leap forward in how AI understands video, the most complex data format we interact with. It is dense, dynamic, multimodal and rich with context.

1. High frame rate understanding: We have optimized the model to be much stronger at understanding fast-paced actions when sampling at more than 1 frame per second. Gemini 3 Pro can capture rapid details — vital for tasks like analyzing golf swing mechanics.

By processing video at 10 FPS — 10x the default speed — Gemini 3 Pro catches every swing and shift in weight, unlocking deep insights into player mechanics.

2. Video reasoning with “thinking” mode: We upgraded “thinking” mode to go beyond object recognition toward true video reasoning. The model can now better trace complex cause-and-effect relationships over time. Instead of just identifying what is happening, it understands why it is happening.

3. Turning long videos into action: Gemini 3 Pro bridges the gap between video and code. It can extract knowledge from long-form content and immediately translate it into functioning apps or structured code.

Here are a few ways we think various fields will benefit from Gemini 3’s capabilities.

Gemini 3.0 Pro’s enhanced vision capabilities drive significant gains in the education field, particularly for diagram-heavy questions central to math and science. It successfully tackles the full spectrum of multimodal reasoning problems found from middle school through post-secondary curriculums. This includes visual reasoning puzzles (like Math Kangaroo) and complex chemistry and physics diagrams.

Gemini 3’s visual intelligence also powers the generative capabilities of Nano Banana Pro. By combining advanced reasoning with precise generation, the model, for example, can help users identify exactly where they went wrong in a homework problem.

Prompt: “Here is a photo of my homework attempt. Please check my steps and tell me where I went wrong. Instead of explaining in text, show me visually on my image.” (Note: Student work is shown in blue; model corrections are shown in red.) [See prompt in Google AI Studio]

Gemini 3 Pro stands as our most capable general model for medical and biomedical imagery understanding, achieving state-of-the-art performance across major public benchmarks in MedXpertQA-MM (a difficult expert-level medical reasoning exam), VQA-RAD (radiology imagery Q&A) and MicroVQA (a multimodal reasoning benchmark for microscopy-based biological research).

Gemini 3 Pro’s enhanced document understanding helps professionals in finance and law tackle highly complex workflows. Finance platforms can seamlessly analyze dense reports filled with charts and tables, while legal platforms benefit from the model’s sophisticated document reasoning.

Gemini 3 Pro improves the way it processes visual inputs by preserving the native aspect ratio of images. This drives significant quality improvements across the board.

Additionally, developers gain granular control over performance and cost via the new media_resolution parameter. This allows you to tune visual token usage to balance fidelity against consumption:

High resolution: Maximizes fidelity for tasks requiring fine detail, such as dense OCR or complex document understanding.

Low resolution: Optimizes for cost and latency on simpler tasks, such as general scene recognition or long-context tasks.

For specific recommendations, refer to our Gemini 3.0 Documentation Guide. We are excited to see what you build with these new capabilities. To get started, check out our developer documentation or play with the model in Google AI Studio today.
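As a rough illustration of the media_resolution control described above (a sketch with the google-genai Python SDK; the model id is a placeholder and the enum values should be checked against the current documentation):

    # Sketch: trading visual fidelity against token usage.
    from google import genai
    from google.genai import types

    client = genai.Client()

    with open("report.pdf", "rb") as f:
        doc = types.Part.from_bytes(data=f.read(), mime_type="application/pdf")

    response = client.models.generate_content(
        model="gemini-3-pro-preview",  # placeholder model id
        contents=[doc, "Extract every table as CSV."],
        config=types.GenerateContentConfig(
            # HIGH maximizes fidelity for dense OCR; LOW cuts cost and latency.
            media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
        ),
    )
    print(response.text)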

...

Read the original on blog.google »

2 390 shares, 17 trendiness

Most Technical Problems Are Really People Problems


I once worked at a company which had an enormous amount of technical debt - millions of lines of code, no unit tests, based on frameworks that were well over a decade out of date. On one specific project, we had a market need to get some Windows-only modules running on Linux, and rather than cross-compiling, another team had simply copied & pasted a few hundred thousand lines of code, swapping Windows-specific components for Linux-specific. For the non-technical reader, this is an enormous problem because now two versions of the code exist. So, all features & bug fixes must be solved in two separate codebases that will grow apart over time. When I heard about this, a young & naive version of me set out to fix the situation…

Tech debt projects are always a hard sell to management, because even if everything goes flawlessly, the code just does roughly what it did before. This project was no exception, and the optics weren’t great. I did as many engineers do and “ignored the politics”, put my head down, and got it done. But, the project went long, and I lost management’s trust in the process.

I realized I was essentially trying to solve a people problem with a technical solution. Most of the developers at this company were happy doing the same thing today that they did yesterday…and five years ago. As Andrew Harmel-Law points out, code tends to follow the personalities of the people that wrote it. Personality types who intensely dislike change tend not to design their code with future change in mind.

Most technical problems are really people problems. Think about it. Why does technical debt exist? Because requirements weren’t properly clarified before work began. Because a salesperson promised an unrealistic deadline to a customer. Because a developer chose an outdated technology because it was comfortable. Because management was too reactive and cancelled a project mid-flight. Because someone’s ego wouldn’t let them see a better way of doing things.

The core issue with the project was that admitting the need for refactoring was also to admit that the way the company was building software was broken and that individual skillsets were sorely out of date. My small team was trying to fix one module of many, while other developers were writing code as they had been for decades. I had one developer openly tell me, “I don’t want to learn anything new.” I realized that you’ll never clean up tech debt faster than others create it. It is like triage in an emergency room: you must stop the bleeding first, then you can fix whatever is broken.

The project also disabused me of the engineer’s ideal of a world in which engineering problems can be solved in a vacuum - “staying out of politics” and letting the work speak for itself - a world where deadlines don’t exist…and let’s be honest, neither do customers. This ideal world rarely exists. The vast majority of projects have non-technical stakeholders, and telling them “just trust me; we’re working on it” doesn’t cut it.
I realized that the perception that your team is getting a lot done is just as important as getting a lot done. Non-technical people do not intuitively understand the level of effort required or the need for tech debt cleanup; it must be communicated effectively by engineering - in both initial estimates & project updates. Unless leadership has an engineering background, the value of the technical debt work likely needs to be quantified and shown as business value.

Perhaps these are the lessons that prep one for more senior positions. In my opinion, anyone above senior engineer level needs to know how to collaborate cross-functionally, regardless of whether they choose a technical or management track. Schools teach Computer Science, not navigating personalities, egos, and personal blindspots. I have worked with some incredible engineers, better than myself - the type that have deep technical knowledge on just about any technology you bring up. When I was younger, I wanted to be that engineer - “the engineer’s engineer”. But I realize now, that is not my personality. I’m too ADD to be completely heads down. :)

For all of their (considerable) strengths, more often than not, those engineers shy away from the interpersonal. They can be incredibly productive ICs, but may fail with bigger initiatives because they are only one person - a single processor core can only go so fast. Perhaps equally valuable is the “heads up coder” - the person who is deeply technical, but also able to pick their head up & see project risks coming (technical & otherwise) and steer the team around them.

You start out your day with a nice cup of coffee, and think, “Ah, greenfield project day…smooth sailing”. You fire up Visual Studio and create a new C# project. “First things first, I need library X,” you say. “Wait, what the?” The full error: Package ‘MyPackage 1.0.0.0’ was restored using ‘.NETFramework,Version=v4.6.1, .NETFramework,Version=v4.6.2, .NETFramework,Version=v4.7, .NETFramework,Version=v4.7.1, .NETFramework,Version=v4.7.2, .NETFramework,Version=v4.8, .NETFramework,Version=v4.8.1’ instead of the project target framework ‘net6.0’. This package may not be fully compatible with your project. “Ok,” you think, “That library is a bit older. I’ll go update the library project to .NET 6 to match my project.” But, where is .NET 6? Ok, what about my new project? Just as a test, does the warning go away if I set it to an older .NET Framework? Wait, where are the .NET Framework versions?…

Pointers are funny things. They are one of the make or break concepts for beginners, and even years later, they can cause grief to experienced developers. I am no exception. Here is one such story: I was faced with a class which I wanted to refactor. It can be simplified as follows: In the interest of breaking up the responsibilities, I added a couple of interfaces. The idea is that I can pass around smart pointers to these interfaces and begin to decouple portions of the code. For example, I can inject them into classes that need them: However, I made a mistake. I blame it on years of using boost::intrusive_ptr instead of std::shared_ptr, but enough excuses. Let’s see if you can spot it. Do you see it? If so, give yourself 5 points. If not, maybe after seeing the output: ‘second’ going out of scope… Destructor ‘first’ going out of scope… ‘root’ going out of scope… All done… Both shared pointers (first & second…

...

Read the original on blog.joeschrag.com »

3 359 shares, 43 trendiness

Self-hosting my photos with Immich

For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the gphotos-sync tool stopped working in March 2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up Immich, a self-hostable photo manager.

Here is the end result: a few (live) photos from NixCon 2025:

I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini X600), which consumes less than 10 W of power in idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024:

I installed Proxmox, an Open Source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server.

I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM. For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough.

I (declaratively) installed NixOS on that VM as described in this blog post:

Afterwards, I enabled Immich, with this exact configuration:
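A minimal version of such a configuration looks roughly like this (a sketch of just the stock services.immich NixOS module with its defaults; the exact options can differ per setup):

    # Sketch: enabling the stock NixOS Immich module.
    services.immich = {
      enable = true;
      # Listens on localhost:2283 by default;
      # media and state live under /var/lib/immich.
    };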

At this point, Immich is available on localhost, but not over the network, because NixOS enables a firewall by default. I could enable the services.immich.openFirewall option, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access — instead, I use tailscale serve to forward traffic to localhost:2283:

photos# tailscale serve --bg http://localhost:2283

Because I have Tailscale’s MagicDNS and TLS certificate provisioning enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone.

At first, I tried importing my photos using the official Immich CLI:

% nix run nixpkgs#immich-cli -- login https://photos.example.ts.net secret

% nix run nixpkgs#immich-cli -- upload --recursive /home/michael/lib/photo/gphotos-takeout

Unfortunately, the upload was not running reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that the upload can fail with a timeout.

The other issue was that even after the upload was done, I realized that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files:

Unfortunately, these files are not considered by immich-cli.

Luckily, there is a great third-party tool called immich-go, which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives.

I ran immich-go as follows and it worked beautifully:

% immich-go \
    upload \
    from-google-photos \
    --server=https://photos.example.ts.net \
    --api-key=secret \
    ~/Downloads/takeout-*.zip

My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right.

I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?!

If anyone knows, please send an explanation (or a link!) and I will update the article.

I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background. These notifications are not required for background upload to work, as an Immich developer confirmed on Reddit. Open Settings → Apps → Immich → Notifications and un-tick the permission checkbox:

Immich’s documentation on backups contains some good recommendations. The Immich developers recommend backing up the entire contents of UPLOAD_LOCATION, which is /var/lib/immich on NixOS. The backups subdirectory contains SQL dumps, whereas the 3 directories upload, library and profile contain all user-uploaded data.

Hence, I have set up a systemd timer that runs rsync to copy /var/lib/immich onto my PC, which is enrolled in a 3-2-1 backup scheme.
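Declared in NixOS, such a timer can look roughly like this (a sketch: the destination and rsync flags are illustrative, not the exact unit from this setup; startAt makes NixOS generate the matching timer):

    # Sketch: daily rsync of the Immich state directory to another machine.
    systemd.services.immich-backup = {
      serviceConfig.Type = "oneshot";
      script = ''
        ${pkgs.rsync}/bin/rsync -a /var/lib/immich/ backup@pc:immich/
      '';
      startAt = "daily";  # NixOS creates the accompanying systemd timer
    };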

Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP.

To share images, I still upload them to Google Photos (depending on who I share them with).

The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente.

I got the impression that Immich is more popular in my bubble, and Ente made the impression on me that its scope is far larger than what I am looking for:

Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy).

I don’t need an end-to-end encrypted platform. I already have encryption on the transit layer (Tailscale) and disk layer (LUKS), no need for more complexity.

Immich is a delightful app! It’s very fast and generally seems to work well.

The initial import is smooth, but only if you use the right tool. Ideally, the official immich-cli could be improved. Or maybe immich-go could be made the official one.

I think the auto backup is too hard to configure on an iPhone, so that could also be improved.

But aside from these initial stumbling blocks, I have no complaints.


...

Read the original on michael.stapelberg.ch »

4 304 shares, 28 trendiness

Fedi.Tips 🎄 (@FediTips@social.growyourown.services)

To use the Mastodon web application, please enable JavaScript. Alternatively, try one of the native apps for Mastodon for your platform.

...

Read the original on social.growyourown.services »

5 304 shares, 20 trendiness

YouTube secretly tests AI video retouching without creators’ consent

The controversy highlights a wider trend in which more of what people see online is pre-processed by AI before reaching them. Smartphone makers like Samsung and Google have long used AI to “enhance” images. Samsung previously admitted to using AI to sharpen moon photos, while Google’s Pixel “Best Take” feature stitches together facial expressions from multiple shots to create a single “perfect” group picture.

...

Read the original on www.ynetnews.com »

6 300 shares, 14 trendiness

Framework Laptop 13 gets ARM processor with 12 cores via upgrade kit

The Qualcomm Snapdragon X Plus and Snapdragon X Elite have proven that ARM processors have earned a place in the laptop market, as devices like the Lenovo IdeaPad Slim 5 stand out with their long battery life and an affordable price point.

MetaComputing is now offering an alternative to Intel, AMD and the Snapdragon X series. Specifically, the company has introduced a mainboard that can be installed in the Framework Laptop 13 or in a mini PC case. This mainboard is equipped with a CIX CP8180 ARM chipset, which is also found inside the Minisforum MS-R1. This processor has a total of eight ARM Cortex-A720 performance cores, of which the two fastest can hit boost clock speeds of up to 2.6 GHz. Moreover, there are four Cortex-A520 efficiency cores.

...

Read the original on www.notebookcheck.net »

7 268 shares, 14 trendiness

Patterns for Defensive Programming in Rust

Whenever I see the comment // this should never happen in code, I try to find out the exact conditions under which it could happen. And in 90% of cases, I find a way to do just that. More often than not, the developer just hasn’t considered all edge cases or future code changes.

In fact, the reason why I like this comment so much is that it often marks the exact spot where strong guarantees fall apart. Often, violations of implicit invariants that aren’t enforced by the compiler are the root cause.

Yes, the compiler prevents memory safety issues, and the standard library is best-in-class. But even the standard library has its warts, and bugs in business logic can still happen.

All we can work with are hard-learned patterns to write more defensive Rust code, learned throughout years of shipping Rust code to production. I’m not talking about design patterns here, but rather small idioms, which are rarely documented, but make a big difference in the overall code quality.

if !matching_users.is_empty() {
    let existing_user = &matching_users[0];
    // ...
}

What if you refactor it and forget to keep the is_empty() check? The problem is that the vector indexing is decoupled from checking the length. So matching_users[0] can panic at runtime if the vector is empty.

Checking the length and indexing are two separate operations, which can be changed independently. That’s our first implicit invariant that’s not enforced by the compiler.

If we use slice pattern matching instead, we’ll only get access to the element if the correct match arm is executed.

match matching_users.as_slice() {
    [] => todo!("What to do if no users found!?"),
    [existing_user] => {
        // Safe! Compiler guarantees exactly one element.
        // No need to index into the vector,
        // we can directly use `existing_user` here.
    }
    _ => Err(RepositoryError::DuplicateUsers),
}

Note how this automatically uncovered one more edge case: what if the list is empty? We hadn’t explicitly considered this case before. The compiler-enforced pattern matching requires us to think about all possible states! This is a common pattern in all robust Rust code: putting the compiler in charge of enforcing invariants.

When initializing an object with many fields, it’s tempting to use ..Default::default() to fill in the rest. In practice, this is a common source of bugs. You might forget to explicitly set a new field later when you add it to the struct (thus using the default value instead, which might not be what you want), or you might not be aware of all the fields that are being set to default values.

Instead of this:

let foo = Foo {
    field1: value1,
    field2: value2,
    ..Default::default() // Implicitly sets all other fields
};

Do this instead:

let foo = Foo {
    field1: value1,
    field2: value2,
    field3: value3, // Explicitly set all fields
    field4: value4,
};

Yes, it’s slightly more verbose, but what you gain is that the compiler will force you to handle all fields explicitly. Now when you add a new field to Foo, the compiler will remind you to set it here as well and reflect on which value makes sense.

If you still prefer to use Default but don’t want to lose compiler checks, you can also destructure the default instance:

let Foo { field1, field2, field3, field4 } = Foo::default();

This way, you get all the default values assigned to local variables and you can still override what you need:

let foo = Foo {
    field1: value1, // Override what you need
    field2: value2, // Override what you need
    field3,         // Use default value
    field4,         // Use default value
};

This pattern gives you the best of both worlds:

You get default values without duplicating default logic

The compiler will complain when new fields are added to the struct

It’s clear which fields use defaults and which have custom values

Completely destructuring a struct into its components can also be a defensive strategy for API adherence. For example, let’s say you’re building a pizza ordering system and have an order type like this:

struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    ordered_at: DateTime<Utc>,
}

For your order tracking system, you want to compare orders based on what’s actually on the pizza - the size, toppings, and crust_type. The ordered_at timestamp shouldn’t affect whether two orders are considered the same.

Here’s the problem with the obvious approach:

impl PartialEq for PizzaOrder {
    fn eq(&self, other: &Self) -> bool {
        self.size == other.size
            && self.toppings == other.toppings
            && self.crust_type == other.crust_type
        // Oops! What happens when we add extra_cheese or delivery_address later?
    }
}

Now imagine your team adds a field for customization options:

struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    ordered_at: DateTime<Utc>,
    extra_cheese: bool, // the new customization option
}

Your PartialEq implementation still compiles, but is it correct? Should extra_cheese be part of the equality check? Probably yes - a pizza with extra cheese is a different order! But you’ll never know because the compiler won’t remind you to think about it.

impl PartialEq for PizzaOrder {
    fn eq(&self, other: &Self) -> bool {
        let Self {
            size,
            toppings,
            crust_type,
            ordered_at: _,
        } = self;
        let Self {
            size: other_size,
            toppings: other_toppings,
            crust_type: other_crust,
            ordered_at: _,
        } = other;
        size == other_size && toppings == other_toppings && crust_type == other_crust
    }
}

Now when someone adds the extra_cheese field, this code won’t compile anymore. The compiler forces you to decide: should extra_cheese be included in the comparison or explicitly ignored with extra_cheese: _?

This pattern works for any trait implementation where you need to handle struct fields: Hash, Debug, Clone, etc. It’s especially valuable in codebases where structs evolve frequently as requirements change.
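For instance, the same destructuring keeps a hand-written Hash implementation consistent with the PartialEq above (a sketch; it assumes the remaining field types implement Hash):

use std::hash::{Hash, Hasher};

impl Hash for PizzaOrder {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Same trick: adding a field breaks compilation until you decide
        // whether it belongs in the hash or is explicitly ignored.
        let Self { size, toppings, crust_type, ordered_at: _ } = self;
        size.hash(state);
        toppings.hash(state);
        crust_type.hash(state);
    }
}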

Code Smell: From Impls That Are Really TryFrom

Sometimes there’s no conversion that will work 100% of the time. That’s fine. When that’s the case, resist the temptation to offer a From implementation out of habit; use TryFrom instead.

Here’s an example of TryFrom in disguise:

impl From<&DetectorStartupErrorReport> for DetectorStartupErrorSubject {
    fn from(report: &DetectorStartupErrorReport) -> Self {
        let postfix = report
            .get_identifier()
            .or_else(get_binary_name)
            .unwrap_or_else(|| UNKNOWN_DETECTOR_SUBJECT.to_string());
        Self(StreamSubject::from(
            format!("apps.errors.detectors.startup.{postfix}").as_str(),
        ))
    }
}

The unwrap_or_else is a hint that this conversion can fail in some way. We set a default value instead, but is it really the right thing to do for all callers? This should be a TryFrom implementation instead, making the fallible nature explicit. We fail fast instead of continuing with potentially flawed business logic.
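A sketch of the explicit version (the error type here is illustrative, not from the original codebase):

// Illustrative error type for the sketch.
struct MissingIdentifier;

impl TryFrom<&DetectorStartupErrorReport> for DetectorStartupErrorSubject {
    type Error = MissingIdentifier;

    fn try_from(report: &DetectorStartupErrorReport) -> Result<Self, Self::Error> {
        let postfix = report
            .get_identifier()
            .or_else(get_binary_name)
            .ok_or(MissingIdentifier)?; // fail fast instead of silently defaulting
        Ok(Self(StreamSubject::from(
            format!("apps.errors.detectors.startup.{postfix}").as_str(),
        )))
    }
}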

It’s tempting to use match in combination with a catch-all pattern like _ => {}, but this can haunt you later. The problem is that you might forget to handle a new case that was added later.

match self {
    Self::Variant1 => { /* … */ }
    Self::Variant2 => { /* … */ }
    _ => { /* catch-all */ }
}

match self {
    Self::Variant1 => { /* … */ }
    Self::Variant2 => { /* … */ }
    Self::Variant3 => { /* … */ }
    Self::Variant4 => { /* … */ }
}

By spelling out all variants explicitly, the compiler will warn you when a new variant is added, forcing you to handle it. Another case of putting the compiler to work.

If the code for two variants is the same, you can group them:

match self {
    Self::Variant1 => { /* … */ }

...

Read the original on corrode.dev »

8 267 shares, 9 trendiness

Influential study on glyphosate safety retracted 25 years after publication

A quarter-century after its publication, one of the most influential research articles on the potential carcinogenicity of glyphosate has been retracted for “several critical issues that are considered to undermine the academic integrity of this article and its conclusions.” In a retraction notice dated Friday, November 28, the journal Regulatory Toxicology and Pharmacology announced that the study, published in April 2000 and concluding the herbicide was safe, has been removed from its archives. The disavowal comes 25 years after publication and eight years after thousands of internal Monsanto documents were made public during US court proceedings (the “Monsanto Papers”), revealing that the actual authors of the article were not the listed scientists — Gary M. Williams (New York Medical College), Robert Kroes (Ritox, Utrecht University, Netherlands), and Ian C. Munro (Intertek Cantox, Canada) — but rather Monsanto employees.

Known as “ghostwriting,” this practice is considered a form of scientific fraud. It involves companies paying researchers to sign their names to research articles they did not write. The motivation is clear: When a study supports the safety of a pesticide or drug, it appears far more credible if not authored by scientists employed by the company marketing the product.


...

Read the original on www.lemonde.fr »

9 266 shares, 15 trendiness

Sam Altman’s Dirty DRAM Deal

Or: How the AI Bubble, Panic, and Unpreparedness Stole Christmas

Written by Tom of Moore’s Law Is Dead

At the beginning of November, I ordered a 32GB DDR5 kit for pairing with a Minisforum BD790i X3D motherboard, and three weeks later those very same sticks of DDR5 are now listed for a staggering $330 – a 156% increase in price from less than a month ago! At this rate, it seems likely that by Christmas, that DDR5 kit alone could be worth more than the entire Zen 4 X3D platform I planned to pair it with! How could this happen, and more specifically — how could this happen THIS quickly? Well, buckle up! I am about to tell you the story of Sam Altman’s Dirty DRAM Deal, or: How the AI bubble, panic, and unpreparedness stole Christmas…

But before I dive in, let me make it clear that my RAM kit’s 156% jump in price isn’t a fluke or some extreme example of what’s going on right now. Nope, and in fact, I’d like to provide two more examples of how impossible it is becoming to get ahold of RAM - these were provided by a couple of our sources within the industry:

One source that works at a US retailer stated that a RAM manufacturer called them in order to inquire if they might buy RAM from them to stock up for their other customers. This would be like Corsair asking a Best Buy if they had any RAM around.

Another source that works at a prebuilt PC company was recently given an estimate for when they would receive RAM orders if they placed them now…and they were told December…of 2026.

So what happened? Well, it all comes down to three perfectly synergistic events:

1. Two unprecedented RAM deals that took everyone by surprise.
2. The secrecy and size of the deals triggered full-scale panic buying from everyone else.
3. The market had almost zero safety stock left due to tariffs, worry about RAM prices over the summer, and stalled equipment transfers.

Below, we’re going to walk through each of these factors — and then I’m going to warn you about which hardware categories will be hit the hardest, which products are already being cancelled, and what you should buy before the shelves turn into a repeat of 2021–2022…because this is doomed to turn into much more than just RAM scarcity…

It starts with OpenAI’s deals with Samsung and SK Hynix for 40% of the world’s DRAM supply. Now, did OpenAI’s competition suspect some big RAM deals could be signed in late 2025? Yes. Ok, but did they think it would be deals this huge and with multiple companies? NO! In fact, if you go back and read reporting on Sam Altman’s now infamous trip to South Korea on October 1st, even just mere hours before the massive deals with Samsung and SK Hynix were announced — most reporting simply mentioned vague reports about Sam talking to Samsung, SK Hynix, TSMC, and Foxconn. But the reporting at the time was soft, almost dismissive — “exploring ties,” “seeking cooperation,” “probing for partnerships.” Nobody hinted that OpenAI was about to swallow up to 40% of global DRAM output — even on the morning before it happened!
Nobody saw this coming - this is clear in the lack of reporting about the deals before they were announced, and every MLID source who works in DRAM manufacturing and distribution insists this took everyone in the industry by surprise.

To be clear - the shock wasn’t that OpenAI made a big deal, no, it was that they made two massive deals this big, at the same time, with Samsung and SK Hynix simultaneously! In fact, according to our sources - both companies had no idea how big each other’s deal was, nor how close to simultaneous they were. And this secrecy mattered. It mattered a lot.

Had Samsung known SK Hynix was about to commit a similar chunk of supply — or vice-versa — the pricing and terms would have likely been different. It’s entirely conceivable they wouldn’t have both agreed to supply such a substantial part of global supply if they had known more…but at the end of the day - OpenAI did succeed in keeping the circles tight, locking down the NDAs, and leveraging the fact that these companies assumed the other wasn’t giving up this much wafer volume simultaneously…in order to make a surgical strike on the global RAM supply chain…and it’s worked so far…

Part II — Instant Panic: How did we miss this?

Imagine you’re running a hyperscaler, or maybe you’re a major OEM, or perhaps pretend that you are simply one of OpenAI’s chief competitors: On October 1st of 2025, you would have woken up to the news that OpenAI had just cornered the memory market more aggressively than any company in the last decade, and you hadn’t heard even a murmur that this was coming beforehand! Well, you would probably make some follow-up calls to colleagues in the industry, and then also quickly hear rumors that it wasn’t just you - the two largest suppliers didn’t even see each other’s simultaneous cooperation with OpenAI coming! You wouldn’t go: “Well, that’s an interesting coincidence”, no, you would say: “WHAT ELSE IS GOING ON THAT WE DON’T KNOW ABOUT?”

Again — it’s not the size of the deals that’s solely the issue here, no, it’s also the secrecy of them. On October 1st, Silicon Valley executives and procurement managers panicked over concerns like these:

What other deals don’t we know about? Is this just the first of many?

None of our DRAM suppliers warned us ahead of time! We have to assume they also won’t in the future, and that it’s possible more of the global DRAM supply could be bought up without us getting a single warning!

We know OpenAI’s competitors are already panic-buying! If we don’t move we might be locked out of the market until 2028!

OpenAI’s competitors, OEMs, and cloud providers scrambled to secure whatever inventory remained out of self-defense, and self-defense in a world that was entirely unprepared due to the accelerant I’ll now explain in Part III…

Normally, the DRAM market has buffers: warehouses of emergency stock, excess wafer starts, older DRAM manufacturing machinery being sold off to budget brands while the big brands upgrade their production lines…but not in 2025. In 2025 those would-be buffers were depleted for three separate reasons:

Tariff Chaos. Companies had deliberately reduced how much DRAM they ordered for their safety stock over the summer of 2025 because tariffs were changing almost weekly. Every RAM purchase risked being made at the wrong moment — and so fewer purchases were made.

Prices had been falling all summer.
Because of the hesitancy to purchase as much safety stock as usual, RAM prices were also genuinely falling over time. And, obviously, when memory is getting cheaper month over month, the last thing you feel pressured to do is buy a commodity that could be cheaper the next month…so everyone waited.

Secondary RAM Manufacturing Had Stalled. Budget brands normally buy older DRAM fabrication equipment from mega-producers like Samsung when Samsung upgrades their DRAM lines to the latest and greatest equipment. This allows the DRAM market to produce more than it would otherwise, because upgrading the fanciest production lines then adds capacity to the market instead of merely replacing it. However, Korean memory firms have been terrified that reselling old equipment to China-adjacent OEMs might trigger U.S. retaliation…and so those machines have been sitting idle in warehouses since early spring.

Yep, there was no cushion. OpenAI hit the market at the exact moment it was least prepared. And now time for the biggest twist of all, a twist that should be getting discussed by far more people in this writer’s opinion: OpenAI isn’t even bothering to buy finished memory modules! No, their deals are unprecedentedly only for raw wafers — uncut, unfinished, and not even allocated to a specific DRAM standard yet. It’s not even clear if they have decided yet on how or when they will finish them into RAM sticks or HBM! Right now it seems like these wafers will just be stockpiled in warehouses — like a kid who hides the toybox because they’re afraid nobody wants to play with them, and thus selfishly feels nobody but them should get the toys!

And let’s just say it: Here is the uncomfortable truth Sam Altman is always loath to admit in interviews: OpenAI is worried about losing its lead. The last 18 months have seen competitors catching up fast — Anthropic, Meta, xAI, and specifically Google’s Gemini 3 has gotten a ton of praise just in the past week. Everyone’s chasing training capacity. Everyone needs memory. DRAM is the lifeblood of scaling inference and training throughput. Cutting supply to your rivals is not a conspiracy theory. It’s a business tactic as old as business itself. And so, when you consider how secretive OpenAI was about their deals with Samsung and SK Hynix, but additionally how unready they were to immediately utilize their warehouses of DRAM wafers — it sure seems like a primary goal of these deals was to cut off supply to rivals, and not just an attempt to protect OpenAI’s own supply…

Part V — What will be cancelled? What should you buy now?

Alright, now that we are done explaining the why, let’s get to the what – because even if the RAM shortage miraculously improves immediately behind the scenes — even if the AI Bubble instantly popped or 10 companies started tooling up for more DRAM capacity this second (and many are, to be fair) — at a minimum the next six to nine months are already screwed. See above: DRAM manufacturers are quoting 13-month lead times for DDR5! This is not a temporary blip. This could be a once-in-a-generation shock. So what gets hit first? What gets hit hardest? Well, below is an E through S-Tier ranking of which products are the most “screwed”:

S-Tier (Already Screwed — Too Late to Buy)

RAM itself, obviously. RAM prices have “exploded”. The detonation is in the past.

SSDs.
These tend to follow DRAM pricing with a lag.

RADEON GPUs. AMD doesn’t bundle RAM in their BOM kits to AIBs the way Nvidia does. In fact, the RX 9070 GRE 16GB this channel leaked months ago is almost certainly cancelled according to our sources.

XBOX. Microsoft didn’t plan. Prices may rise and/or supply may dwindle in 2026.

Nvidia GPUs. Nvidia maintains large memory inventories for its board partners, giving them a buffer. But high-capacity GPUs (like a hypothetical 24GB 5080 SUPER) are on ice for now because memory stores were never sufficiently built up. In fact, Nvidia is quietly telling partners that their SUPER refresh “might” launch Q3 2026 — although most partners think it’s just a placeholder for when Nvidia expects new capacity to come online, and thus SUPER may never launch.

C-Tier (Think about buying soon)

Laptops and phones. These companies negotiate immense long-term contracts, so they’re not hit immediately. But once their stockpiles run dry, watch out!

D-Tier (Consider buying soon, but there’s no rush)

PlayStation. Sony planned better than almost anyone else. They bought aggressively during the summer price trough, which is why they can afford a Black Friday discount while everyone else is raising prices.

Anything without RAM. Specifically, CPUs that do not come with coolers could see price drops over time, since there could be a drop in demand for CPUs if nobody has the RAM to feed them in systems.

???-Tier

Steam Machine. Valve keeps things quiet, but the big unknown is whether they pre-bought RAM months ago before announcing their much-hyped Steam Machine. If they did already stockpile an ample supply of DDR5 - then Steam Machine should launch fine, but supply could dry up temporarily at some point while they wait for prices to drop. However, if they didn’t plan ahead - expect a high launch price and very little resupply…it might even need to be cancelled, or there might need to be a variant offered without RAM included (BYO RAM Edition!).

And that’s it! This last bit was the most important part of the article in this writer’s opinion — an attempt at helping you avoid getting burned. Well, actually, there is one other important reason for this article’s existence I’ll tack onto the end — a hope that other people start digging into what’s going on at OpenAI. I mean seriously — do we even have a single reliable audit of their financials to back up them outrageously spending this much money… Heck, I’ve even heard from numerous sources that OpenAI is “buying up the manufacturing equipment as well” — and without mountains of concrete proof, and/or more input from additional sources on what that really means…I don’t feel I can touch that hot potato without getting burned…but I hope someone else will…

...

Read the original on www.mooreslawisdead.com »

10 249 shares, 20 trendiness

Brendan Gregg's Blog


I’ve resigned from Intel and accepted a new opportunity. If you are an Intel employee, you might have seen my fairly long email that summarized what I did in my 3.5 years. Much of this is public:

It’s still early days for AI flame graphs. Right now when I browse CPU performance case studies on the Internet, I’ll often see a CPU flame graph as part of the analysis. We’re a long way from that kind of adoption for GPUs (and it doesn’t help that our open source version is Intel only), but I think as GPU code becomes more complex, with more layers, the need for AI flame graphs will keep increasing.

I also supported cloud computing, participating in 110 customer meetings, and created a company-wide strategy to win back the cloud with 33 specific recommendations, in collaboration with others across 6 organizations. It is some of my best work and features a visual map of interactions between all 19 relevant teams, described by Intel long-timers as the first time they have ever seen such a cross-company map. (This strategy, summarized in a slide deck, is internal only.)

I always wish I did more, in any job, but I’m glad to have contributed this much especially given the context: I overlapped with Intel’s toughest 3 years in history, and I had a hiring freeze for my first 15 months.

My fond memories from Intel include meeting Linus at an Intel event who said “everyone is using fleme graphs these days” (Finnish accent), meeting Pat Gelsinger who knew about my work and introduced me to everyone at an exec all hands, surfing lessons at an Intel Australia and HP offsite (mp4), and meeting Harshad Sane (Intel cloud support engineer) who helped me when I was at Netflix and now has joined Netflix himself — we’ve swapped ends of the meeting table. I also enjoyed meeting Intel’s hardware fellows and senior fellows who were happy to help me understand processor internals. (Unrelated to Intel, but if you’re a Who fan like me, I recently met some other people as well!)

My next few years at Intel would have focused on execution of those 33 recommendations, which Intel can continue to do in my absence. Most of my recommendations aren’t easy, however, and require accepting change, ELT/CEO approval, and multiple quarters of investment. I won’t be there to push them, but other employees can (my CloudTeams strategy is in the inbox of various ELT, and in a shared folder with all my presentations, code, and weekly status reports). This work will hopefully live on and keep making Intel stronger. Good luck.

...

Read the original on www.brendangregg.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.