10 interesting stories served every morning and every evening.




1 1,641 shares, 78 trendiness

LinkedIn Is Illegally Searching Your Computer

Every time any of LinkedIn’s one bil­lion users vis­its linkedin.com, hid­den code searches their com­puter for in­stalled soft­ware, col­lects the re­sults, and trans­mits them to LinkedIn’s servers and to third-party com­pa­nies in­clud­ing an American-Israeli cy­ber­se­cu­rity firm.

The user is never asked. Never told. LinkedIn’s pri­vacy pol­icy does not men­tion it.

Because LinkedIn knows each user’s real name, em­ployer, and job ti­tle, it is not search­ing anony­mous vis­i­tors. It is search­ing iden­ti­fied peo­ple at iden­ti­fied com­pa­nies. Millions of com­pa­nies. Every day. All over the world.
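The article does not reproduce the hidden code, but detecting installed extensions from a web page is a known technique: the page probes each extension’s “web-accessible resources” and records which probes load. A minimal sketch of the general approach, using made-up placeholder IDs and paths rather than anything from LinkedIn’s actual scan list:

```javascript
// Hypothetical probe list. IDs and resource paths are placeholders for
// illustration only, not entries from any real scan list.
const PROBES = [
  { name: "example-crm-extension", id: "a".repeat(32), path: "icon.png" },
  { name: "example-job-search-tool", id: "b".repeat(32), path: "logo.svg" },
];

// A page can reference an installed extension's web-accessible resource
// through a URL of this form; if the resource loads, the extension exists.
function probeUrl({ id, path }) {
  return `chrome-extension://${id}/${path}`;
}

// `isFetchable` stands in for an async fetch() of each probe URL in a real
// browser; it returns true when the resource loads, i.e. when the
// corresponding extension is installed.
function detectInstalled(probes, isFetchable) {
  return probes.filter((p) => isFetchable(probeUrl(p))).map((p) => p.name);
}
```

Tied to a logged-in profile, the list of names such a routine returns is exactly the kind of payload the article says gets transmitted.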

Fairlinked e. V. is an as­so­ci­a­tion of com­mer­cial LinkedIn users. We rep­re­sent the pro­fes­sion­als who use LinkedIn, the busi­nesses that in­vest in and de­pend on the plat­form, and the tool­mak­ers who build prod­ucts for it.

BrowserGate is our in­ves­ti­ga­tion and cam­paign to doc­u­ment one of the largest cor­po­rate es­pi­onage and data breach scan­dals in dig­i­tal his­tory, to in­form the pub­lic and reg­u­la­tors, to col­lect ev­i­dence, and to raise funds for the le­gal pro­ceed­ings re­quired to stop it.

LinkedIn’s scan re­veals the re­li­gious be­liefs, po­lit­i­cal opin­ions, dis­abil­i­ties, and job search ac­tiv­ity of iden­ti­fied in­di­vid­u­als. LinkedIn scans for ex­ten­sions that iden­tify prac­tic­ing Muslims, ex­ten­sions that re­veal po­lit­i­cal ori­en­ta­tion, ex­ten­sions built for neu­ro­di­ver­gent users, and 509 job search tools that ex­pose who is se­cretly look­ing for work on the very plat­form where their cur­rent em­ployer can see their pro­file.

Under EU law, this cat­e­gory of data is not reg­u­lated. It is pro­hib­ited. LinkedIn has no con­sent, no dis­clo­sure, and no le­gal ba­sis. Its pri­vacy pol­icy does not men­tion any of this.

LinkedIn scans for over 200 prod­ucts that di­rectly com­pete with its own sales tools, in­clud­ing Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user’s em­ployer, it can map which com­pa­nies use which com­peti­tor prod­ucts. It is ex­tract­ing the cus­tomer lists of thou­sands of soft­ware com­pa­nies from their users’ browsers with­out any­one’s knowl­edge.

Then it uses what it finds. LinkedIn has al­ready sent en­force­ment threats to users of third-party tools, us­ing data ob­tained through this covert scan­ning to iden­tify its tar­gets.

In 2023, the EU des­ig­nated LinkedIn as a reg­u­lated gate­keeper un­der the Digital Markets Act and or­dered it to open its plat­form to third-party tools. LinkedIn’s re­sponse:

It published two restricted APIs and presented them to the European Commission as compliance. Together, these APIs handle approximately 0.07 calls per second. Meanwhile, LinkedIn already operates an internal API called Voyager that powers every LinkedIn web and mobile product at 163,000 calls per second, more than two million times the volume of the APIs it offered as compliance. In Microsoft’s 249-page compliance report to the EU, the word “API” appears 533 times. “Voyager” appears zero times.

At the same time, LinkedIn ex­panded its sur­veil­lance of the ex­act tools the reg­u­la­tion was de­signed to pro­tect. The scan list grew from roughly 461 prod­ucts in 2024 to over 6,000 by February 2026. The EU told LinkedIn to let third-party tools in. LinkedIn built a sur­veil­lance sys­tem to find and pun­ish every user of those tools.

LinkedIn loads an invisible tracking element, zero pixels wide and hidden off-screen, from HUMAN Security (formerly PerimeterX), an American-Israeli cybersecurity firm; it sets cookies on your browser without your knowledge. A separate fingerprinting script runs from LinkedIn’s own servers. A third script from Google executes silently on every page load. All of it encrypted. None of it disclosed.

Microsoft has 33,000 em­ploy­ees and a $15 bil­lion le­gal bud­get. We have the ev­i­dence. What we need is peo­ple and fund­ing to hold them ac­count­able.

...

Read the original on browsergate.eu »

2 1,306 shares, 76 trendiness

Gemma 4

Our most in­tel­li­gent open mod­els, built from Gemini 3 re­search and tech­nol­ogy to max­i­mize in­tel­li­gence-per-pa­ra­me­ter


Build autonomous agents that plan, navigate apps, and complete tasks on your behalf, with native support for function calling. Develop applications with strong audio and visual understanding, for rich multimodal support. Create multilingual experiences that go beyond translation and understand cultural context. Improve performance for specific tasks by training Gemma using your preferred frameworks and techniques. Run models on your own hardware for efficient development and deployment.

A new level of intelligence for mobile and IoT devices, with audio and vision support for real-time edge processing. These models can run completely offline with near-zero latency on edge devices like phones, Raspberry Pi, and Jetson Nano.

Advanced rea­son­ing for IDEs, cod­ing as­sis­tants, and agen­tic work­flows. These mod­els are op­ti­mized for con­sumer GPUs — giv­ing stu­dents, re­searchers, and de­vel­op­ers the abil­ity to turn work­sta­tions into lo­cal-first AI servers.

Gemma 4 mod­els un­dergo the same rig­or­ous in­fra­struc­ture se­cu­rity pro­to­cols as our pro­pri­etary mod­els. By choos­ing Gemma 4, en­ter­prises and sov­er­eign or­ga­ni­za­tions gain a trusted, trans­par­ent foun­da­tion that de­liv­ers state-of-the-art ca­pa­bil­i­ties while meet­ing the high­est stan­dards for se­cu­rity and re­li­a­bil­ity.

...

Read the original on deepmind.google »

3 797 shares, 41 trendiness

Why Swedish Schools Are Bringing Back Books

In 2023, the Swedish gov­ern­ment an­nounced that the coun­try’s schools would be go­ing back to ba­sics, em­pha­siz­ing skills such as read­ing and writ­ing, par­tic­u­larly in early grades. After mostly be­ing side­lined, phys­i­cal books are now be­ing rein­tro­duced into class­rooms, and stu­dents are learn­ing to write the old-fash­ioned way: by hand, with a pen­cil or pen, on sheets of pa­per. The Swedish gov­ern­ment also plans to make schools cell­phone-free through­out the coun­try.

Educational au­thor­i­ties have been in­vest­ing heav­ily. Last year alone, the ed­u­ca­tion min­istry al­lo­cated $83 mil­lion to pur­chase text­books and teach­ers’ guides. In a coun­try with about 11 mil­lion peo­ple, the aim is for every stu­dent to have a phys­i­cal text­book for each sub­ject. The gov­ern­ment also put $54 mil­lion to­wards the pur­chase of fic­tion and non-fic­tion books for stu­dents.

These moves represent a dramatic pivot from previous decades, during which Sweden — and many other nations — moved away from physical books in favor of tablets and digital resources in an effort to prepare students for life in an online world. Perhaps unsurprisingly, the Nordic country’s efforts have sparked a debate on the role of digital technology in education, one that extends well beyond the country’s borders. U.S. parents in districts that have adopted digital technology to a great extent may be wondering if educators will reverse course, too.

So why did Sweden pivot? In an email to Undark, Linda Fälth, a researcher in teacher education at Linnaeus University, wrote that the decision to “reinvest in physical textbooks and reduce the emphasis on digital devices” was prompted by several factors, including questions around whether the digitalization of classrooms had been evidence-based. “There was also a broader cultural reassessment,” Fälth wrote. “Sweden had positioned itself as a frontrunner in digital education, but over time concerns emerged about screen time, distraction, reduced deep reading, and the erosion of foundational skills such as sustained attention and handwriting.”

Fälth noted that proponents of reform believe that “basic skills — especially reading, writing, and numeracy — must be firmly established first, and that physical textbooks are often better suited for that purpose.”

Between 2000 and 2012, Swedish stu­dents’ scores on stan­dard­ized tests steadily de­clined in read­ing, math, and sci­ence. Though they re­cov­ered ground be­tween 2012 and 2018, those scores had dropped again by 2022.

Though it’s un­clear pre­cisely how much of the de­cline is due to dig­i­ti­za­tion, there is some ev­i­dence that ana­log teach­ing ma­te­ri­als for read­ing may be su­pe­rior to screen learn­ing. However, this ap­plies to ex­pos­i­tory as op­posed to nar­ra­tive texts. Narrative texts tell a story, whether fic­tion or non-fic­tion, while ex­pos­i­tory texts are de­signed to in­form, de­scribe, or ex­plain a topic in a log­i­cal, fac­tual man­ner.

Swedish officials emphasize that digital technology isn’t being removed from schools altogether. Rather, “digital aids should only be introduced in teaching at an age when they encourage, rather than hinder, pupils’ learning.” Achieving digital competence remains an important objective, particularly in higher grades.

Historically, the tech­nol­ogy in­dus­try has pushed for more use of dig­i­tal learn­ing, see­ing it­self as a trans­former of ed­u­ca­tion. In the 1980s, Apple helped bring about the use of com­put­ers in schools. Then, start­ing with the use of the in­ter­net, and later in­te­grat­ing mo­bile de­vices, tech­nol­ogy re­shaped the ed­u­ca­tional land­scape. Education ex­perts sug­gest it can fos­ter a learn­ing ex­pe­ri­ence that is more in­ter­ac­tive, ac­ces­si­ble, and tai­lored to the needs of in­di­vid­ual stu­dents.

In the U.S., the trend nationally in recent years has been toward the use of increasingly sophisticated methods of digital learning, such as providing children with laptops or devices like the iPad. According to a survey conducted by the EdWeek Research Center, part of the trade publication Education Week, 90 percent of school district leaders were providing devices for every middle and high school student as of March 2021. More than 80 percent of school district leaders said the same was true for elementary school students.

And now, technology giants such as Google, Microsoft, and OpenAI are urging schools to teach literacy in artificial intelligence. Some working in education believe that schools ought to prepare pupils for employers who expect digital fluency. This may indeed be pertinent in the age of AI. More than 50 percent of teens in America have used AI chatbots for schoolwork, according to a survey conducted by the Pew Research Center.

According to a 2023 sur­vey, 30 per­cent of ed­u­ca­tors said their stu­dents spend at least half of their class­room read­ing time do­ing so dig­i­tally. But this may have draw­backs. Researchers sug­gest that read­ing on dig­i­tal dis­plays in­stead of pa­per may be more de­mand­ing men­tally, es­pe­cially for younger stu­dents. Studies have linked heavy dig­i­tal use to re­duced com­pre­hen­sion and mem­ory re­ten­tion as well as eye strain.

The limitations of educational technology became apparent during the Covid-19 pandemic. When online learning became the norm, experts began questioning whether technology’s promises had materialized. In a post on LinkedIn, Pam Kastner, a literacy consultant and adjunct professor at Mount Saint Joseph University, suggests: “Technology is a tool, not a teacher.” She views the cognitive architecture for reading as being built for print.

A well-known critic of the use of smartphones and social media by children, Jonathan Haidt, posted in February: “Putting computers and tablets on students’ desks in K-12 may turn out to be among the costliest mistakes in the history of education.”

The U.S. spent $30 billion in 2024 on laptops, tablets, and other educational technology, 10 times more than on textbooks. Neuroscientist and educator Jared Cooney Horvath has lamented the heavy use of digital devices in education. He has said that Gen Z, persons born roughly between 1997 and 2012 and known for growing up with digital technology as an integral part of their lives, is the first generation in modern history to score lower on cognitive measures than the previous one. In January of this year, he told a Senate committee that this has resulted in a generation of children who are less cognitively capable than their parents.

Whether the U.S. will follow Sweden’s path remains to be seen. Naomi Baron, a professor emerita of linguistics at American University, told Undark she doesn’t see the U.S. turning to Sweden for advice. This is in part because of financial incentives: “First, commercial textbook publishers have been pushing digital materials — heavily for financial reasons, generally ignoring the research comparing comprehension, etc. with print vs. digital reading.” Baron also wrote that American educators themselves are “generally unaware of the now substantial research literature here, and instead focus on saving their students (or school districts) money.” Still, some American educators appear to be aware that digital technology might be making education worse. Teachers seem especially concerned about the possible detrimental effects of overuse of AI.

At the same time, some American par­ents have re­cently started form­ing net­works, teach­ing one an­other how to opt out of school-is­sued lap­tops and de­vices and back into phys­i­cal text­books, along with a re­ver­sion to pen or pen­cil and pa­per. Parents point to ev­i­dence show­ing bet­ter in­for­ma­tion re­ten­tion when pupils read it on pa­per. This re­ac­tion may re­flect a grow­ing back­lash to dig­i­tal tech­nol­ogy in ed­u­ca­tion, dri­ven by con­cerns about pos­si­bly ex­ces­sive screen time and po­ten­tial harms to youth, in­clud­ing pos­si­bly ad­dic­tive dis­trac­tions.

If U.S. educational leaders were to consult their Swedish colleagues, the advice they’d likely get is not to remove digital technology wholesale. “The goal is recalibration rather than reversal,” wrote Fälth. This was echoed in a statement sent to Undark by the Swedish Ministry of Education and Research: “The Swedish government believes that digitalization is fundamentally important and beneficial, but the use of digital tools in schools must be carried out carefully and thoughtfully.”

In other words, the ob­jec­tive is not to re­ject dig­i­tal­iza­tion. It’s more nu­anced than that. The goal is to ju­di­ciously es­tab­lish bound­aries around tech­nol­o­gy’s se­lec­tive and se­quen­tial use over stages of a pupil’s ed­u­ca­tional de­vel­op­ment. This means in­tro­duc­ing dig­i­tal tech­nol­ogy at later ages af­ter ba­sic read­ing and other skills have been achieved.

...

Read the original on undark.org »

4 563 shares, 61 trendiness

How Microsoft Vaporized a Trillion Dollars

This is the first of a se­ries of ar­ti­cles in which you will learn about what may be one of the sil­li­est, most pre­ventable, and most costly mishaps of the 21st cen­tury, where Microsoft all but lost OpenAI, its largest cus­tomer, and the trust of the US gov­ern­ment.

I joined Azure Core on the dull Monday morn­ing of May 1st, 2023, as a se­nior mem­ber of the Overlake R&D team, the folks be­hind the Azure Boost of­fload card and net­work ac­cel­er­a­tor.

I was­n’t new to Azure, hav­ing run what is likely the longest-run­ning pro­duc­tion sub­scrip­tion of this cloud ser­vice, which launched in February 2010 as Windows Azure.

I wasn’t new to Microsoft either, having been part of the Windows team since 1/1/2013 and later helping migrate SharePoint Online to Azure, before joining the Core OS team as a kernel engineer, where I notably helped improve the kernel and helped invent and deliver the Container platform that supports Docker, Azure Kubernetes, Azure Container Instances, Azure App Services, and Windows Sandbox, all shipping technologies that resulted in multiple granted patents.

Furthermore, I con­tributed to brain­storm­ing the early Overlake cards in 2020-2021, draft­ing a pro­posal for a Host OS Accelerator Card com­mu­ni­ca­tion pro­to­col and net­work stack, when all we had was a de­bug­ger’s se­r­ial con­nec­tion. I also served as a Core OS spe­cial­ist, help­ing Azure Core en­gi­neers di­ag­nose deep OS is­sues.

I re­joined in 2023 as an Azure ex­pert on day one, hav­ing con­tributed to the de­vel­op­ment of some of the tech­nolo­gies on which Azure re­lies and hav­ing used the plat­form for more than a decade, both out­side and in­side Microsoft at a global scale.

As a re­turn­ing em­ployee, I skipped the New Employee Orientation and had my Global Security in­vite for 12 noon to pick up my badge, but my fu­ture man­ager asked if I could come in ear­lier, as the team had their monthly plan­ning meet­ing that morn­ing.

I, of course, agreed and ar­rived a few min­utes be­fore 10 am at the en­trance of the Studio X build­ing, not far from The Commons on the West Campus in Redmond. A man showed up in the lobby and opened the door for me. I fol­lowed him to a meet­ing room through a labyrinth of cor­ri­dors.

The room was chock-full, with more peo­ple on a live con­fer­ence call. The dev man­ager, the leads, the ar­chi­tects, the prin­ci­pal and se­nior en­gi­neers shared the space with what ap­peared to be new hires and ju­nior per­son­nel.

The screen pro­jected a slide where I rec­og­nized a num­ber of fa­mil­iar acronyms, like COM, WMI, perf coun­ters, VHDX, NTFS, ETW, and a dozen oth­ers, mixed with new Azure-related ones, in an im­broglio of boxes linked by ar­rows.

I sat qui­etly at the back while a man was walk­ing the room through a big port­ing plan of their cur­rent stack to the Overlake ac­cel­er­a­tor. As I lis­tened, it was not im­me­di­ately clear what that se­ries of boxes with Windows user-mode and ker­nel com­po­nents had to do with that plan.

After a few minutes, I risked a question: Are you planning to port those Windows features to Overlake? The answer was yes, or at least they were looking into it. The dev manager showed some doubt, and the man replied that they could “at least ask a couple of junior devs to look into it.”

The room re­mained silent for an in­stant. I had seen the hard­ware specs for the SoC on the Overlake card in my pre­vi­ous tenure: the RAM ca­pac­ity and the power bud­get, which was just a tiny frac­tion of the TDP you can ex­pect from a reg­u­lar server CPU.

The hard­ware folks I had spo­ken with told me they could only spare 4KB of dual-ported mem­ory on the FPGA for my door­bell shared-mem­ory com­mu­ni­ca­tion pro­to­col.

Everything was nim­ble, ef­fi­cient, and power-savvy, and the team I had joined 10 min­utes ear­lier was se­ri­ously con­sid­er­ing port­ing half of Windows to that tiny, fan­less, Linux-running chip the size of a fin­ger­nail.

That felt like Elon talking about colonizing Mars: just nuke the poles, then grow an atmosphere! Easier said than done, huh?

That en­tire 122-strong org was knee-deep in im­pos­si­ble ru­mi­na­tions in­volv­ing port­ing Windows to Linux to sup­port their ex­ist­ing VM man­age­ment agents.

The man was a Principal Group Engineering Manager over­see­ing a chunk of the soft­ware run­ning on each Azure node; his boss, a Partner Engineering Manager, was in the room with us, and they re­ally con­tem­plated port­ing Windows to Linux to sup­port their cur­rent soft­ware.

At first, I ques­tioned my un­der­stand­ing. Was that se­ri­ous? The rest of the talk left no doubt: the plan was out­lined, and the dev leads were tasked with con­tribut­ing peo­ple to the ef­fort. It was im­me­di­ately clear to me that this plan would never suc­ceed and that the org needed a lot of help.

That first hour in the new role left me with a mix of strange feel­ings, stu­pe­fac­tion, and in­credulity.

The stack, I later learned, was hitting its scaling limits on a 400-watt Xeon at just a few dozen VMs per node, a far cry from the 1,024-VM limit I knew the hypervisor was capable of. It was also a noisy neighbor, consuming so many resources that it caused jitter observable from the customer VMs.

There is no di­men­sion in the uni­verse where this stack would fit on a tiny ARM SoC and scale up by many fac­tors. It was not go­ing to hap­pen.

I have seen a lot in my decades of in­dus­try (and Microsoft) ex­pe­ri­ence, but I had never seen an or­ga­ni­za­tion so far from re­al­ity. My day-one prob­lem was there­fore not to ramp up on new tech­nol­ogy, but rather to con­vince an en­tire org, up to my skip-skip-level, that they were on a death march.

Deep down, I knew it was going to be a fierce uphill battle. As you can imagine, it didn’t go well, as you will later learn.

I spent the next few days read­ing more about the plans, study­ing the cur­rent sys­tems, and vis­it­ing old friends in Core OS, my alma mater. I was lost away from home in a bizarre ter­ri­tory where peo­ple made plans that did­n’t make sense with the aplomb of a drunk LLM.

I no­tably spent more than 90 min­utes chat­ting in per­son with the head of the Linux System Group, a solid scholar with a PhD from INRIA, who was among the folks who hired me on the ker­nel team years ear­lier.

His org is re­spon­si­ble for de­liv­er­ing Mariner Linux (now Azure Linux) and the trimmed-down dis­tro run­ning on the Overlake / Azure Boost card. He kindly an­swered all my ques­tions, and I learned that they had iden­ti­fied 173 agents (one hun­dred sev­enty-three) as can­di­dates for port­ing to Overlake.

I later re­searched this fur­ther and found that no one at Microsoft, not a sin­gle soul, could ar­tic­u­late why up to 173 agents were needed to man­age an Azure node, what they all did, how they in­ter­acted with one an­other, what their fea­ture set was, or even why they ex­isted in the first place.

Azure sells VMs, net­work­ing, and stor­age at the core. Add ob­serv­abil­ity and ser­vic­ing, and you should be good. Everything else, SQL, K8s, AI work­loads, and what­not all build on VMs with xPU, net­work­ing, and stor­age, and the heavy lift­ing to make the magic hap­pen is done by the good Core OS folks and the hy­per­vi­sor.

How the Azure folks came up with 173 agents will prob­a­bly re­main a mys­tery, but it takes a se­ri­ous amount of mis­un­der­stand­ing to get there, and this is also how dis­as­ters are built.

Now, fathom for a second that this pile of “uncontrolled stuff” is orchestrating the VMs running Anthropic’s Claude, what’s left of OpenAI’s APIs on Azure, SharePoint Online, the government clouds and other mission-critical infrastructure, and you’ll be close to understanding how a grain of sand in that fragile pileup can cause a global collapse, with serious National Security implications as well as potential business-ending consequences for Microsoft.

We are still far from the va­por­ized tril­lion in mar­ket cap, my let­ters to the CEO, to the Microsoft Board of Directors, and to the Cloud + AI EVP and their to­tal si­lence, the quasi-loss of OpenAI, the breach of trust with the US gov­ern­ment as pub­licly stated by the Secretary of Defense, the wasted en­gi­neer­ing ef­forts, the Rust man­date, my stint on the OpenAI bare-metal team in Azure Core, the es­cort ses­sions from China and else­where, and the de­layed fea­tures pub­licly im­plied as ship­ping since 2023, be­fore the work even be­gan.

If you’re run­ning pro­duc­tion work­loads on Azure or re­ly­ing on it for mis­sion-crit­i­cal sys­tems, this story mat­ters more than you think.

...

Read the original on isolveproblems.substack.com »

5 483 shares, 23 trendiness

Local AI for Text, Images, and Speech

Chat

What can I do with 128 GB of uni­fied RAM?

Load up mod­els like gpt-oss-120b or Qwen-Coder-Next for ad­vanced tool use.

What should I tune first?

You can use --no-mmap to speed up load times and increase context size to 64 or more.

Image Generation

A pitcher of lemon­ade in the style of a re­nais­sance paint­ing

Speech

Hello, I am your AI as­sis­tant. What can I do for you to­day?

Open source. Private. Ready in min­utes on any PC.

Lemonade ex­ists be­cause lo­cal AI should be free, open, fast, and pri­vate.

Lemonade is in­te­grated in many apps and works out-of-box with hun­dreds more thanks to the OpenAI API stan­dard.
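Because the server follows the OpenAI API standard, any stock client library or plain HTTP request can drive it. A hedged sketch of what an integration looks like (the localhost port and base path here are assumptions, not documented defaults; check your install):

```javascript
// Base URL for a local OpenAI-compatible server. The port and path are
// assumptions for illustration; substitute your install's actual values.
const BASE_URL = "http://localhost:8000/api/v1";

// Build a standard OpenAI-style chat completion payload.
function buildChatRequest(model, userMessage) {
  return {
    model,
    messages: [{ role: "user", content: userMessage }],
    stream: false,
  };
}

// POST the payload to the local server and return the reply text.
async function chat(model, userMessage) {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(model, userMessage)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the payload shape is the standard one, pointing an existing OpenAI-client app at the local base URL is usually all the integration needed.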


Everything from in­stall to run­time is op­ti­mized for fast setup, broad com­pat­i­bil­ity, and lo­cal-first ex­e­cu­tion.

Lightweight ser­vice that is only 2MB.

Simple in­staller that sets up the stack au­to­mat­i­cally.

Works with hun­dreds of apps out-of-box and in­te­grates in min­utes.

Configures de­pen­den­cies for your GPU and NPU.

Works with llama.cpp, Ryzen AI SW, FastFlowLM, and more.

Run more than one model at the same time.

A GUI that lets you down­load, try, and switch mod­els quickly.

...

Read the original on lemonade-server.ai »

6 483 shares, 27 trendiness

Qwen

...

Read the original on qwen.ai »

7 418 shares, 11 trendiness

I Am Not a Number

In mem­ory of the 72,000+ Palestinians killed in the Israeli geno­cide in Gaza.

...

Read the original on bkhmsi.github.io »

8 401 shares, 34 trendiness

How notch traversal works on MacBooks

Tailscale should feel nearly in­vis­i­ble when it’s con­nect­ing you and all your de­vices to­gether. But on some MacBooks, for a time, it could be a lit­tle too in­vis­i­ble. We have two fixes for it: one small and slightly quirky, and an­other re­ally use­ful one, avail­able now on ma­cOS.

The small, quirky fix might soon become a thing of the past for the vast majority of Tailscale users on Macs. I wanted to document it here: to help other developers, to mark this moment in time, and to quietly crow about our windowed macOS interface now being generally available.

So here’s the is­sue we had with Tailscale’s icon slip­ping into dark­ness, its lit­tle work-around, and then our greater so­lu­tion.

At its de­but on ma­cOS, Tailscale was a com­mand-line tool and a menu bar util­ity. Some MacBooks, start­ing with 2021 MacBook Pro mod­els, have a notch in the top-mid­dle of their dis­play. And de­pend­ing on how many other apps with menu bar icons are run­ning, the Tailscale ap­p’s icon can be hid­den in­side that notch.

Apple, a company that traditionally favors simple functionality over dense settings, does not offer users, or developers, a path out of the darkness. If there are more menu bar icons than there is space to the right side of the notch, the menu bar items simply disappear into the notch-y ether. If you don’t see it, you can’t click it. There is no notification to the user, no overflow section, no options to rearrange the menu bar items.

As of this writing, Apple has some indirect work-arounds, like pushing more of its own system icons into a revamped Control Center, and offering a somewhat inelegant “Scale to fit below camera” option. Third-party menu-bar-managing apps like ICE and Bartender can help, but they add complications and overhead.

“We don’t have any control over where things get rendered in the menu bar,” said one Tailscale engineer, who asked to go nameless so as to share their honest opinion. “You just say, ‘I want to be a menu bar app.’ They shove it up there, and that’s it, you end up where you end up.”

Given this there-or-not-there behavior, Tailscale developers received a number of bug reports from users when, after the notched MacBooks’ debut, their Tailscale icons fell into the middle-screen distance. “They were like, ‘Actually, I can’t find my Tailscale. It’s gone. It didn’t start,’” the engineer said. “We’re like, ‘No, it’s there, it’s just hiding behind the notch.’ But we kind of got sick of that.”

Mac menu bar icons may not know they are trapped in­side the no-pixel phan­tom zone, but they can re­port that some­thing is block­ing them. Using data from oc­clu­sion­State, the Tailscale app can see that its icon is in mid-bar limbo.

And while it can­not move, it can speak. Specifically, a pop-up mes­sage can say:

This af­fa­ble warn­ing is not per­fect, by any means. The notch warn­ing can be in­ad­ver­tently trig­gered by other dis­play quirks, like open­ing and clos­ing the MacBook lid, mov­ing be­tween mon­i­tors, or some com­bi­na­tion of the two. But it helped triage the Where are my Tailscale set­tings?” is­sue for a while.

Apple could cer­tainly make some changes to pre­vent this be­ing an is­sue at all. The sys­tem could pre­vent menu bar icons from ren­der­ing in the notch area at all. An over­flow mech­a­nism could stack the icons that would oth­er­wise drop into a neg­a­tive notch zone. Or de­vel­op­ers could be given more in­for­ma­tion and tools about icons’ notch-itive states.

In the mean­time, here’s a look at the Swift code that let our app know it should chirp a bit when hid­den. It should be un­nec­es­sary with the new win­dowed app—un­less you en­able the Hide Dock icon” op­tion in the win­dowed client op­tions, in which case it might still call out its hid­den na­ture.

As we noted at its September beta re­lease, a win­dowed ver­sion of Tailscale’s ma­cOS app does­n’t re­place the menu bar app, but runs along­side it. It can be pulled up from the Dock or a Spotlight search, and makes a lot of Tailscale data and fea­tures more ac­ces­si­ble.

The win­dowed in­ter­face, en­abled by de­fault start­ing with ver­sion 1.96.2 of our ma­cOS client, of­fers:

* A search­able list of tail­net de­vices and their con­nec­tion sta­tus

* Easily ping, copy IP ad­dresses, and send files through Taildrop to de­vices

* Easy ac­cess to exit nodes, search­able and with one rec­om­mended based on la­tency, per­for­mance, and lo­ca­tion

* A red dot on the Dock icon to note crit­i­cal er­rors

* A mini player” that shrinks Tailscale down to the bare min­i­mum

* A prod­uct tour of all these things upon in­stalling/​up­dat­ing

Let us know what you think of the new in­ter­face so we can make it bet­ter. We’re work­ing on a com­pa­ra­ble UI for Windows de­vices. And we’re al­ways look­ing for ways to bring a lit­tle bit of func­tional whimsy to our soft­ware.

...

Read the original on tailscale.com »

9 394 shares, 0 trendiness

CERN levels up with new superconducting karts

The race is on to test new ve­hi­cles in the un­der­ground Large Hadron Collider tun­nel, ahead of ma­jor works start­ing this sum­mer

The race is on to test new ve­hi­cles in the un­der­ground Large Hadron Collider tun­nel, ahead of ma­jor works start­ing this sum­mer

Update: did you en­joy our April Fool’s day story? While we won’t be rac­ing karts through the tun­nel, we are gear­ing up for ma­jor works to pre­pare for HiLumi LHC and its new tech­nolo­gies. The im­age is based on a real 1991 CERN im­age of the mono­rail used to trans­port peo­ple and equip­ment in the tun­nel dur­ing the life­time of the Large Electron-Positron Collider (LEP), which pre­ceded the LHC.

Following on from the ro­botic mice, CERN en­gi­neers have now de­vel­oped a su­per-charged kart to en­able work­ers to race through the Large Hadron Collider (LHC) un­der­ground tun­nel dur­ing the up­com­ing ma­jor works, start­ing this sum­mer.

The karts promise a power boost to ac­tiv­i­ties dur­ing this pe­riod, known as Long Shutdown 3 (LS3), which will see the LHC trans­formed into the High-Luminosity LHC. These ve­hi­cles will re­place the bi­cy­cles that were used un­til now to travel through the 27-km un­der­ground tun­nel, en­abling en­gi­neers and tech­ni­cians to speed to ar­eas where im­prove­ments to the ac­cel­er­a­tor are re­quired.

“Each kart is turbo-boosted by 64 superconducting engines,” explains project leader Mario Idraulico. “When the engines are cooled to below their critical temperatures, the Meissner effect levitates the karts, allowing them to zip through the tunnels at high speeds and, mamma mia, they’re super!”

Early tests have been promising, and the next steps involve testing different kart designs in an underground race. Safety coordinator Luigi Fratello has ensured that each driver will be issued with Safety and Health Equipment for Long and Limited Stays (SHELLS), although his response to drivers wanting bananas in the tunnel was “Oh no!”

These karts, although developed to support CERN’s fundamental research programme, show clear applications for society. CERN’s Knowledge Transfer Group has begun discussions with European startup company Quantum Mushroom to explore aerospace applications and powering for next-generation anti-gravity vehicles.

Surprisingly, the kart project began from a collaboration between CERN engineers and onsite nursery school children — one example of CERN’s commitment to inspiring future generations. “We’re thrilled that the children’s kart designs were the inspiration for the engineered karts,” exclaimed schoolteacher Yoshi Kyouryuu, mid-way through painting spots on eggs for an Easter egg hunt.

“As educators, we promote curiosity from a young age, which is why we paint question marks all over our yellow school walls,” explained school director Rosalina Pfirsich, looking up from her storybook. “With all the contributions the children have made to the upcoming High-Luminosity LHC project, we’ve taken to calling them Luma!”

Find out more about the High-Luminosity LHC pro­ject.

...

Read the original on home.web.cern.ch »

10 368 shares, 24 trendiness

niki grayson (@nikigrayson.com)

right now the astronauts are calling houston because the computer on the spaceship is running two instances of microsoft outlook and they can’t figure out why. nasa is about to remote into the computer

...

Read the original on bsky.app »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.