10 interesting stories served every morning and every evening.




1 682 shares, 34 trendiness

Why are 38 percent of Stanford students saying they're disabled?

The stu­dents at America’s elite uni­ver­si­ties are sup­posed to be the smartest, most promis­ing young peo­ple in the coun­try. And yet, shock­ing per­cent­ages of them are claim­ing aca­d­e­mic ac­com­mo­da­tions de­signed for stu­dents with learn­ing dis­abil­i­ties.

In an ar­ti­cle pub­lished this week in The Atlantic, ed­u­ca­tion re­porter Rose Horowitch lays out some shock­ing num­bers. At Brown and Harvard, 20 per­cent of un­der­grad­u­ate stu­dents are dis­abled. At Amherst College, that’s 34 per­cent. At Stanford University, it’s a galling 38 per­cent. Most of these stu­dents are claim­ing men­tal health con­di­tions and learn­ing dis­abil­i­ties, like anx­i­ety, de­pres­sion, and ADHD.

Obviously, some­thing is off here. The idea that some of the most elite, se­lec­tive uni­ver­si­ties in America—schools that re­quire 99th per­centile SATs and ster­ling es­says—would be ed­u­cat­ing large num­bers of gen­uinely learn­ing dis­abled stu­dents is clearly bo­gus. A stu­dent with real cog­ni­tive strug­gles is much more likely to end up in com­mu­nity col­lege, or not in higher ed­u­ca­tion at all, right?

The professors Horowitch interviewed largely back up this theory. “You hear ‘students with disabilities’ and it’s not kids in wheelchairs,” one professor told Horowitch. “It’s just not. It’s rich kids getting extra time on tests.” Talented students get to college, start struggling, and run for a diagnosis to avoid bad grades. Ironically, the very schools that cognitively challenged students are most likely to attend—community colleges—have far lower rates of disabled students, with only three to four percent of such students getting accommodations.

To be fair, some of the stu­dents re­ceiv­ing these ac­com­mo­da­tions do need them. But the cur­rent lan­guage of the Americans with Disabilities Act (ADA) al­lows stu­dents to get ex­pan­sive ac­com­mo­da­tions with lit­tle more than a doc­tor’s note.

While some students are no doubt seeking these accommodations as semi-conscious cheaters, I think most genuinely identify with the mental health condition they’re using to get extra time on tests. Over the past few years, there’s been a rising push to see mental health and neurodevelopmental conditions as not just a medical fact, but an identity marker. Will Lindstrom, the director of the Regents’ Center for Learning Disorders at the University of Georgia, told Horowitch that he sees a growing number of students with this perspective. “It’s almost like it’s part of their identity,” Lindstrom told her. “By the time we see them, they’re convinced they have a neurodevelopmental disorder.”

What’s dri­ving this trend? Well, the way con­di­tions like ADHD, autism, and anx­i­ety get talked about on­line—the place where most young peo­ple first learn about these con­di­tions—is prob­a­bly a con­tribut­ing fac­tor. Online cre­ators tend to paint a very broad pic­ture of the con­di­tions they de­scribe. A quick scroll of TikTok re­veals cre­ators la­bel­ing every­thing from al­ways wear­ing head­phones, to be­ing bad at man­ag­ing your time, to doo­dling in class as a sign that some­one may have a di­ag­nos­able con­di­tion. According to these videos, who is­n’t dis­abled?

The result is a deeply distorted view of “normal.” If ever struggling to focus or experiencing boredom is a sign you have ADHD, the implication is that a “normal,” nondisabled person has essentially no problems. A “neurotypical” person, the thinking goes, can churn out a 15-page paper with no hint of procrastination, maintain perfect focus during a boring lecture, and never experience social anxiety or awkwardness. This view is buttressed by the current way many of these conditions are diagnosed. As Horowitch points out, when the latest edition of the DSM, the manual psychiatrists use to diagnose patients, was released in 2013, it significantly lowered the bar for an ADHD diagnosis. When the definition of these conditions is set so liberally, it’s easy to imagine a highly intelligent Stanford student becoming convinced that any sign of academic struggle proves they’re learning disabled, and any problems making friends are a sign they have autism.

Risk-aversion, too, seems like a com­pelling fac­tor dri­ving bright stu­dents to claim learn­ing dis­abil­i­ties. Our na­tion’s most promis­ing stu­dents are also its least as­sured. So afraid of fail­ure—of bad grades, of a poorly-re­ceived es­say—they take any sign of strug­gle as a di­ag­nos­able con­di­tion. A few decades ago, a stu­dent who en­tered col­lege and found the ma­te­r­ial harder to mas­ter and their time less eas­ily man­aged than in high school would have been seen as rel­a­tively nor­mal. Now, every time she picks up her phone, a bar­rage of in­flu­encers is clam­or­ing to tell her this is a sign she has ADHD. Discomfort and dif­fi­culty are no longer per­ceived as typ­i­cal parts of grow­ing up.

In this context, it’s easy to read the rise of academic accommodations among the nation’s most intelligent students as yet another manifestation of the risk-aversion endemic in the striving children of the upper middle class. For most of the elite-college students who receive them, academic accommodations are a protection against failure and self-doubt. Unnecessary accommodations are a two-front form of cheating—they give you an unjust leg-up on your fellow students, but they also allow you to cheat yourself out of genuine intellectual growth. If you mask learning deficiencies with extra time on tests, soothe social anxiety by forgoing presentations, and neglect time management skills with deadline extensions, you might forge a path to better grades. But you’ll also find yourself less capable of tackling the challenges of adult life.

...

Read the original on reason.com »

2 467 shares, 19 trendiness

Transparent Leadership Beats Servant Leadership

Parenting and leadership are similar. Teach a man to fish, etc.

I spent a couple of years managing a team, and I entered that role — like many — without knowing anything about how to do it. I tried to figure out how to be a good manager, and doing so I ended up reading a lot about servant leadership. It never quite sat right with me, though. Servant leadership seems to me a lot like curling parenting: the leader/parent anticipates problems and sweeps the way for their direct reports/children.

To be clear, this probably feels very good (initially, anyway) for the direct reports/children. But the servant leader/curling parent quickly becomes an overworked single point of failure, and once they leave there is nobody else who knows how to handle the obstacles the leader moved out of the way for everyone. In the worst cases, they leave behind a group of people who have been completely isolated from the rest of the organisation, and have no idea what their purpose is or how to fit in with the rest of the world.

I would like to invent my own buzzword: transparent leadership. In my book, a good leader

- explains values and principles embraced by the organisation to aid their reports in making aligned decisions on their own,
- creates direct links between supply and demand (instead of deliberately making themselves a middle man), and
- allows their direct reports career growth by gradually handing over leadership responsibilities.

The mid­dle man­ager that does­n’t per­form any use­ful work is a fun stereo­type, but I also think it’s a good tar­get to aim for. The dif­fer­ence lies in what to do once one has ren­dered one­self re­dun­dant. A com­mon re­sponse is to in­vent new work, ask for sta­tus re­ports, and add bu­reau­cracy.

A bet­ter re­sponse is to go back to work­ing on tech­ni­cal prob­lems. This keeps the man­ager’s skills fresh and gets them more re­spect from their re­ports. The man­ager should turn into a high-pow­ered spare worker, rather than a pa­per-shuf­fler.

...

Read the original on entropicthoughts.com »

3 412 shares, 16 trendiness

Microsoft cuts AI sales targets in half after salespeople miss their quotas

Microsoft has low­ered sales growth tar­gets for its AI agent prod­ucts af­ter many sales­peo­ple missed their quo­tas in the fis­cal year end­ing in June, ac­cord­ing to a re­port Wednesday from The Information. The ad­just­ment is re­port­edly un­usual for Microsoft, and it comes af­ter the com­pany missed a num­ber of am­bi­tious sales goals for its AI of­fer­ings.

AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”

The com­pany has promised cus­tomers that agents could au­to­mate com­plex tasks, such as gen­er­at­ing dash­boards from sales data or writ­ing cus­tomer re­ports. At its Ignite con­fer­ence in November, Microsoft an­nounced new fea­tures like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for build­ing and de­ploy­ing agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to de­liver than the com­pany ex­pected.

According to The Information, one US Azure sales unit set quo­tas for sales­peo­ple to in­crease cus­tomer spend­ing on a prod­uct called Foundry, which helps cus­tomers de­velop AI ap­pli­ca­tions, by 50 per­cent. Less than a fifth of sales­peo­ple in that unit met their Foundry sales growth tar­gets. In July, Microsoft low­ered those tar­gets to roughly 25 per­cent growth for the cur­rent fis­cal year. In an­other US Azure unit, most sales­peo­ple failed to meet an ear­lier quota to dou­ble Foundry sales, and Microsoft cut their quo­tas to 50 per­cent for the cur­rent fis­cal year.

...

Read the original on arstechnica.com »

4 391 shares, 13 trendiness

RAM is so expensive, Samsung won't even sell it to Samsung


Due to ris­ing prices from the AI bub­ble, Samsung Semiconductor re­port­edly re­fused a RAM or­der for new Galaxy phones from Samsung Electronics.

The price of eggs has noth­ing on the price of com­puter mem­ory right now. Thanks to a sup­ply crunch from the AI bub­ble, RAM chips are the new gold, with prices on con­sumer PC mem­ory kits bal­loon­ing out of con­trol. In an ob­ject les­son in the ridicu­lous­ness of an eco­nomic bub­ble, Samsung won’t even sell its mem­ory to… Samsung.

Here’s the sit­u­a­tion. Samsung makes every­thing from re­frig­er­a­tors to su­per­mas­sive oil tankers. Getting all that stuff made re­quires an or­ga­ni­za­tion that’s lit­er­ally dozens of af­fil­i­ated com­pa­nies and sub­sidiaries, which don’t nec­es­sar­ily work as closely or har­mo­niously as you might as­sume. For this story, we’re talk­ing about Samsung Electronics, which makes Galaxy phones, tablets, lap­tops, watches, etc., and Samsung Semiconductor Global, which man­u­fac­tures mem­ory and other chips and sup­plies the global mar­ket. That global mar­ket in­cludes both Samsung sub­sidiaries and their com­peti­tors—lap­tops from Samsung, Dell, and Lenovo sit­ting on a Best Buy store shelf might all have Samsung-manufactured mem­ory sit­ting in their RAM slots.

Samsung sub­sidiaries are, nat­u­rally, go­ing to look to Samsung Semiconductor first when they need parts. Such was re­port­edly the case for Samsung Electronics, in search of mem­ory sup­plies for its newest smart­phones as the com­pany ramps up pro­duc­tion for 2026 flag­ship de­signs. But with so much RAM hard­ware go­ing into new AI data cen­ters—and those com­pa­nies will­ing to pay top dol­lar for their hard­ware—mem­ory man­u­fac­tur­ers like Samsung, SK Hynix, and Micron are pri­or­i­tiz­ing data cen­ter sup­pli­ers to max­i­mize prof­its.

The end result, according to a report from SE Daily spotted by SamMobile, is that Samsung Semiconductor rejected the original order for smartphone DRAM chips from Samsung Electronics’ Mobile Experience division. The smartphone manufacturing arm of the company had hoped to nail down pricing and supply for another year. But reports say that due to “chipflation,” the phone-making division must renegotiate quarterly, with a long-term supply deal rejected by its corporate sibling. A short-term deal, with higher prices, was reportedly hammered out.

Assuming that this in­for­ma­tion is ac­cu­rate—and to be clear, we can’t in­de­pen­dently con­firm it—con­sumers will see prices rise for Samsung phones and other mo­bile hard­ware. But that’s hardly a sur­prise. Finished elec­tron­ics prob­a­bly won’t see the same me­te­oric rise in prices as con­sumer-grade RAM mod­ules, but this ris­ing tide is flood­ing all the boats. Raspberry Pi, which strives to keep its mod-friendly elec­tron­ics as cheap as pos­si­ble, has re­cently had to bring prices up and called out mem­ory costs as the cul­prit. Lenovo, the world’s largest PC man­u­fac­turer, is stock­pil­ing mem­ory sup­plies as a bul­wark against the mar­ket.

But if you’re hop­ing to see prices lower in 2026, don’t hold your breath. According to a fore­cast from mem­ory sup­plier TeamGroup, com­po­nent prices have tripled re­cently, caus­ing fin­ished mod­ules to jump in prices as quickly as 100 per­cent in a month. Absent some kind of dis­as­trous mar­ket col­lapse, prices are ex­pected to con­tinue ris­ing into next year, and sup­ply could re­main con­strained well into 2027 or later.


...

Read the original on www.pcworld.com »

5 382 shares, 18 trendiness

Thoughts on Go vs. Rust vs. Zig

...

Read the original on sinclairtarget.com »

6 365 shares, 1 trendiness

Anthropic taps IPO lawyers as it races OpenAI to go public


Anthropic has tapped law firm Wilson Sonsini to be­gin work on one of the largest ini­tial pub­lic of­fer­ings ever, which could come as soon as 2026, as the ar­ti­fi­cial in­tel­li­gence start-up races OpenAI to the pub­lic mar­ket.

The maker of the Claude chat­bot, which is in talks for a pri­vate fund­ing round that would value it at more than $300bn, chose the US west coast law firm in re­cent days, ac­cord­ing to two peo­ple with knowl­edge of the de­ci­sion.

The start-up, led by chief ex­ec­u­tive Dario Amodei, had also dis­cussed a po­ten­tial IPO with big in­vest­ment banks, ac­cord­ing to mul­ti­ple peo­ple with knowl­edge of those talks. The peo­ple char­ac­terised the dis­cus­sions as pre­lim­i­nary and in­for­mal, sug­gest­ing that the com­pany was not close to pick­ing its IPO un­der­writ­ers.

Nonetheless, these moves rep­re­sent a sig­nif­i­cant step up in Anthropic’s prepa­ra­tions for an IPO that would test the ap­petite of pub­lic mar­kets to back the mas­sive, loss­mak­ing re­search labs at the heart of the AI boom.

Wilson Sonsini has ad­vised Anthropic since 2022, in­clud­ing on com­mer­cial as­pects of multi­bil­lion-dol­lar in­vest­ments from Amazon, and has worked on high-pro­file tech IPOs such as Google, LinkedIn and Lyft.

Its in­vestors are en­thu­si­as­tic about an IPO, ar­gu­ing that Anthropic can seize the ini­tia­tive from its larger ri­val OpenAI by list­ing first.

Anthropic could be pre­pared to list in 2026, ac­cord­ing to one per­son with knowl­edge of its plans. Another per­son close to the com­pany cau­tioned that an IPO so soon was un­likely.

“It’s fairly standard practice for companies operating at our scale and revenue level to effectively operate as if they are publicly traded companies,” said an Anthropic spokesperson. “We haven’t made any decisions about when or even whether to go public, and don’t have any news to share at this time.”

OpenAI was also un­der­tak­ing pre­lim­i­nary work to ready it­self for a pub­lic of­fer­ing, ac­cord­ing to peo­ple with knowl­edge of its plans, though they cau­tioned it was too soon to set even an ap­prox­i­mate date for a list­ing.

But both com­pa­nies may also be ham­pered by the fact that their rapid growth and the as­tro­nom­i­cal costs of train­ing AI mod­els make their fi­nan­cial per­for­mance dif­fi­cult to fore­cast.

The pair will also be at­tempt­ing IPOs at val­u­a­tions that are un­prece­dented for US tech start-ups. OpenAI was val­ued at $500bn in October. Anthropic re­ceived a $15bn com­mit­ment from Microsoft and Nvidia last month, which will form part of a fund­ing round ex­pected to value the group be­tween $300bn and $350bn.

Anthropic had been work­ing through an in­ter­nal check­list of changes re­quired to go pub­lic, ac­cord­ing to one per­son fa­mil­iar with the process.

The San Francisco-headquartered start-up hired Krishna Rao, who worked at Airbnb for six years and was in­stru­men­tal in that com­pa­ny’s IPO, as chief fi­nan­cial of­fi­cer last year.

Wilson Sonsini did not re­spond to a re­quest for com­ment.

...

Read the original on www.ft.com »

7 355 shares, 12 trendiness

The RAM Shortage Comes for Us All

Memory price in­fla­tion comes for us all, and if you’re not af­fected yet, just wait.

I was build­ing a new PC last month us­ing some parts I had bought ear­lier this year. The 64 Gigabyte T-Create DDR5 mem­ory kit I used cost $209 then. Today? The same kit costs $650!

Just in the past week, we found out Raspberry Pi is increasing its single board computer prices. Micron’s killing the Crucial brand of RAM and storage devices completely, meaning there’s gonna be one fewer consumer memory manufacturer. Samsung can’t even buy RAM from itself to build its own smartphones, and small vendors like Libre Computer and Mono are seeing RAM prices double, triple, or even worse—and they’re not even buying the latest RAM tech!

I think PC builders might be the first crowd to get impacted across the board—just look at these insane graphs from PCPartPicker, showing RAM prices going from like $30 to $120 for DDR4, or like $150 to $500 for 64 gigs of DDR5.

But the im­pacts are only just start­ing to hit other mar­kets.

Libre Computer men­tioned on Twitter a sin­gle 4 gi­ga­byte mod­ule of LPDDR4 mem­ory costs $35. That’s more ex­pen­sive than every other com­po­nent on one of their sin­gle board com­put­ers com­bined! You can’t sur­vive sell­ing prod­ucts at a loss, so once the cur­rent pro­duc­tion batches are sold through, ei­ther prices will be in­creased, or cer­tain prod­uct lines will go out of stock.

The smaller the com­pany, the worse the price hit will be. Even Raspberry Pi, who I’m sure has a lit­tle more mar­gin built in, al­ready raised SBC prices (and in­tro­duced a 1 GB Pi 5—maybe a good ex­cuse for de­vel­op­ers to drop Javascript frame­works and pro­gram for lower mem­ory re­quire­ments again?).

Cameras, gam­ing con­soles, tablets, al­most any­thing that has mem­ory will get hit sooner or later.

I can’t be­lieve I’m say­ing this, but com­pared to the cur­rent mar­ket, Apple’s in­sane mem­ory up­grade pric­ing is… ac­tu­ally in line with the rest of the in­dus­try.

The rea­son for all this, of course, is AI dat­a­cen­ter build­outs. I have no clue if there’s any price fix­ing go­ing on like there was a few decades ago—that’s some­thing con­spir­acy the­o­rists can de­bate—but the prob­lem is there’s only a few com­pa­nies pro­duc­ing all the world’s mem­ory sup­plies.

And those com­pa­nies all re­al­ized they can make bil­lions more dol­lars mak­ing RAM just for AI dat­a­cen­ter prod­ucts, and ne­glect the rest of the mar­ket.

So they’re shut­ting down their con­sumer mem­ory lines, and de­vot­ing all pro­duc­tion to AI.

Even companies like GPU board manufacturers are getting shafted; Nvidia’s not giving memory to them along with their chips like they used to, basically telling them “good luck, you’re on your own for VRAM now!”

Which is es­pe­cially rich, be­cause Nvidia’s prof­it­ing ob­scenely off of all this stuff.

That’s all bad enough, but some people see a silver lining. I’ve seen some people say “well, once the AI bubble bursts, at least we’ll have a ton of cheap hardware flooding the market!”

And yes, in past decades, that might be one out­come.

But the prob­lem here is the RAM they’re mak­ing, a ton of it is ei­ther in­te­grated into spe­cial­ized GPUs that won’t run on nor­mal com­put­ers, or be­ing fit­ted into spe­cial types of mem­ory mod­ules that don’t work on con­sumer PCs, ei­ther. (See: HBM).

That, and the GPUs and servers be­ing de­ployed now don’t even run on nor­mal power and cool­ing, they’re part of mas­sive sys­tems that would take a ton of ef­fort to get run­ning in even the most well-equipped home­labs. It’s not like the clas­sic Dell R720 that just needs some air and a wall out­let to run.

That is to say, we might be hitting a weird era where the PC building hobby is gutted, SBCs get prohibitively expensive, and anyone who didn’t stockpile parts earlier this year is pretty much left in the lurch.

Even Lenovo ad­mits to stock­pil­ing RAM, mak­ing this like the toi­let pa­per sit­u­a­tion back in 2020, ex­cept for mas­sive cor­po­ra­tions. Not enough sup­ply, so com­pa­nies who can af­ford to get some will buy it all up, hop­ing to stave off the short­ages that will prob­a­bly last longer, partly be­cause of that stock­pil­ing.

I don’t think it’s completely outlandish to think some companies will start scavenging memory chips (à la dosdude1) off other systems for stock, especially if RAM prices keep going up.

It’s ei­ther that, or just stop mak­ing prod­ucts. There are some echoes to the global chip short­ages that hit in 2021-2022, and that re­ally shook up the mar­ket for smaller com­pa­nies.

I hate to see it hap­pen­ing again, but some­how, here we are a few years later, ex­cept this time, the AI bub­ble is to blame.

Sorry for not hav­ing a pos­i­tive note to end this on, but I guess… maybe it’s a good time to dig into that pile of old pro­jects you never fin­ished in­stead of buy­ing some­thing new this year.

How long will this last? That’s any­body’s guess. But I’ve al­ready put off some pro­jects I was gonna do for 2026, and I’m sure I’m not the only one.

...

Read the original on www.jeffgeerling.com »

8 339 shares, 17 trendiness

Django 6.0 release notes

These re­lease notes cover the new fea­tures, as well as some back­wards in­com­pat­i­ble changes you should be aware of when up­grad­ing from Django 5.2 or ear­lier. We’ve

be­gun the dep­re­ca­tion process for some fea­tures.

See the How to up­grade Django to a newer ver­sion guide if you’re up­dat­ing an ex­ist­ing pro­ject.

The Django 5.2.x se­ries is the last to sup­port Python 3.10 and 3.11.

Django 6.0 sup­ports Python 3.12, 3.13, and 3.14. We highly rec­om­mend, and only of­fi­cially sup­port, the lat­est re­lease of each se­ries.

Following the re­lease of Django 6.0, we sug­gest that third-party app au­thors drop sup­port for all ver­sions of Django prior to 5.2. At that time, you should be able to run your pack­age’s tests us­ing python -Wd so that dep­re­ca­tion warn­ings ap­pear. After mak­ing the dep­re­ca­tion warn­ing fixes, your app should be com­pat­i­ble with Django 6.0.


Built-in support for the Content Security Policy (CSP) standard is now available, making it easier to protect web applications against content injection attacks such as cross-site scripting (XSS). CSP allows declaring trusted sources of content by giving browsers strict rules about which scripts, styles, images, or other resources can be loaded.

CSP policies can now be enforced or monitored directly using built-in tools: headers are added via the ContentSecurityPolicyMiddleware, nonces are supported through the csp() context processor, and policies are configured using the SECURE_CSP and SECURE_CSP_REPORT_ONLY settings.

These set­tings ac­cept Python dic­tio­nar­ies and sup­port Django-provided con­stants for clar­ity and safety. For ex­am­ple:

    from django.utils.csp import CSP

    SECURE_CSP = {
        "default-src": [CSP.SELF],
        "script-src": [CSP.SELF, CSP.NONCE],
        "img-src": [CSP.SELF, "https:"],
    }

The resulting Content-Security-Policy header would be set to:

    default-src 'self'; script-src 'self' 'nonce-SECRET'; img-src 'self' https:
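The mapping from the settings dictionary to the header value can be sketched in a few lines. The function below is a toy illustration of that serialization, not Django's actual implementation, and it assumes the CSP constants have already been rendered into their quoted keyword forms ('self', 'nonce-SECRET'):

```python
def build_csp_header(policy):
    # Serialize a CSP policy dict into a Content-Security-Policy value:
    # each directive becomes "name source1 source2 ...", and directives
    # are joined with "; " in insertion order.
    return "; ".join(
        f"{directive} {' '.join(sources)}"
        for directive, sources in policy.items()
    )

policy = {
    "default-src": ["'self'"],
    "script-src": ["'self'", "'nonce-SECRET'"],
    "img-src": ["'self'", "https:"],
}
header = build_csp_header(policy)
# default-src 'self'; script-src 'self' 'nonce-SECRET'; img-src 'self' https:
```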

To get started, follow the CSP how-to guide. For in-depth guidance, see the CSP security overview and the reference docs, which include details about decorators to override or disable policies on a per-view basis.

Django now in­cludes a built-in Tasks frame­work for run­ning code out­side the HTTP re­quest–re­sponse cy­cle. This en­ables of­fload­ing work, such as send­ing emails or pro­cess­ing data, to back­ground work­ers.

The frame­work pro­vides task de­f­i­n­i­tion, val­i­da­tion, queu­ing, and re­sult han­dling. Django guar­an­tees con­sis­tent be­hav­ior for cre­at­ing and man­ag­ing tasks, while the re­spon­si­bil­ity for run­ning them con­tin­ues to be­long to ex­ter­nal worker processes.

Tasks are de­fined us­ing the task() dec­o­ra­tor:

    from django.core.mail import send_mail
    from django.tasks import task


    @task
    def email_users(emails, subject, message):
        return send_mail(subject, message, None, emails)

Once de­fined, tasks can be en­queued through a con­fig­ured back­end:

    email_users.enqueue(
        emails=["user@example.com"],
        subject="You have a message",
        message="Hello there!",
    )

Backends are configured via the TASKS setting. The two built-in backends included in this release are primarily intended for development and testing.

Django han­dles task cre­ation and queu­ing, but does not pro­vide a worker mech­a­nism to run tasks. Execution must be man­aged by ex­ter­nal in­fra­struc­ture, such as a sep­a­rate process or ser­vice.
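That division of labor — the framework enqueues, an external worker executes — can be illustrated with a toy in-process version. This is plain Python, not Django's Tasks API; a real deployment would use a durable queue and a separate worker process:

```python
import queue

task_queue = queue.Queue()

def task(func):
    # Toy stand-in for a task decorator: attach an enqueue() helper
    # that records the call on a queue instead of running it.
    def enqueue(**kwargs):
        task_queue.put((func, kwargs))
    func.enqueue = enqueue
    return func

@task
def email_users(emails, subject, message):
    return f"sent {subject!r} to {len(emails)} address(es)"

# "Web" side: enqueue the work without executing it.
email_users.enqueue(emails=["user@example.com"], subject="Hi", message="Hello!")

# "Worker" side: pull one task off the queue and run it.
func, kwargs = task_queue.get()
result = func(**kwargs)
# result == "sent 'Hi' to 1 address(es)"
```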

See Django’s Tasks frame­work for an overview and the Tasks ref­er­ence for API de­tails.

Email handling in Django now uses Python’s modern email API, introduced in Python 3.6. This API, centered around the email.message.EmailMessage class, offers a cleaner and Unicode-friendly interface for composing and sending emails. It replaces use of Python’s older legacy (Compat32) API, which relied on lower-level MIME classes (from email.mime) and required more manual handling of message structure and encoding.

Notably, the return type of the EmailMessage.message() method is now an instance of Python’s email.message.EmailMessage. This supports the same API as the previous SafeMIMEText and SafeMIMEMultipart return types, but is not an instance of those now-deprecated classes.
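For reference, the modern standard-library API the release notes refer to looks like this (plain Python, no Django required):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Hello"

# set_content() creates a text/plain body; add_alternative() then
# converts the message into multipart/alternative automatically.
msg.set_content("Plain-text body")
msg.add_alternative("<p>HTML body</p>", subtype="html")

print(msg["Subject"])          # Hello
print(msg.get_content_type())  # multipart/alternative
```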

...

Read the original on docs.djangoproject.com »

9 304 shares, 22 trendiness

2021 > PHEV BMW iBMUCP 21F37E Post-Crash Recovery — When EU engineering becomes a synonym for “unrepairable” + "generating waste".



If you own a BMW PHEV — or if you’re an insurance company — every pothole, every curb impact, every incident small or large, and even any rabbit jumping out of a bush represents a potential €5,000 cost, just for a single blown fuse inside the high-voltage battery system.

This “safety fuse” is designed to shut the system down the moment any crash event is detected. Sounds safe — but extremely expensive. Theoretically, insurance for a BMW PHEV should be 3x higher than for an ICE vehicle or EV.

Unfortunately, that’s not the only is­sue.

BMW has over-engineered the diagnostic procedure to such a level that even their own technicians often do not know the correct replacement process. And it gets worse: the original iBMUCP module, which integrates the pyrofuse, contactors, BMS and internal copper-bonded circuitry, is fully welded shut. There are no screws, no service openings, and it is not designed to be opened, even though the pyrofuse and contactors are technically replaceable components. Additionally, the procedure requires flashing the entire vehicle both before and after the replacement, which adds several hours to the process and increases the risk of bricked components, which can increase the recovery cost by a factor of 10.

But that is still not the only prob­lem.

Even af­ter we man­aged to open the unit and ac­cess every­thing in­side, we dis­cov­ered that the Infineon TC375 MCU is fully locked. Both the D-Flash sec­tors and crash-flag ar­eas are un­read­able via DAP or via se­r­ial ac­cess.

Meaning: even if you re­place the py­ro­fuse, you still can­not clear the crash flag, be­cause the TC375 is cryp­to­graph­i­cally locked.

This leaves only one method:

➡️ Replace the entire iBMUCP module with a brand-new one (€1,100 + tax, for a faulty fuse).

And the registration of the new component is easily one of the worst procedures we have ever seen. You need an ICOM, IMIB, and AOS subscription — totalling over €25,000 in tools — just to replace a fuse. (Even though we managed to activate this one with the IMIB alone, the full toolchain will be necessary in some situations.)

Yes, you read that correctly: €25,000.

A lot of vehicles designed and produced in Europe (ICE, PHEV, and EV) have effectively become a misleading ECO exercise. Vehicles marketed as "CO₂-friendly" end up producing massive CO₂ footprints through forced services, throw-away components, high failure rates, unnecessary parts-manufacturing cycles, and overcomplicated service procedures, far larger than what the public is told. If we are destroying our ICE automotive industry based on EURO norms, who is calculating the real ECO footprint of replacement-part manufacturing, unnecessary servicing, and real waste cost?

We saw this years ago on diesel and petrol cars:

DPF fail­ures, EGR valves, high-pres­sure pumps, tim­ing belts run­ning in oil, low qual­ity au­to­matic trans­mis­sions, and lu­bri­ca­tion sys­tem de­fects. Everyone cal­cu­lates the CO₂ foot­print of a mov­ing ve­hi­cle — no­body cal­cu­lates the CO₂ foot­print of a ve­hi­cle that is con­stantly bro­ken and cre­at­ing waste.

ISTA's official iBMUCP replacement procedure is so risky that if you miss one single step, poorly explained within ISTA, the system triggers the ANTITHEFT LOCK.

This causes the bal­anc­ing con­troller to wipe and lock mod­ules.

Meaning: even in an authorised service centre, the system can accidentally delete the configuration, and the car can end up needing not only a new iBMUCP but also all-new battery modules.

Yes — re­plac­ing a fuse can ac­ci­den­tally trig­ger the re­place­ment of all healthy HV mod­ules, cost­ing €6,000+ VAT per mod­ule, plus a mas­sive un­known CO₂ foot­print.

This has al­ready hap­pened to sev­eral work­shops in the re­gion.

The next prob­lem: BMW re­fuses to pro­vide train­ing ac­cess for ISTA us­age. We sub­mit­ted two of­fi­cial cer­ti­fi­ca­tion re­quests — both were re­jected by the cen­tral of­fice in Austria, which is bor­der­line dis­crim­i­na­tory.

One more problem: battery erasure can happen in an OEM workshop, in ours, or in any other third-party workshop, but if the procedure was started in workshop 1, it can't be continued in workshop 2. If battery damage happens in our workshop during a fuse change, and a battery swap is then needed, neither we nor even an OEM workshop cover the cost of a completely new battery pack. This heavily increases ownership costs.

All of this rep­re­sents un­nec­es­sary com­plex­ity with no mean­ing­ful pur­pose.

While Tesla's pyrofuse costs €11 and the BMS reset is around €50, allowing the car to be safely restored, BMW's approach borders on illogical engineering, with no benefit to safety and no benefit to anti-theft protection; the only outcome is the generation of billable labour hours and massive amounts of needless electronic/lithium waste.

Beyond that, we are actively working on breaking the JTAG/DAP protection to gain direct access to the D-Flash data and decrypt its contents together with our colleagues from Hungary. The goal is to simplify the entire battery-recovery procedure, reduce costs, and actually deliver the CO₂ reduction that the EU keeps promising, since the manufacturers clearly won't.

Fault code 21F35B, high-voltage battery unit, voltage and electric current sensor: "Counter for the reuse of cell modules exceeded (safety function)".

OEM service cost: €4,000 + tax (approx.; if you have a BMW quote, send it)

OEM iBMUCP: €1,100 + tax

Labor hours: 24h to 50h

EVC: €2,500 + tax (full service)

It is cheaper to change the LG battery on a Tesla than to change the fuse on a BMW PHEV, and probably with a smaller CO₂ footprint too.
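Using the figures quoted in this article, the gap is easy to total up. A quick sanity check in Python (the €100/h labour rate is our assumption for illustration; the other numbers come from the list above):

```python
# Figures quoted in this article; the hourly labour rate is our
# assumption for illustration, not a quoted number.
tesla = 11 + 50                  # Tesla pyrofuse + BMS reset, EUR
hourly = 100                     # assumed workshop rate, EUR/h
bmw = 1100 + 24 * hourly + 2500  # iBMUCP + minimum quoted labor + EVC full service
print(tesla, bmw, round(bmw / tesla))
```

Even with the minimum quoted labor hours, the BMW fuse replacement comes out roughly two orders of magnitude more expensive than the Tesla reset.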

If you want to book your ser­vice with EV CLINIC:

Zagreb 1: www.evclinic.hr

Berlin: www.evclinic.de

Slovenija: www.evclinic.si

Serbia: www.evclinic.rs


...

Read the original on evclinic.eu »

10 295 shares, 14 trendiness

AncientJames/multivox

This is the code I cur­rently use to drive my vol­u­met­ric dis­plays.

It sup­ports two closely re­lated de­vices which are con­fig­ured in the src/​dri­ver/​gad­gets di­rec­tory:

* Rotovox is a 400mm Orb fea­tur­ing two 128x64 pan­els arranged ver­ti­cally side by side.

* Vortex is a 300mm Orb fea­tur­ing two 128x64 pan­els arranged hor­i­zon­tally, back to back.

Rotovox has a higher ver­ti­cal res­o­lu­tion and bet­ter hor­i­zon­tal den­sity; Vortex is brighter and has a higher re­fresh rate.

The 3D print­able parts for Vortex are avail­able here.

This code was orig­i­nally writ­ten for a sin­gle dis­play, and the de­vice spe­cific code was later some­what ab­stracted out to sup­port a sec­ond sim­i­lar gad­get. There are as­sump­tions about the hard­ware that are pretty well baked in:

* It con­sists of two HUB75 LED pan­els spin­ning around a ver­ti­cal axis.

* The pan­els use ei­ther ABCDE ad­dress­ing or ABC shift reg­is­ter ad­dress­ing.

* It uses a sin­gle GPIO (a pho­to­di­ode or sim­i­lar) to sync to ro­ta­tion - high for 180°, low for 180°.
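That half-revolution square wave is enough to recover angular position: measure the time between sync edges and interpolate within the current half. A minimal Python sketch of the timing arithmetic (the function name and the 128-slice count are illustrative, not taken from the repo):

```python
def slice_for_time(now, last_edge, half_period, slices_per_rev=128, level_high=True):
    """Map a timestamp to an angular slice index.

    The sync input is high for 180 degrees and low for 180 degrees, so
    each edge marks a half revolution.  `half_period` is the measured
    time between the last two edges; `level_high` says which half of
    the revolution we are currently in.
    """
    phase = (now - last_edge) / half_period            # 0..1 within this half
    turn = 0.5 * phase if level_high else 0.5 + 0.5 * phase
    return int(turn * slices_per_rev) % slices_per_rev
```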

The GPIO map­pings and panel lay­out are de­fined in src/​dri­ver/​gad­gets/​gad­get_. GPIO is via mem­ory mapped ac­cess - if you’re us­ing a dif­fer­ent model of Pi you’ll need to change BCM_BASE in the GPIO code. I haven’t tested this, and you should prob­a­bly as­sume it does­n’t work.
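For reference, reading a pin through the memory-mapped register block can be sketched in Python. The GPLEV0 offset is standard for BCM283x SoCs, but the sync pin number and the use of /dev/gpiomem (which sidesteps hard-coding BCM_BASE) are illustrative; the actual driver does this in its own GPIO code:

```python
import mmap
import os

GPLEV0 = 0x34  # input-level register (pins 0-31) within the BCM283x GPIO block

def pin_level(word, pin):
    """Extract one pin's input level from a GPLEV0 register word."""
    return (word >> pin) & 1

# On a Pi, the GPIO block can be mapped without knowing BCM_BASE via
# /dev/gpiomem (pin 17 for the sync photodiode is hypothetical):
#
#   fd = os.open("/dev/gpiomem", os.O_RDONLY | os.O_SYNC)
#   regs = mmap.mmap(fd, 4096, mmap.MAP_SHARED, mmap.PROT_READ)
#   word = int.from_bytes(regs[GPLEV0:GPLEV0 + 4], "little")
#   sync_high = pin_level(word, 17)
```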

Input is via a blue­tooth gamepad - I’ve been us­ing an Xbox con­troller, and the in­put sys­tem is based on the de­fault map­ping for that.

Audio out is also via blue­tooth. I haven’t had suc­cess with the higher qual­ity codecs, but the head­set pro­to­col works.

There are two parts to this code - the dri­ver, which cre­ates a voxel buffer in shared mem­ory and scans its con­tents out in sync with ro­ta­tion, and the client code which gen­er­ates con­tent and writes it into the voxel buffer. Both dri­ver and client code are de­signed to run on the same de­vice, a Raspberry Pi em­bed­ded in the hard­ware and spin­ning at sev­eral hun­dred RPM. There is a demo in­cluded in the Python di­rec­tory which streams point clouds from a PC over wifi to the de­vice, but fun­da­men­tally it’s de­signed as a self con­tained gad­get, like an al­ter­nate time­line Vectrex. A blue­tooth gamepad is used to con­trol the demos.
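The driver/client split means a client only ever touches the shared voxel buffer. As a sketch of that interface, assuming a dense x-major RGB byte layout (the dimensions, layout, and shared-memory name here are assumptions, not the repo's actual format):

```python
from multiprocessing import shared_memory

# Illustrative grid: the real driver defines its own dimensions and
# layout, so treat everything here as an assumption.
VX, VY, VZ, CH = 128, 128, 64, 3  # x, y, z, RGB bytes per voxel

def voxel_offset(x, y, z):
    """Byte offset of voxel (x, y, z) in a dense x-major grid."""
    return ((z * VY + y) * VX + x) * CH

def set_voxel(buf, x, y, z, rgb):
    """Write one RGB voxel into a writable buffer."""
    off = voxel_offset(x, y, z)
    buf[off:off + CH] = bytes(rgb)

# A client would attach to the driver's buffer by name (name hypothetical):
#   shm = shared_memory.SharedMemory(name="multivox")
#   set_voxel(shm.buf, 10, 20, 30, (255, 0, 0))
```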

On the Raspberry Pi, clone the repos­i­tory:

Configure the pro­ject for your hard­ware:

First, the dri­ver has to be run­ning:

When in­voked from the com­mand line it pe­ri­od­i­cally out­puts pro­fil­ing in­for­ma­tion (frame rate, ro­ta­tion rate), and ac­cepts key­board in­put for var­i­ous di­ag­nos­tics:

While that’s run­ning, try one of the toys:

The viewer takes a list of .obj and .png files as ar­gu­ments. You can scale, ro­tate and so on us­ing the gamepad, and it also ac­cepts key­board in­put when run re­motely from the com­mand line.

If you don’t have a phys­i­cal vol­u­met­ric dis­play, there’s a sim­u­la­tor, vir­tex, which you can run in place of vor­tex. It ex­poses the same voxel buffer in shared mem­ory, but ren­ders the con­tents us­ing OpenGL in an X11 win­dow.

Run with­out com­mand line ar­gu­ments it cre­ates a dis­play com­pat­i­ble with the cur­rently con­fig­ured gad­get, but there are some op­tions to let you ex­per­i­ment with dif­fer­ent geome­tries:

An ide­alised de­vice with lin­ear scan­ning and 3 bits per chan­nel can be in­voked like this:

The sim­u­la­tor is fill rate in­ten­sive; if you’re run­ning it on a Raspberry Pi you’ll prob­a­bly want to re­duce the slice count.

If you want it to start up au­to­mat­i­cally on boot, you can in­stall vor­tex as a ser­vice, and set mul­ti­vox to run on startup.

First in­stall every­thing to its de­fault lo­ca­tion ~/Multivox:

This will build the ex­e­cutable files and copy them into the des­ti­na­tion di­rec­tory, as well as cre­at­ing .mct files in ~/Multivox/carts for the built in toys.

and fill in the fol­low­ing in­for­ma­tion:

Then start it up:

The dri­ver as­signs it­self to core 3 - you can add isol­c­pus=3 to the end of /boot/cmdline.txt to en­sure it’s the only thing run­ning on that core.
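A client process can pin itself the same way; a sketch using the standard os module (core 3 matches the text, but adjust it for your Pi, and note this only complements isolcpus, it doesn't replace it):

```python
import os

def pin_to_core(core=3):
    """Restrict the current process to one CPU core and return the new mask.

    Pairs with isolcpus=3 in /boot/cmdline.txt: the kernel keeps other
    tasks off the isolated core, and this keeps this process on it.
    """
    os.sched_setaffinity(0, {core})
    return os.sched_getaffinity(0)
```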

You’ll also want the launcher to start up on boot:

If every­thing goes smoothly, when you turn on the de­vice it will boot up into Multivox. This is a fan­tasy con­sole which acts as a launcher for all the games and demos you run on the hard­ware. The bun­dled toys are au­to­mat­i­cally in­stalled in the ~/Multivox/carts/ di­rec­tory as .mct files, and ex­ter­nal apps can be launched by adding a .mct file con­tain­ing its com­mand, path and ar­gu­ments.

Each .mct file ap­pears as a car­tridge in the Multivox front end. They should each have a la­bel on the side; at the mo­ment all you can do to dis­tin­guish be­tween them is change their colour in the .mct.

When you exit an app back to the launcher, it saves a snap­shot of the voxel vol­ume, and this gives a pre­view of what you’ll see when you launch a cart. This means there are two com­pet­ing rep­re­sen­ta­tions of the same in­for­ma­tion, and any fu­ture work on the front end will prob­a­bly start with over­haul­ing the en­tire ap­proach.

Some ba­sic UI for con­trols such as chang­ing bit depth, re­boot­ing and so on would also be a boon.

...

Read the original on github.com »
