10 interesting stories served every morning and every evening.




1 700 shares, 38 trendiness

Chuck Norris, Action Icon and ‘Walker, Texas Ranger’ Star, Dies at 86

Chuck Norris, the martial arts champion who became an iconic action star and led the hit series “Walker, Texas Ranger,” has died. He was 86.

Norris was hospitalized in Hawaii on Thursday, and his family posted a statement Friday saying that he died that morning. “While we would like to keep the circumstances private, please know that he was surrounded by his family and was at peace,” his family wrote.

“To the world, he was a martial artist, actor, and a symbol of strength. To us, he was a devoted husband, a loving father and grandfather, an incredible brother, and the heart of our family,” the statement continued. “He lived his life with faith, purpose, and an unwavering commitment to the people he loved. Through his work, discipline, and kindness, he inspired millions around the world and left a lasting impact on so many lives.”

As an action star, Norris had a degree of credibility that most others could not match. Not only did he appear opposite the legendary Bruce Lee in the 1972 film “The Way of the Dragon” (aka “Return of the Dragon”), but he was a genuine martial arts champion who held a black belt in judo, a 3rd-degree black belt in Brazilian Jiu-Jitsu, a 5th-degree black belt in Karate, an 8th-degree black belt in Taekwondo, a 9th-degree black belt in Tang Soo Do and a 10th-degree black belt in Chun Kuk Do.

Norris was extremely prolific in the late 1970s and ’80s, starring in the “Delta Force” and “Missing in Action” films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “Lone Wolf McQuade” (1983), “Code of Silence” (1985) and “Firewalker” (1986).

Norris joined a bevy of other action stars in the Sylvester Stallone-directed “The Expendables 2” in 2012 after a seven-year absence from the screen.

While he scored high on credibility, Norris did not leaven his work with humor the way Arnold Schwarzenegger, Bruce Willis and Jackie Chan did. He was nevertheless the action star of choice for those seeking an all-American icon.

In 1984, Norris starred in “Missing in Action,” the first in a series of films centered on the rescue of American POWs purportedly still held after being captured during the Vietnam War. (Norris’ younger brother Wieland had been killed while serving in Vietnam, and the actor dedicated his “Missing in Action” films to his brother’s memory, but critics of Norris and producer Cannon Films maintained that the films borrowed too heavily from the central conceit of Stallone’s highly successful “Rambo” films.)

As Norris’ movie career began to wane, he made a timely move to television, starring in the CBS series “Walker, Texas Ranger,” inspired by his film “Lone Wolf McQuade.” The program ran from 1993 to 2001, and the actor reprised the role of Cordell Walker in the TV movies “Walker Texas Ranger 3: Deadly Reunion” (1994) and “Walker, Texas Ranger: Trial by Fire” (2005). (Also in 2005, Norris made the last film in which he starred, the straight-to-DVD “The Cutter.”)

In his later years, Norris was portrayed in memes documenting fictional, frequently absurd feats associated with him, such as “Chuck Norris kills 100% of germs” and “Paper beats rock, rock beats scissors, and scissors beats paper, but Chuck Norris beats all 3 at the same time.” He also appeared in infomercials for workout equipment and became increasingly outspoken as a political conservative.

Carlos Ray Norris was born in Ryan, Okla.; his father served as a soldier in World War II. In 1958 he joined the Air Force as an Air Policeman (AP, analogous to the Army’s MPs). While serving at Osan Air Base in South Korea, Norris first acquired the nickname “Chuck” and began his training in Tang Soo Do (aka tangsudo), leading to his achievements in other martial arts and to his development of the hybrid style Chun Kuk Do (“The Universal Way”). He returned to the U.S. and served as an AP at March Air Force Base in California.

After his 1962 discharge, Norris worked for aerospace company Northrop and opened a chain of karate schools; celebrity clients at the schools included Steve McQueen, Chad McQueen, Bob Barker, Priscilla Presley, Donny Osmond and Marie Osmond.

Norris made his acting debut in an uncredited role in the 1969 cult Matt Helm film “The Wrecking Crew,” starring Dean Martin. Norris met Bruce Lee at a martial arts demonstration in Long Beach, Calif., and played the nemesis of Lee’s character in the 1972 movie “The Way of the Dragon” (retitled “Return of the Dragon” for U.S. distribution). In 1974 McQueen spurred Norris to begin taking acting classes at MGM.

Norris first starred in the 1977 action film “Breaker! Breaker!,” in which he played a trucker searching for his brother, who has disappeared in a town run by a corrupt judge.

The actor proved his box office mettle with his subsequent films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “An Eye for an Eye” (1981) and “Lone Wolf McQuade.”

Norris began starring in movies for Cannon Films in 1984. Over the next four years, he became Cannon’s most prominent star, appearing in eight films, including the three “Missing in Action” films, “Code of Silence” (qualitatively, one of his best films), the two “Delta Force” films and “Firewalker.” Norris’ brother Aaron Norris produced several of these films, and also became a producer on “Walker, Texas Ranger.”

A longtime supporter of conservative politicians, he wrote several books with Christian and patriotic themes.

Norris was twice married, the first time to Dianne Holechek from 1958 until their divorce in 1988.

He is survived by his second wife, Gena O’Kelley, whom he married in 1998; three sons, Eric, Mike and Dakota; daughters Danilee and Dina; and a number of grandchildren.

...

Read the original on variety.com »

2 577 shares, 46 trendiness

Fake Compliance as a Service

The version that arrived in your mailbox is truncated. Visit the full article here to view the rest.

Reporters who want to get in touch: Drop an email at aicpnay@proton.me. In case my Proton account gets blocked or you don’t get an answer, tweet using the hashtag #AICPNAY with contact details and I’ll do my best to get in touch with you.

At its core, this article argues that Delve fakes compliance: it creates the appearance of compliance without the underlying substance.

Delve achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance. Their “US-based auditors” are Indian certification mills operating through empty US shells and mailbox agents. Auditors breach independence rules by signing off anyway, leaving companies unknowingly exposed to criminal liability under HIPAA and hefty fines under GDPR.

Delve hands customers fabricated evidence of board meetings, tests, and processes that never happened. The platform forces companies to choose between adopting fake evidence or performing mostly manual work with little real automation or AI. It produces audit reports that falsely claim independent verification while Delve itself effectively wears the auditor hat, generating identical reports for all clients, including Lovable, Bland, Cluely, and NASDAQ-traded Duos Edge. It hosts trust pages that list security measures that were never implemented.

When clients ask hard questions, Delve dodges. They demand calls where founders charm, promise, and namedrop Lovable, Bland, and “every Fortune 500” as proof. When that fails, donuts arrive.

Preface - How it all started

What the leak and our research reveals

Two months ago, an email went out to a few hundred Delve clients informing them that Delve had leaked their audit reports, alongside other confidential information, through a publicly accessible Google spreadsheet. The email also claimed that Delve’s audit reports were fraudulent. The company I work for was one of the clients that received it.

Instead of providing clarification and being transparent, Delve’s leadership went into deny-and-deflect mode. When we asked them directly for clarification, they flat-out denied everything.

This was serious, as it potentially involved nearly all of Delve’s clients and raised questions about the validity of the compliance reports those clients had received.

Multiple colleagues in my network received the same email. Having the shared experience of being underwhelmed by Delve, and the overall sense that something fishy was going on, we decided to pool resources and investigate together. This article is the result of that collaboration.

The reason we felt the way we did was how little actual work any of us had to perform to become ‘compliant’, combined with a product practically devoid of any real AI. It mostly felt like a SOC 2 template pack with a thin SaaS platform wrapper where you simply adopt and sign all templated documents. No custom tailoring, no AI guidance, no real automation. Just pre-populated forms that required you to click “save”.

Some of us have gone through compliance before and felt there was a huge mismatch between our past experience and our experience with Delve.

In this article I will walk you through a typical experience with Delve, the leak that exposed their operation, and the fraud it revealed. Among other things, I will show how, for each of the categories below:

* Delve breaches AICPA/ISO rules by acting as auditor, generating pre-drafted assessments, tests, and conclusions

* Delve relies on audit firms that rubber stamp reports, because genuine independent verification would expose the evidence as fabricated or deficient

* Delve misleads clients by claiming reports are produced by US-based CPA firms, when in reality they are produced by Delve and rubber stamped by Indian certification mills

* Delve leads clients to believe they are compliant when they are not

* Delve helps clients mislead the public by hosting trust pages that contain security measures that were never implemented

* Delve lies to clients when directly questioned, denying documented facts about the leak and report generation

* Delve markets AI-driven automation while the product is practically devoid of AI, relying on pre-populated templates, manual forms, and fabricated evidence

* Delve’s product is unable to get companies truly compliant

* Delve’s platform forces companies to choose between adopting fake evidence or performing mostly manual work with little real automation

* Unable to deliver real compliance through its platform, Delve depends on fraudulent auditors who rubber stamp reports for clients, falling back on off-platform manual work with external vCISOs and good auditors only when complaints or a client’s profile threaten its business interests

* Delve’s process results in clients violating GDPR and HIPAA requirements, exposing them to criminal liability under HIPAA and fines up to 4% of global revenue under GDPR

For any prospect considering Delve or any current Delve client:

Delve loves claiming this is all an attempt by their “jealous competitors” to fraudulently discredit them. When clients ask concrete questions, they dodge answering and instead coax you into getting on a call, where they charm you and tell you everything you want to hear. They’ll even throw in some donuts.

One other tactic they frequently employ is to promise issues won’t arise again because their old process, product or auditor is about to be, or has already been, replaced with something better whose results are just around the corner. This is a deflection and delaying technique. Whenever they start being frequently called out for using a particular auditor, they’ll switch to another equally dodgy one that hasn’t been flagged yet. If they’re called out on their deficient product, they point to a superior tier or new feature on the timeline that will solve everything.

Expressing unhappiness and a desire to leave will lead to Delve pairing you with an external virtual CISO who will help you do compliance right. You should know that this means you will have to do all the work manually.

If you are concerned about Delve’s conduct and practices, ask them questions in writing. Do not allow them to deflect. Do not get on a call with them. In the closing words at the end of this article you’ll find more advice.

All information contained in this article can be reproduced by consulting public sources and having access to the Delve platform.

All screenshots and information are current as of mid-January 2026.

Here we set the stage. We’ll quickly list all parties involved and provide some background context that is useful for understanding the rest of this article.

High-profile companies like those listed in the image above, and hundreds of others, are affected by this. Also affected are companies that partner with Delve’s clients, having been misinformed of the risks involved in partnering with them.

Many of those companies process PHI of millions of US citizens on a daily basis. Some even serve national defense interests.

The audit firms listed in the illustration above were identified during our research process, but are not necessarily all the audit firms used by Delve.

From what we were able to establish, 99%+ of Delve’s clients went through either Accorp or Gradient over the past 6 months.

In the wake of Delve’s leak in December, Delve is reported to have switched to Glocert as their primary ISO 27001 auditing firm.

* Karun Kaushik and Selin Kocalar - The founders of Delve

The above individuals knowingly participated in Delve’s deliberate misconduct regarding audit practices.

Delve is a compliance company. They help businesses get certified for frameworks like SOC 2, ISO 27001, HIPAA, and GDPR. Companies need these certifications to prove they handle data securely and to unlock deals with larger customers who require them.

Compliance has traditionally been a time-consuming process involving lots of spreadsheets. It used to be manual, expensive and slow.

To give you an idea of what this is all about, we will primarily focus on SOC 2 in this article. SOC 2 is the most commonly pursued framework in the US. Practically all tech companies that sell to enterprises are expected to be ‘SOC 2 compliant’, which basically means they’ve had a SOC 2 audit performed in the last year.

Getting a clean SOC 2 report means hiring a CPA firm to review your security controls. If they successfully verify the security you claim to have, through a lot of evidence you provide them, they issue a report saying your security measures are sound. This report becomes proof you can show customers and investors.

SOC 2 and ISO 27001, the European counterpart to SOC 2, are voluntary frameworks. HIPAA and GDPR are not.

HIPAA applies to any company handling health records in the US. Penalties are severe, with willful neglect punishable by criminal charges and prison time.

GDPR covers any company processing data of EU residents, regardless of where the company is based. Fines run up to 4% of global annual revenue or 20 million euros, whichever is higher.
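To make that “whichever is higher” rule concrete, here is a minimal sketch in Python. The function name and inputs are hypothetical illustrations, not part of any real compliance tool or of the GDPR text itself; the two constants reflect the fine ceiling described above.

```python
def gdpr_fine_ceiling(global_annual_revenue_eur: float) -> float:
    """Return the maximum possible fine (in euros) for the most serious
    GDPR infringements: the higher of EUR 20 million or 4% of total
    worldwide annual revenue."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# A company with EUR 1 billion in global revenue: 4% of revenue wins.
print(gdpr_fine_ceiling(1_000_000_000))  # 40000000.0

# A company with EUR 100 million in revenue: the EUR 20M floor applies.
print(gdpr_fine_ceiling(100_000_000))  # 20000000.0
```

The point of the floor is that small companies cannot shrink their exposure below 20 million euros simply by having low revenue.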

These frameworks carry the force of law because they protect information people cannot easily protect on their own: medical histories, genetic data, biometric identifiers, location patterns, and the full record of their digital lives.

The companies that help other companies become compliant through automation are called GRC automation platforms.

Delve was founded in 2023 by Karun Kaushik and Selin Kocalar, both Forbes 30 Under 30 members and MIT dropouts who met as freshmen. They started with a medical AI scribe, pivoted to compliance after hitting HIPAA headaches themselves, and went through Y Combinator in 2024.

In July 2025, Delve raised $32 million in Series A funding led by Insight Partners. Before that, they had raised a $3.3 million seed round.

Delve’s pitch is speed through AI. They claim to get companies compliant in days rather than months, using what they call agentic AI through an “AI-native” platform.

Their marketing promises AI agents that automatically collect evidence, write reports, and monitor compliance gaps without human busywork.

The reality, as this article will show, is different.

SOC 2 audits operate under strict independence requirements designed to preserve trust in the attestation process. These rules exist precisely to prevent the kind of conduct this article exposes. While Delve offers compliance services for HIPAA, GDPR, and ISO 27001, this article focuses primarily on SOC 2. The rules surrounding those other frameworks would require their own detailed treatment; the goal here is to establish a clear pattern in Delve’s behavior, not to catalog every possible regulatory violation.

The fundamental principle is simple: the party implementing controls cannot be the party attesting to their effectiveness. The AICPA’s Code of Professional Conduct states that members must “accept the obligation to act in a way that will serve the public interest, honor the public trust, and demonstrate a commitment to professionalism.” When accountants cannot be expected to make truthful representations, “we lose the ability to assess any public or private company’s actual performance.”

For SOC 2 specifically, AT-C Section 205 requires practitioners to maintain independence in both fact and appearance throughout the engagement. The practitioner must not assume management responsibilities or act as an advocate for the subject matter being examined.

Under AT-C Section 315, auditors must seek to obtain reasonable assurance that the entity “complied with the specified requirements, in all material respects,” including designing the examination to detect “both intentional and unintentional material noncompliance.” This requires independent design of test procedures, independent evaluation of evidence, and independent formation of conclusions.

The auditor’s report must represent their own professional judgment, not pre-written conclusions provided by the entity being audited or its platform vendor.

Delve’s model inverts this structure. By generating auditor conclusions, test procedures, and final reports before any independent review occurs, Delve places itself in the role of both implementer and examiner. This is not a technicality. It is a structural fraud that invalidates the entire attestation.

For readers who wish to verify these requirements:

This story has many threads, and I struggled with where to start. Do I lead with the leaked documents? The auditor shell game? The fake AI?

In the end, I felt that starting with my company’s experience, which illustrates many of the points I make and prove later on, was the most effective way to get the picture across.

I collaborated on this article with users from other Delve clients. We compared notes. The patterns I describe below showed up across all of our accounts. Unless mentioned otherwise, nothing here is cherry-picked. Later sections dissect specific mechanisms.

This section shows what it is like to be a Delve client.

Delve goes all out during the sales process. Through their marketing and sales process they continuously emphasize being the fastest to get companies compliant, thanks to their AI. They repeatedly emphasize the impressive companies they work with, how they partner with the best and most respected US-based audit firms, and how their work is accepted by Fortune 500 companies.

Within the first few minutes of our demo call with Delve, they had already mentioned how companies like Lovable, Bland, WisprFlow and hundreds of others are choosing Delve over competitors. The main point they kept driving home was that their revolutionary tech, supposedly way ahead of any other company’s, enabled compliance to be achieved with just 10 hours of work instead of the hundreds it used to take.

Their demo was short but did a good job of showing how automated everything supposedly was. They clicked through everything pretty quickly, stopping at every place that made the process look easy or automated. They showed an integration pulling information out of AWS, the “ready to go” default policies you could adopt without modification, the reasonably short list of tasks, the AI questionnaire automation, the beautiful trust page you could publish and their ‘AI copilot’ chatbot.

What they didn’t show, however, was that most integrations were fake and required manual screenshots, that tons of forms need to be manually filled out, that the trust page wasn’t accurate, and that some tasks came with pre-created fake evidence.

Their “best offer” for SOC 2 started at $15,000, which we were able to negotiate down to $13,000 on the call. They kept emphasizing what a great deal it was, and how much value we were getting that other companies paid more for. They remained pretty inflexible on pricing until we made clear we were considering a competitor. We must have been told at least four times that they couldn’t go any lower because they’d make a loss. There was a lot of pressure and posturing, but the price quickly dropped to just $6,000 when they realized we were serious about going elsewhere, and they threw in ISO 27001 and a 200-hour penetration test as well.

Pushed to sign within 24 hours or lose the deal, we decided to just move forward and get it over with. In hindsight, there were many red flags, but we just wanted to get the job done quickly and move on. If Delve was good enough for companies like Lovable, they had to be doing something right. Right?

Once you’ve been invited to the Delve platform and log in for the first time, you’ll be greeted with an interface that reveals four categories of activities:

But even before you start doing any real work, you can go into the trust tab and activate and publish the trust page for your company. You’d think it would be a very minimal trust page with everything failing at first, since you haven’t done any work, right?

Nope, you immediately get a fully populated trust page that would have you believe you’re running the most secure company on earth. Delve’s trust page presented our company as fully secure before we had completed any compliance work, enabling us to close deals based on misrepresented security claims.

Noteworthy is that the list hadn’t changed in any way after we finished compliance, but it still wasn’t truthful. My expectation was that the list reflected the security you’d get at the end of Delve’s process, but it took getting there to learn that that wasn’t true either.

It says we did vulnerability scanning and a pentest, when we only ever did the scan. It says we did data recovery simulations, which we never did. It says we remediated vulnerabilities, which we never did.

It is literally a made-up list of security measures, of which more than half are not implemented, or even supported or addressed by Delve’s process and platform.

In short, this product is built to help companies fake security rather than communicate their real security.

When you do actually decide to get started and do the work, you’ll find that the platform expects you to perform a number of tasks across four categories:

Policies - These are all pre-created, and Delve recommends adopting them as they are.

Sadly, unless you spend a whole week manually revising them to be accurate, you will have inaccurate policies full of false promises. Every single one of Delve’s policies claims measures are in place that Delve’s process and platform do not address. Even though we knew we’d technically be lying about our security to anyone we sent these policies to for review (clients, auditors, investors), we decided to adopt them because we simply didn’t have the bandwidth to rewrite them all manually.

Team - Background checks, security training and manual device security through screenshots for every employee.

This is a lot of manual work. It took each of our employees about 2 hours to get their individual tasks done, including 30 minutes to manually secure their laptop and screenshot its security configuration. It is a lot of work if you have a big company.

Also, we didn’t have disk encryption support on a few laptops, but that didn’t stop Delve from signing off on HIPAA:

Add 30 minutes (or more) to manually write up a performance evaluation, and 30 minutes to do all training (watching Delve’s YouTube videos) and quizzes.

Tech - You add vendors here. This is what Delve calls integrations, but most of these are just forms where you are asked to submit screenshots:

Company - Security procedures and company-wide tasks live here. This is just a list of forms that are all pre-filled with fake evidence. You can speedrun through this by just clicking accept on every one of them.

One thing that stands out as you go through all of Delve’s tasks is that there isn’t any AI to speed up any of the work. The only thing available is an ‘AI Co-pilot’ that can provide generic advice and doesn’t seem to have much context beyond what form you’re in. More than half of the time the AI co-pilot would tell me the evidence in the platform was not sufficient, and it would refer to links from other GRC platforms.

As opposed to what is promised during the sales process, the tasks are not customized to the company. You are basically dumped into an interface and told by the customer success person that you can ask questions on Slack if you get stuck.

The reality is that there will be very few times you’ll ever need any help from Delve’s team, primarily because everything in their platform is pre-populated. You won’t have to ask how to do a risk assessment, because you get the same pre-generated risks that every other Delve customer gets. You won’t have to complain about having to do board meetings, after you were explicitly promised during the sales call that all other providers but Delve ask for them, because you get pre-created fake board meeting notes that you can adopt as is.

Seriously, becoming compliant with Delve is nothing more than clicking through a bunch of pre-populated forms and accepting everything. Unless you want to do compliance the proper way, in which case Dropbox is as good a tool as Delve, since you then need to manually collect and write everything.

Ok, you sit down to get cracking on compliance. What do you do next?

You accept the default policies that are inaccurate out of the box. Like this policy that claims an MDM is in place, when the Delve process consists of making a manual screenshot of your Mac firewall settings:

You accept the pre-created contents for the security simulation by accepting the three “security incidents”:

You do the risk assessment by adopting the ten default risks:

One of our obvious concerns was that this approach would never pass an audit, but we were explicitly told Delve never failed a single audit in the past, and that auditors have never flagged a single issue with their process. They tried to put our minds at ease by telling us about all the amazing Delve clients that sold to Fortune 500 companies using the exact same process.

Delve continuously reminding us that they serve clients like Lovable, Bland, WisprFlow and many others ended up wearing us down, so we just took their word for it and moved on.

When the time comes to actually hook up your stack to Delve, so that Delve can do that ‘continuous monitoring’ thing, you’ll find that the vast majority of their integrations don’t integrate with anything at all. They are just containers for screenshots you’ll have to go out and manually collect.

Imagine my surprise when I learned that AI-native compliance would mean spending many hours manually collecting screenshots and filling out forms. I truly feel like a mindless agent in what Delve calls “the agentic experience”.

Here you can see how the Linear ‘integration’ consists of Manual Tests and Forms:

On the employee tab, you manually do background checks through Certn, fill out more forms, watch useless YouTube videos, and manually screenshot laptop security. For 100 employees, all 100 of them have to manually secure laptops once and upload screenshots.

You also do manual performance reviews for every employee, with no way to pull data from other solutions. Lots of typing if you have 20+ employees.

...

Read the original on substack.com »

3 500 shares, 72 trendiness

The open source AI coding agent

Zen gives you access to a handpicked set of AI models that OpenCode has tested and benchmarked specifically for coding agents. No need to worry about inconsistent performance and quality across providers; use validated models that work.

...

Read the original on opencode.ai »

4 445 shares, 55 trendiness

Our commitment to Windows quality

I want to speak to you directly, as an engineer who has spent his career building technology that people depend on every day. Windows touches more people’s lives than almost any technology on Earth. Every day, we hear from the community about how you experience Windows. And over the past several months, the team and I have spent a great deal of time analyzing your feedback. What came through was the voice of people who care deeply about Windows and want it to be better.

Today, I’m sharing what we are doing in response. Here are some of the initial changes we will preview in builds with Windows Insiders this month and throughout April.

More taskbar customization, including vertical and top positions: Repositioning the taskbar is one of the top asks we’ve heard from you. We are introducing the ability to reposition it to the top or sides of your screen, making it easier to personalize your workspace.

Integrating AI where it’s most meaningful, with craft and focus: You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well-crafted. As part of this, we are reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets and Notepad.

Reducing disruption from Windows Updates: Receiving updates should be predictable and easy to plan around, so we’re giving you more control. This includes the ability to skip updates during device setup to get to the desktop faster, restart or shut down without installing updates, and pause updates for longer when needed, all while reducing update noise with fewer automatic restarts and notifications.

Faster and more dependable File Explorer: File Explorer is one of the most used surfaces in Windows. Our first round of improvements will focus on a quicker launch experience, reduced flicker, smoother navigation and more reliable performance for everyday file tasks.

More control over widgets and feed experiences: Widgets should feel helpful and relevant, not distracting or overwhelming. We’re introducing quieter defaults, more control over when and how widgets appear, and improved personalization for the Discover feed.

A simpler, more transparent Windows Insider Program: The Windows Insider Program is how you help shape the future of Windows, and it should be easy to understand what to expect and how to participate. We are implementing changes to make it easier for you to navigate, with clearer channel definitions, easier access to new features, higher quality builds, better visibility into how your feedback shapes Windows, and more opportunities to engage directly with us.

Improved Feedback Hub, available starting today: Your feedback is essential to improving Windows, and it should be easy to share and see what others are saying. Today, we’re rolling out the largest update to Feedback Hub yet to our Insiders, with a redesigned experience that makes it faster and easier to submit feedback and engage with the community.

Building on these changes, what fol­lows be­low is our broader plan and ar­eas of fo­cus for the year to raise the bar on Windows 11 qual­ity. The work is un­der­way. You can ex­pect to see tan­gi­ble progress that you’ll be able to feel as you pre­view builds from us through­out the rest of the year.

Last night I had the chance to sit down with a small group of Windows Insiders here in Seattle to lis­ten, to an­swer ques­tions, and to share more about where we’re headed. The Seattle meetup was the first of sev­eral stops our team will be mak­ing to en­gage in per­son, in more cities around the world, to con­nect with the Windows com­mu­nity.

Thank you for hold­ing us to a high stan­dard. Windows is as much yours as it is ours. We’re com­mit­ted to strength­en­ing its foun­da­tion and de­liv­er­ing in­no­va­tion where it mat­ters, for you.

Please keep the feed­back com­ing, to help us shape the fu­ture of Windows to­gether.

What fol­lows is our plan to raise the bar on Windows 11 qual­ity this year, with a fo­cus on per­for­mance, re­li­a­bil­ity and well-crafted ex­pe­ri­ences. These ar­eas have mean­ing­ful im­pact on how you ex­pe­ri­ence Windows: how fast it starts and re­sponds, how sta­ble it is un­der real work­loads, and how con­sis­tent and thought­ful the ex­pe­ri­ence feels.

We are fo­cus­ing on mak­ing Windows 11 more re­spon­sive and con­sis­tent, so per­for­mance feels smooth and re­li­able.

Over the course of the year, we’re im­prov­ing sys­tem per­for­mance, app re­spon­sive­ness, File Explorer, and the Windows Subsystem for Linux, help­ing Windows stay fast as you move be­tween apps and work­loads.

Improving sys­tem per­for­mance: Reducing re­source us­age by Windows to free up more per­for­mance for what you’re do­ing.

Faster and more re­spon­sive Windows ex­pe­ri­ences, with early im­prove­ments al­ready de­liv­er­ing launch time re­duc­tions in apps like File Explorer

Improved mem­ory ef­fi­ciency, low­er­ing the base­line mem­ory foot­print for Windows, free­ing up more ca­pac­ity for the apps you run

More con­sis­tent per­for­mance, even un­der load, so apps stay re­spon­sive through­out the day

More fluid and re­spon­sive app in­ter­ac­tions: Reducing in­ter­ac­tion la­tency by mov­ing core Windows ex­pe­ri­ences to the WinUI3 frame­work.

Improving the shared UI in­fra­struc­ture that Windows ex­pe­ri­ences rely on, re­duc­ing in­ter­ac­tion la­tency and over­head at the plat­form level

Faster re­spon­sive­ness in core Windows ex­pe­ri­ences like the Start menu, by mov­ing more ex­pe­ri­ences to WinUI3

Improving File Explorer fun­da­men­tals: Reducing la­tency and im­prov­ing re­li­a­bil­ity across search, nav­i­ga­tion and file op­er­a­tions.

Copying and mov­ing large files will be faster and more re­li­able

Elevating the Windows Subsystem for Linux (WSL) ex­pe­ri­ence: Improving per­for­mance, re­li­a­bil­ity and in­te­gra­tion for de­vel­op­ers us­ing Linux tools and en­vi­ron­ments on Windows.

Better en­ter­prise man­age­ment with stronger pol­icy con­trol, se­cu­rity and gov­er­nance

Reliability is the bedrock of trust. You should trust that your PC is go­ing to be there and func­tion when you need it most.

Across the op­er­at­ing sys­tem, we will fo­cus on im­prov­ing the base­line re­li­a­bil­ity of ar­eas such as the Windows Insider Program, dri­vers and apps, up­dates and Windows Hello.

Strengthening re­li­a­bil­ity and qual­ity of the Windows Insider Program: Making it clearer what to ex­pect from each Insider chan­nel, rais­ing the qual­ity bar for builds and strength­en­ing feed­back sig­nals to im­prove build qual­ity be­fore broad re­lease.

Clearer vis­i­bil­ity into what fea­tures are in­cluded in each Insider build, so you know what to ex­pect

More con­trol over which new fea­tures you try, with eas­ier switch­ing be­tween Insider chan­nels to match your de­sired level of sta­bil­ity or early ac­cess

Higher qual­ity builds en­ter­ing each chan­nel, with more rig­or­ous val­i­da­tion and feed­back sig­nals be­fore re­lease

Stronger feed­back loops across Windows so is­sues are iden­ti­fied, pri­or­i­tized and ad­dressed faster

Increasing OS, dri­ver and app re­li­a­bil­ity: Delivering a smoother, more de­pend­able Windows 11 ex­pe­ri­ence by strength­en­ing sys­tem sta­bil­ity, dri­ver qual­ity and app re­li­a­bil­ity across our vi­brant ecosys­tem of sil­i­con, ISV and OEM part­ners. Our pri­or­i­ties in­clude:

Strengthening the Windows foun­da­tion by re­duc­ing OS level crashes, im­prov­ing dri­ver qual­ity and app sta­bil­ity across our ecosys­tem so PCs run smoothly and re­li­ably every day

Creating eas­ier, faster and sta­ble con­nec­tions with Bluetooth ac­ces­sories, fewer USB re­lated crashes and con­nec­tion loss, and im­proved printer dis­cov­er­abil­ity and con­nec­tions

More re­li­able cam­era and au­dio con­nec­tions to in­crease your pro­duc­tiv­ity at work and play

More con­sis­tent de­vice wake (including fur­ther wake con­sis­tency im­prove­ments for dock­ing sce­nar­ios) so you can get back to your work faster

Improving the Windows Update ex­pe­ri­ence: Faster, more pre­dictable up­dates with clearer con­trol over restarts and tim­ing.

Less dis­rup­tion from Windows Update, mov­ing de­vices to a sin­gle monthly re­boot, while or­ga­ni­za­tions and users who wish to get new fea­tures and fixes faster re­main able to do so

More di­rect con­trol over up­dates, in­clud­ing the abil­ity to pause up­dates for as long as you need and restart or shut down with­out be­ing forced to in­stall them

Faster, more re­li­able up­date ex­pe­ri­ences, with clearer progress dur­ing up­dates and built‑in re­cov­ery to help keep de­vices sta­ble if some­thing goes wrong

Improving Windows Hello bio­met­ric au­then­ti­ca­tion: We’re strength­en­ing Windows Hello sign‑in so it feels re­li­able, ef­fort­less and se­cure, re­duc­ing fric­tion while in­creas­ing con­fi­dence that your de­vice rec­og­nizes you cor­rectly.

More re­li­able fa­cial recog­ni­tion, so you can trust sign‑in to work when you need it

Faster and more de­pend­able fin­ger­print sign‑in, with fewer re­tries

Easier se­cure sign‑in on gam­ing hand­helds like the ROG Xbox Ally X, with full gamepad sup­port for cre­at­ing a PIN dur­ing setup and in Settings.

To us, craft is the dis­ci­pline that turns func­tional prod­ucts into loved ones through us­abil­ity, pol­ish, co­her­ence and re­fine­ment.

This year, you will see us in­vest in rais­ing the bar on the over­all us­abil­ity of the ex­pe­ri­ence, with more op­por­tu­ni­ties for per­son­al­iza­tion, less noise, less dis­trac­tion and more con­trol across the OS. That in­cludes be­ing thought­ful about how and where we bring AI into Windows, lead­ing with trans­parency, choice and con­trol, so that new ca­pa­bil­i­ties en­hance the ex­pe­ri­ence rather than com­pli­cate it.

Improving the Start and Taskbar ex­pe­ri­ence: Making these core Windows sur­faces more re­li­able, flex­i­ble and per­son­al­ized so you can nav­i­gate your PC in the way that works best for you.

Start and Taskbar de­liver even more con­sis­tent, de­pend­able ac­cess to apps and files, so mov­ing be­tween your con­tent feels fluid through­out the day

Expanded taskbar per­son­al­iza­tion op­tions, in­clud­ing al­ter­nate taskbar po­si­tions and a smaller taskbar, giv­ing you greater con­trol over how this core sur­face fits your work­flow

A more rel­e­vant Recommended sec­tion in Start will sur­face apps and con­tent you care about most, with clear con­trols to cus­tomize the ex­pe­ri­ence or turn it off

More focused user experience with fewer distractions: Making the Windows experience quieter, to help you stay focused, minimize interruptions and stay in your flow.

Device setup on new Windows PCs is qui­eter and more stream­lined, with fewer pages and re­boots so get­ting started is sim­pler

Widgets sur­face in­for­ma­tion more in­ten­tion­ally by de­fault, keep­ing con­tent glance­able and re­duc­ing un­nec­es­sary in­ter­rup­tions

Simpler set­tings make it eas­ier to per­son­al­ize, opt into or turn off Widgets and feed con­tent based on your pref­er­ences

Reduced no­ti­fi­ca­tions so you can stay fo­cused through­out the day

Enhancing Search: Delivering faster, more ac­cu­rate re­sults with con­sis­tent search ex­pe­ri­ence across Windows sur­faces.

Find what mat­ters faster, with search that sur­faces apps, files and set­tings clearly so you can get to the right re­sult quickly

Clearer and more trust­wor­thy re­sults, with re­sults from con­tent on your de­vice easy to un­der­stand and clearly dis­tinct from web re­sults

A more con­sis­tent search ex­pe­ri­ence across the Taskbar, Start, File Explorer and Settings

As part of this ef­fort, we are evolv­ing how Windows is built be­hind the scenes to raise the qual­ity bar and de­liver in­no­va­tion where it mat­ters most, shaped by the feed­back we are hear­ing from you.

This in­cludes deeper val­i­da­tion and broader test­ing across real-world hard­ware and us­age sce­nar­ios be­fore new ex­pe­ri­ences reach Windows Insiders, and a more in­ten­tional ap­proach to where and how new ca­pa­bil­i­ties are in­tro­duced. The re­sult will be higher qual­ity builds, more mean­ing­ful in­no­va­tion and greater flex­i­bil­ity in choos­ing what you want to try. This is how we will con­tinue to build and ship Windows 11, so we can de­liver bet­ter ex­pe­ri­ences with greater con­fi­dence, month af­ter month.

In line with Microsoft’s Secure Future Initiative, we will con­tinue to make Windows more se­cure with every re­lease, build­ing in new ca­pa­bil­i­ties and strength­en­ing se­cu­rity by de­fault to help pro­tect users, de­vices and data.

As we im­prove and in­no­vate, we look for­ward to your con­tin­ued feed­back on where we can keep mak­ing Windows bet­ter.

...

Read the original on blogs.windows.com »

5 312 shares, 21 trendiness

HP realizes that mandatory 15-minute support call wait times aren't good support

In an odd ap­proach to try­ing to im­prove cus­tomer tech sup­port, HP al­legedly im­ple­mented manda­tory, 15-minute wait times for peo­ple call­ing the ven­dor for help with their com­put­ers and print­ers in cer­tain ge­o­gra­phies.

Callers from the United Kingdom, France, Germany, Ireland, and Italy were met with the forced holding periods, The Register reported on Thursday. The publication cited internal communications it saw from February 18 that reportedly said the wait times aimed to "influence customers to increase their adoption of digital self-solve, as a faster way to address their support question. This involves inserting a message of high call volumes, to expect a delay in connecting to an agent and offering digital self-solve solutions as an alternative."

Even if HP's telephone support center wasn't busy, callers would reportedly hear:

We are ex­pe­ri­enc­ing longer wait­ing times and we apol­o­gize for the in­con­ve­nience. The next avail­able rep­re­sen­ta­tive will be with you in about 15 min­utes.

To quickly re­solve your is­sue, please visit our web­site sup­port.hp.com to check out other sup­port op­tions or find help­ful ar­ti­cles and as­sis­tant to get a guided help by vis­it­ing vir­tu­ala­gent.hp­cloud.hp.com.

Callers were then told to "please stay on the line" if they wanted to speak to a representative. The phone system was also set to remind customers of their other support options and to apologize for the long (HP-induced) wait times upon the fifth, 10th, and 13th minute of the call.

The manda­tory sup­port call times have been lifted, per a com­pany state­ment shared by HP spokesper­son Katie Derkits:

We’re al­ways look­ing for ways to im­prove our cus­tomer ser­vice ex­pe­ri­ence. This sup­port of­fer­ing was in­tended to pro­vide more dig­i­tal op­tions with the goal of re­duc­ing time to re­solve in­quiries. We have found that many of our cus­tomers were not aware of the dig­i­tal sup­port op­tions we pro­vide. Based on ini­tial feed­back, we know the im­por­tance of speak­ing to live cus­tomer ser­vice agents in a timely fash­ion is para­mount. As a re­sult, we will con­tinue to pri­or­i­tize timely ac­cess to live phone sup­port to en­sure we are de­liv­er­ing an ex­cep­tional cus­tomer ex­pe­ri­ence.

HP didn't immediately clarify when it removed the wait times. Some HP workers were reportedly unhappy with the mandatory hold times, with an anonymous "insider" in HP's European operations reportedly telling The Register, per its Thursday report: "Many within HP are pretty unhappy [about] the measures being taken and the fact those making decisions don't have to deal with the customers who their decisions impact."

...

Read the original on arstechnica.com »

6 307 shares, 22 trendiness

The Los Angeles Aqueduct is Wild — Practical Engineering

[Note that this ar­ti­cle is a tran­script of the video em­bed­ded above.]

On the north­ern edge of Los Angeles, fresh wa­ter spills down two stark con­crete chutes perched on the foothills of the San Gabriel Mountains, a place sim­ply called The Cascades. It’s a de­cep­tively sim­ple-look­ing fin­ish line: the end of a roughly 300-mile (or 500 km) jour­ney from the east­ern slopes of the Sierra Nevada into the city.

On November 5, 1913, tens of thousands of people climbed these hills to watch the first water arrive. When the gates finally opened, water trickled through, but that trickle quickly became a torrent. The project's chief engineer, William Mulholland, leaned over to the mayor and shouted the line that's been repeated ever since: "There it is, Mr. Mayor. Take it!"

That moment was profound for a lot of reasons, depending on where you live and how you feel about water rights. LA didn't become LA by living within the limits of its local resources. Its meteoric growth into the metropolis we know was enabled by an early and extraordinary decision to reach far beyond its own watershed and pull a whole new river into town. Today, roughly a third of LA's water comes from the Eastern Sierra through the Los Angeles Aqueduct system. That share swings with snowpack, drought, and environmental constraints, but this one piece of infrastructure helped turn a water-limited town into a world city. It's one of the most impressive and controversial engineering projects in American history.

But to re­ally ap­pre­ci­ate that wa­ter in the cas­cades, you have to look way up­stream and see what it took to get it there. It’s grav­ity, ge­ol­ogy, pol­i­tics, and hu­man am­bi­tion all in a part of the state that most peo­ple never see. Let’s take a lit­tle tour so you can see what I mean. I’m Grady and this is Practical Engineering.

When most peo­ple think about aque­ducts, this is what they pic­ture: a bridge car­ry­ing wa­ter over a val­ley or river. And, just to be clear, these are aque­ducts. But en­gi­neers of­ten use the term more broadly to de­scribe any type of con­veyance sys­tem that car­ries wa­ter over a long dis­tance from a source to a dis­tri­b­u­tion point. Could be a canal, a pipe, a tun­nel, or even just a ditch. In the case of the LA aque­duct, it’s all of them, plus a lot of sup­port­ing in­fra­struc­ture as well.

From the cen­ter of the city, it’s about a four hour drive to the Owens River Diversion Weir. It’s not ac­ces­si­ble to the pub­lic, but it is the of­fi­cial start of the LA Aqueduct, at least when it was orig­i­nally built. Here, all the snowmelt and rain from a huge drainage sys­tem be­tween the Sierra Nevada and Inyo Mountains fun­nel down into the Owens River, where a large con­crete di­ver­sion weir peels nearly all of it out of its nat­ural course and into a canal. This point is roughly 2,500 feet (or 750 me­ters) higher in el­e­va­tion than the bot­tom of the Cascades at the down­stream end, which makes it ob­vi­ous why LA chose it as a source. The en­tire aque­duct is a grav­ity ma­chine. There are no pumps push­ing the wa­ter to­ward the city. Half a mile of el­e­va­tion change feels like a lot un­til you re­al­ize you have to spread it out over 300 miles. It’s all achieved through care­ful grad­ing and man­ag­ing el­e­va­tions along the way to keep the flow mov­ing.
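The numbers quoted above are worth a quick sanity check. This short sketch, using the rough figures from this article (about 2,500 feet of drop spread over roughly 300 miles), shows just how gentle the average grade has to be:

```python
# Average grade of the LA Aqueduct, from the rough figures in the text:
# ~2,500 ft of elevation change over ~300 miles, pure gravity flow.
drop_ft = 2_500
length_mi = 300
length_ft = length_mi * 5_280  # feet per mile

grade = drop_ft / length_ft        # dimensionless slope, about 0.16%
ft_per_mile = drop_ft / length_mi  # average drop per mile of route

print(f"Average grade: {grade:.4%}")
print(f"Drop per mile: {ft_per_mile:.1f} ft")
```

Individual reaches are of course graded flatter or steeper than this average, but it makes clear why the survey work mattered: a few feet of error per mile is a large fraction of the total slope budget.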

That care is par­tic­u­larly im­por­tant in this up­per sec­tion of the aque­duct, where the wa­ter flows in an open canal. To do this ef­fi­ciently, you need a rel­a­tively con­stant slope from start to fin­ish. That’s a tough thing to achieve on the sur­face of a bumpy earth. Following a river val­ley makes this eas­ier, but you can see the twists and turns nec­es­sary to keep the aque­duct on its gen­tle slope to­ward LA.

If it seems kind of wild that a city would buy up the land and wa­ter rights from some­where so far away, it did to a lot of the peo­ple who lived in the Owens Valley, too. A lot of the ac­qui­si­tions and pol­i­tics of the orig­i­nal LA Aqueduct were car­ried out in bad faith, sour­ing re­la­tion­ships with landown­ers, ranch­ers, farm­ers, and com­mu­ni­ties in the area. The saga is full of bro­ken promises and shady deal­ings. Then when the di­ver­sion started, the area dried up, dis­rupt­ing the ecol­ogy of the re­gion, mak­ing agri­cul­ture more dif­fi­cult and res­i­dents even more re­sent­ful. Many re­sorted to vi­o­lence, not against peo­ple but against the in­fra­struc­ture. They van­dal­ized parts of the aque­duct, a con­flict that later be­came known as the California Water Wars. In one case in 1924, ranch­ers used dy­na­mite to blow up a part of the canal. Later that year, they seized the Alabama Gates.

About 20 miles or 35 kilo­me­ters down­stream from the di­ver­sion weir, a set of gates sits on the east­ern bank of the aque­duct canal. Because it runs be­side the river val­ley, the aque­duct cap­tures some of the wa­ter that flows down from the sur­round­ing moun­tains in ad­di­tion to what’s di­verted out of the Owens River, par­tic­u­larly dur­ing strong storms. That means it’s ac­tu­ally pos­si­ble for the canal to over­fill. The Alabama Gates serve as a spill­way, al­low­ing op­er­a­tors to di­vert wa­ter back down to the river. This also helps drain the canal for main­te­nance or re­pairs when needed.

Those Owens Valley ranch­ers un­der­stood ex­actly what the Alabama Gates con­trolled. Open them, and the wa­ter would run back where it had al­ways run, down the Owens River, in­stead of south to Los Angeles. The re­sis­tance sim­mered and flared for years, but it did­n’t end in the dra­matic show­down at the aque­duct. Instead, it ended at a bank counter. The Inyo County Bank was run by two broth­ers who were also key or­ga­niz­ers and fi­nanciers of the re­sis­tance cam­paign. In August 1927, an au­dit re­vealed ma­jor short­falls and on­go­ing em­bez­zle­ment, and the bank quickly col­lapsed. Residents across the val­ley saw their sav­ings wiped out or frozen overnight, shat­ter­ing what was left of the com­mu­ni­ty’s abil­ity to keep fight­ing.

The Alabama Gates weren’t just a po­lit­i­cal flash­point though. They also marked an im­por­tant di­vid­ing line in the aque­duc­t’s de­sign. LA knew that even if the ranch­ers did­n’t re­lease the wa­ter to the river in protests, a lot of it would end up there any­way through seep­age. As the canal climbed away from the val­ley floor and crossed more porous soil, it would nat­u­rally lose its wa­ter through the ground. So, at the Alabama Gates, the aque­duct tran­si­tions from an un­lined canal to a con­crete-lined chan­nel. It’s still open to the air, so there’s no pro­tec­tion against evap­o­ra­tion or con­t­a­m­i­na­tion, but the losses to the ground are a lot less.

This de­sign con­tin­ues for about 35 miles (or 55 kilo­me­ters) through the val­ley. Along the way, the aque­duct passes the re­mains of Owens Lake. Once a large body of wa­ter, it quickly dried up with the di­ver­sion of the Owens River. Of course, there were im­pacts to wildlife from the loss of wa­ter, but the big­ger prob­lem came later: dust. All the fine sed­i­ment that set­tled on the lakebed over thou­sands of years was now ex­posed to the hot desert sun. When the wind picked up, it filled the air with fine par­tic­u­lates that are dan­ger­ous to breathe. Over the years, there have been times when Owens Lake is the sin­gle largest source of dust pol­lu­tion in the en­tire coun­try, and LA has spent more than a bil­lion dol­lars just try­ing to fix this prob­lem alone. The aque­duct pass­ing along the hill­side past the lake and its chal­lenges is a re­minder that the true cost of wa­ter is of­ten a lot more than the in­fra­struc­ture it takes to de­liver it.

So far, it might be ob­vi­ous that this aque­duct sys­tem is pretty frag­ile to be mak­ing up a ma­jor part of a city’s fresh wa­ter sup­ply. Even be­yond the van­dal­ism and po­lit­i­cal re­sis­tance, there are a lot of things that could go wrong along the way, from bank col­lapses, earth­quakes, di­ver­sion fail­ures, and more. That’s why Haiwee Reservoir was orig­i­nally built in a nar­row sad­dle be­tween two hills as a kind of buffer. With a dam on ei­ther side, it stored wa­ter up so the aque­duct could keep run­ning even dur­ing a dis­rup­tion up­stream. It also slowed the wa­ter down, ex­pos­ing it to the hot desert sun as a nat­ural form of UV dis­in­fec­tion. In the 1960s, the reser­voir was re­con­fig­ured into two basins to add some flex­i­bil­ity. That’s be­cause, around that time, the LA aque­duct be­came two. While the open-topped canal sec­tion was large enough to meet de­mands, the un­der­ground con­duit in the next sec­tion was­n’t. So, LA built a sec­ond one in 1970 to in­crease the flow. If you look at this map of the Haiwee Reservoirs, you can see that wa­ter has two paths: it can flow into the sec­ond aque­duct here from the north basin, or it can pass through the Merritt Cut to the south reser­voir, through the in­take there, and into the first aque­duct. This setup al­lows for some re­dun­dancy, along with reg­u­la­tion and bal­anc­ing of the flows be­tween the two aque­ducts. Haiwee marks the start of the long desert run, with both sys­tems no longer in open-topped lined canals, but run­ning un­der­ground in con­crete con­duits.

There are a lot of ad­van­tages to run­ning an aque­duct in a closed con­duit un­der­ground, es­pe­cially one this long through a desert land­scape. There’s far less evap­o­ra­tion and less po­ten­tial for con­t­a­m­i­na­tion. It does­n’t di­vide the land­scape at the sur­face level, so there’s no need for bridges, cul­verts, and wildlife cross­ings. Going un­der­ground also of­fers more flex­i­bil­ity when it comes to topog­ra­phy. You don’t have to fol­low the con­tours of the sur­face so care­fully be­cause if you come to a hill, you can just dig a lit­tle deeper to keep the con­stant slope.

Of course, those ben­e­fits come with a cost. An un­der­ground con­duit is more ex­pen­sive than a sim­ple chan­nel on the sur­face, and not all the prob­lems with topog­ra­phy are solved. This is Jawbone Canyon, one of the biggest drops for the first aque­duct. Rather than tak­ing a ma­jor de­tour around it, the aque­duct de­scends 850 feet (or 250 me­ters) and then as­cends back up. This type of struc­ture is of­ten called an in­verted siphon. I’ve done a video on how these work for sewer sys­tems, and I’ve also done a video on flood tun­nels that work in a sim­i­lar way, if you want to learn more af­ter this.

Unlike the con­crete con­duit, which re­ally just acts like an un­der­ground canal with a roof, this is one of the places where the wa­ter in the aque­duct is pres­sur­ized. 850 feet of wa­ter col­umn is about 370 psi, 26 bar, or two-and-a-half Megapascals. It’s a lot of pres­sure. These sec­tions of pipe had to be spe­cially man­u­fac­tured on the East Coast, where the ma­jor steel fa­cil­i­ties were, and trans­ported by ship be­cause of their size. They trav­elled all the way around Cape Horn, since the Panama Canal was still un­der con­struc­tion. There are ac­tu­ally quite a few of these siphons cross­ing canyons in this sec­tion of the aque­duct, but Jawbone Canyon is the biggest one.
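Those pressure figures follow directly from the hydrostatic relation P = ρgh. A quick check (assuming fresh water at 1,000 kg/m³) reproduces the numbers in the text, within rounding:

```python
# Hydrostatic pressure at the bottom of the Jawbone Canyon inverted siphon,
# using the 850 ft drop quoted above and standard fresh-water properties.
rho = 1000.0          # density of fresh water, kg/m^3
g = 9.81              # gravitational acceleration, m/s^2
h = 850 * 0.3048      # 850 ft converted to meters (~259 m)

p_pa = rho * g * h        # pressure in pascals
p_mpa = p_pa / 1e6        # megapascals
p_bar = p_pa / 1e5        # bar
p_psi = p_pa / 6894.757   # pounds per square inch

print(f"{p_mpa:.2f} MPa, {p_bar:.1f} bar, {p_psi:.0f} psi")
# prints: 2.54 MPa, 25.4 bar, 369 psi
```

That matches the article's rounded figures of about 2.5 MPa, 26 bar, and 370 psi, and it explains why these siphon sections needed specially manufactured steel pipe rather than ordinary concrete conduit.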

A lit­tle fur­ther down­stream, the LA aque­duct crosses the California Aqueduct, part of the State Water Project. That sys­tem has a con­nec­tion to LA as well, but this branch at the cross­ing ac­tu­ally heads to Silverwood Lake. However, there is a trans­fer fa­cil­ity, re­cently com­pleted, that can pump wa­ter out of the California Aqueduct di­rectly into the first LA aque­duct. This cre­ates op­por­tu­ni­ties for LA to buy wa­ter that moves through the state sys­tem and of­fers some flex­i­bil­ity in where that wa­ter ends up. There’s also a turn-in that can move wa­ter from the LA aque­duct into the California aque­duct for sit­u­a­tions where trades make sense. The sec­ond LA aque­duct passes un­der­neath the state canal here. And this is a good ex­am­ple of the dif­fer­ences be­tween the first pro­ject (built in the 1910s) and the sec­ond one, built in the 1960s. Over that time, the price of la­bor went up a lot more than the price of ma­te­ri­als. Where the first one care­fully fol­lowed the ex­ist­ing topog­ra­phy with bends and turns to min­i­mize the need for ex­pen­sive pres­sur­ized pipe, the sec­ond one could take a more di­rect path, re­duc­ing la­bor in re­turn for the more spe­cial­ized con­duit ma­te­ri­als.

After wandering more than a hundred miles (or 160 kilometers) apart, the two Los Angeles Aqueducts come back together at Fairmont Reservoir, in the northern foothills of the Sierra Pelona Mountains. This is the last major topographic barrier on the way to Los Angeles. There was no way to go up and over without pumps, so instead they went straight through. The largest project was the Elizabeth Tunnel.

Here, the two aqueducts come together again into a single watercourse. About 5 miles or 8 kilometers of excavation through everything from hard rock to loose, wet ground became one of the most difficult parts of the entire project. The tunnel required continuous temporary supports along most of its length, followed by a permanent concrete lining. It was a monumental effort for its time, and it does more than cross the range: the Elizabeth Tunnel also delivers the water under pressure to San Francisquito Power Plant Number 1.

This is the largest of the eight hy­dro­elec­tric plants that run along the aque­duct, cap­tur­ing some of the en­ergy from the wa­ter as it flows down­ward to­ward LA. These plants are a ma­jor part of how the pro­ject paid for it­self, and they con­tinue to serve as an im­por­tant source of elec­tric­ity in the re­gion to­day.

Continuing down­stream, Bouquet Canyon reser­voir adds an­other layer of op­er­a­tional flex­i­bil­ity. It helps reg­u­late flow through the power plants and pro­vides ad­di­tional stor­age, a sort of in­sur­ance pol­icy since this whole reach de­pends on a sin­gle ma­jor tun­nel cross­ing the San Andreas Fault. In case of a ma­jor earth­quake, it’d be best if Angelinos could avoid a si­mul­ta­ne­ous wa­ter short­age.

The aque­duct splits again just up­stream of the San Francisquito Plant Number 2, which was fa­mously de­stroyed by the St. Francis Dam fail­ure. That reser­voir pro­ject was de­signed to sup­ple­ment the stor­age ca­pac­ity along the aque­duct, but the dam failed cat­a­stroph­i­cally in 1928, just 2 years af­ter it was com­pleted, killing more than 400 peo­ple and de­stroy­ing sev­eral parts of the aque­duct as well. The tragedy was one of the worst en­gi­neer­ing dis­as­ters in American his­tory. It put an­other stain on the aque­duct pro­ject, and it ef­fec­tively ru­ined the rep­u­ta­tion of William Mulholland, who was largely con­sid­ered a hero in LA for all his work on the aque­duct and the city’s wa­ter sys­tem. The dam was never re­built, but work­ers re­stored the aque­duct to func­tion­ing ser­vice in only 12 days.

At Drinkwater Reservoir, the two aque­ducts run roughly par­al­lel through the Santa Clarita area, some­times above­ground and some­times be­low, be­fore fi­nally reach­ing the ter­mi­nal struc­tures that carry wa­ter into LA. Usually, the wa­ter stays in the con­duits, which feed the two hy­dropower plants at the foot of the moun­tains. If the plants are out of ser­vice or there’s more flow than they can han­dle, you see ex­cess wa­ter thun­der­ing through the cas­cade struc­tures in­stead.

From here, the aque­duct drops out of the moun­tains and into the north end of the San Fernando Valley, where the wa­ter is treated and pre­pared for dis­tri­b­u­tion. After fil­tra­tion and dis­in­fec­tion, it’s stored in the Los Angeles Reservoir, the sys­tem’s ter­mi­nal reser­voir, so the city can smooth out day-to-day swings in de­mand even while the aque­duc­t’s in­flow stays rel­a­tively steady.

For most of Los Angeles’ his­tory, that finished wa­ter stor­age” was out in the open air. But in the 2000s, drink­ing-wa­ter rules pushed util­i­ties to add stronger pro­tec­tion for treated wa­ter held in un­cov­ered reser­voirs. There’s a good chance you’ve seen their so­lu­tion on the Veritasium chan­nel or else­where: 96 mil­lion plas­tic shade balls that act like a float­ing cover, block­ing sun­light to pre­vent wa­ter-chem­istry prob­lems and help­ing keep wildlife out. They’re the fi­nal pro­tec­tion for this wa­ter that trav­eled so long to reach the city. While the LA Reservoir is, in a sense, the end of the jour­ney for this wa­ter, the orig­i­nal di­ver­sion way back at Owen’s River is­n’t even tech­ni­cally the start any­more!

In 1940, LA ex­tended the aque­duct sys­tem up­stream north­ward by con­nect­ing the Mono basin and fun­nel­ing its wa­ter through tun­nels to the Owens River basin. Like Owens Lake down­stream, Mono Lake be­gan dry­ing out as well. And also like Owens Lake, law­suits, court or­ders, and en­vi­ron­men­tal reg­u­la­tions have tem­pered the value of this wa­ter source, forc­ing LA to sig­nif­i­cantly re­duce di­ver­sions and im­ple­ment costly restora­tion pro­jects.

That’s kind of the story of the LA aque­duct in a nut­shell. The pro­ject seemed ob­vi­ous from an en­gi­neer­ing per­spec­tive. There was lots of snowmelt in the moun­tains; the city had the tech­ni­cal prowess, the fund­ing, the el­e­va­tion, and the po­lit­i­cal power to reach out and take it. The re­sult was one of the most im­pres­sive works of in­fra­struc­ture of the early 20th cen­tury. And con­tin­ued ef­forts to ex­pand and im­prove the sys­tem have made it even more ef­fi­cient, flex­i­ble, and valu­able to the many mil­lions of peo­ple who live in one of the most pop­u­lous cities in America, de­liv­er­ing not only wa­ter but also hun­dreds of megawatts of hy­dropower.

But in many ways, it was not only unscrupulous, but also short-sighted. Residents of the Owens Valley watched ranchland and farmland dry up as the water that had shaped their home was rerouted south. Native communities saw their homeland transformed, with access to gathering areas disrupted, places made unrecognizable, and cultural ties strained by changes they didn't choose. Wind picked up alkaline dust from dried lakebeds. Habitats were disrupted, and the birds that depended on these waters and wetlands lost part of what made this migration corridor work. It's easy to see why the aqueduct remains controversial, and why what we sometimes dismiss as "red tape" around major infrastructure is often completely justified due diligence. As engineers, and really, as humans, we have to try to account for costs that don't show up on a balance sheet, but can come back later as decades of lawsuits, mitigation, and restoration.

And even the aque­duc­t’s orig­i­nal the­sis (that there’s re­li­able snowmelt up there, and a grow­ing city down here) is start­ing to fal­ter. In re­cent decades, the moun­tains have de­liv­ered less pre­dictable runoff: more swings, more years when the tim­ing is wrong, and more un­cer­tainty about what normal” even means any­more. California’s cli­mate has al­ways moved in long cy­cles, but the mar­gin for er­ror is thin­ner now, and no one can say with much con­fi­dence when or if the mois­ture the state de­pends on will re­turn to its old pat­tern.

The hope­ful part is that this is ex­actly where en­gi­neer­ing makes a dif­fer­ence: at the messy in­ter­sec­tion of ge­ol­ogy, cli­mate, cul­ture, pol­i­tics, and hu­man need. The Los Angeles Aqueduct is a case study in what we can build when we’re am­bi­tious, but also what hap­pens when we treat a land­scape like a ma­chine with only one out­put. The next era of wa­ter en­gi­neers can learn a lot from it.

...

Read the original on practical.engineering »

7 259 shares, 12 trendiness

The FSF doesn't usually sue for copyright infringement, but when we do, we settle for freedom — Free Software Foundation — Working together for free software

The Free Software Foundation (FSF), like many oth­ers, re­ceived a no­tice re­gard­ing set­tle­ment in the copy­right in­fringe­ment law­suit Bartz v. Anthropic. It is a class ac­tion law­suit claim­ing that Anthropic in­fringed copy­right by down­load­ing works in Library Genesis and Pirate Library Mirror datasets for pur­poses of train­ing large lan­guage mod­els (LLMs). According to the no­tice, the dis­trict court ruled that us­ing the books to train LLMs was fair use but left for trial the ques­tion of whether down­load­ing them for this pur­pose was le­gal. Apparently, the par­ties agreed to set­tle in­stead of wait­ing for the trial and they are now reach­ing out to po­ten­tial copy­right hold­ers to of­fer money in lieu of po­ten­tial dam­ages.

The FSF holds copy­rights to many pro­grams in the GNU Project, as well as to sev­eral books. We pub­lish all works that we hold copy­rights to un­der free (as in free­dom) li­censes. Among the works we hold copy­rights over is Sam Williams and Richard Stallman’s Free as in free­dom: Richard Stallman’s cru­sade for free soft­ware, which was found in datasets used by Anthropic as train­ing in­puts for their LLMs. It was pub­lished by O’Reilly and by the FSF un­der the GNU Free Documentation License (GNU FDL). This is a free li­cense al­low­ing use of the work for any pur­pose with­out pay­ment.

Obviously, the right thing to do is pro­tect com­put­ing free­dom: share com­plete train­ing in­puts with every user of the LLM, to­gether with the com­plete model, train­ing con­fig­u­ra­tion set­tings, and the ac­com­pa­ny­ing soft­ware source code. Therefore, we urge Anthropic and other LLM de­vel­op­ers that train mod­els us­ing huge datasets down­loaded from the Internet to pro­vide these LLMs to their users in free­dom. We are a small or­ga­ni­za­tion with lim­ited re­sources and we have to pick our bat­tles, but if the FSF were to par­tic­i­pate in a law­suit such as Bartz v. Anthropic and find our copy­right and li­cense vi­o­lated, we would cer­tainly re­quest user free­dom as com­pen­sa­tion.

...

Read the original on www.fsf.org »

8 254 shares, 10 trendiness

Oregon School Cell Phone Ban: ‘Engaged students, joyful teachers’

Gov. Tina Kotek vis­ited Estacada High School to hear how her cell phone ban has been go­ing. (Staff photo: Christopher Keizur)

Gov. Tina Kotek said she chose Estacada High for her visit be­cause of the pos­i­tive things hap­pen­ing within the dis­trict. (Staff photo: Christopher Keizur)

There was plenty of un­cer­tainty and de­bate about the ef­fec­tive­ness of a cell phone ban de­creed by ex­ec­u­tive or­der last sum­mer.

But at least in Estacada, the policy has earned two thumbs up, including approval from a "grumpy old teacher."

Jeff Mellema is a lan­guage arts teacher at Estacada High School. He has worked in the build­ing for 24 years, and he said the new pol­icy that pro­hibits stu­dents from us­ing their phones dur­ing the day has been a breath of fresh air.

"There is so much better discourse in my classroom, be it personal or academic," Mellema said. "Students can't avoid those conversations anymore with their phones."

"This ban has brought joy back to this old, grumpy teacher," he added with a smile.

That is the kind of feed­back Gov. Tina Kotek was hop­ing for as she vis­ited Estacada High School on Wednesday af­ter­noon, March 18. Her goal was to visit class­rooms, speak with ad­min­is­tra­tors, and meet with stu­dents one-on-one to hear about the ef­fec­tive­ness of her phone pol­icy.

"I knew when I put out the order, not everyone would love it from day one," Gov. Kotek said. "I appreciate all the feedback today."

"You have an amazing school. Go Rangers," she added with a smile.

For years, ed­u­ca­tors had re­ported that cell phones were dis­rup­tive in class­rooms and hin­dered ef­fec­tive teach­ing. Research sup­ported those anec­do­tal claims, show­ing that phones un­der­mine stu­dents’ abil­ity to fo­cus, even when they are just sit­ting on the desk, un­used.

So the gov­er­nor is­sued her ex­ec­u­tive or­der pro­hibit­ing cell phone use by stu­dents dur­ing the school day in Oregon’s K-12 pub­lic schools. To help im­ple­ment the ban, her of­fice worked with the Oregon Department of Education to share model poli­cies for schools that al­ready have pro­hi­bi­tions in place, as well as guid­ance on im­ple­men­ta­tion flex­i­bil­ity.

"We are grateful to have Governor Kotek here today to see with her own eyes the positive effect this has had," said Estacada Superintendent Ryan Carpenter. "We can now demand this expectation for our students' well-being and success."

Since that order was issued, every single public school district in Oregon has come into compliance with the ban.

"The goal is for every student to have the best opportunity to be successful," Kotek said. "They need to know how to talk to people, learn, and go out into the world."

The gov­er­nor vis­ited two class­rooms dur­ing her trip to Estacada High — Mr. Schaenman’s his­tory class and Mrs. Hannet’s al­ge­bra class. Gov. Kotek as­sured the kids that she still uses al­ge­bra in her day-to-day life, no mat­ter how un­likely that may seem to the young­sters.

In the class­rooms, she was able to take a straw poll around the cell phone ban and then get spe­cific, di­rect feed­back from the kids.

Overall, it was positive. The Rangers said they noticed changes in how they interact with teachers and peers. They don't feel that "siren's song" tug of their phones as often, and the changes are bleeding into everyday life as well — think fewer reminders to put phones away during family dinners. Phones had also led to issues around bullying and online toxicity during the school day.

There are some hic­cups. The stu­dents spoke about dif­fi­cul­ties in track­ing busy sched­ules. Many ath­letes re­lied on their phones for prac­tice times and lo­ca­tions. Some ad­vanced place­ment kids said the overzeal­ous pro­grams mon­i­tor­ing school lap­tops blocked ac­cess to needed re­sources for study­ing/​re­search­ing school­work. There is even a strange quirk with school-pro­vided tech that pre­vents them from ac­cess­ing their cal­cu­la­tors.

"Maybe the filters are too strong right now," Gov. Kotek said. "That is why we are working with the districts to best implement the policy."

The kids also weighed in on the debate around the extent of the ban. The two options bandied about in Salem were a "bell-to-bell" policy or a ban just inside classrooms. The latter would allow kids to use their phones during passing periods and lunch. Several advocated for that change.

That mir­rored the de­bate within the Oregon leg­is­la­ture. It ul­ti­mately led to a stale­mate and the need for Gov. Kotek’s ex­ec­u­tive rul­ing.

"When you make a decision like this, you don't know how it will ultimately work," Kotek told the students. "I appreciate you adapting to the situation and making it work for you."

While things could change in the fu­ture, the gov­er­nor is pleased with the early re­sults. The phone ban is here to stay.

Estacada School District is reveling in its status of public school "Golden Child."

The visit from the gov­er­nor is the lat­est feather in the cap of the rural, small dis­trict. In 2025, Estacada High had a 92.5% grad­u­a­tion rate. That is a stun­ning turn­around from a record-low of 38.5% in 2015. The dis­trict cred­its poli­cies aimed at re­tain­ing tal­ented teach­ers and em­pow­er­ing stu­dents to take a more ac­tive role in their learn­ing.

"We are proud of these results," Carpenter said. "This is a reflection and reward for a ton of hard work. This district literally changed its stars to be seen as a true academic powerhouse in Oregon."

That mindset continues with the cell phone ban. Like many others, Estacada had a version of this in place before the official edict. But the governor's push empowered administrators and teachers to fully embrace the ban.

"Any policy is only as good as the teachers who enforce it," Carpenter said.

In craft­ing its pol­icy, Estacada in­cor­po­rated feed­back from par­ents. That led to some key de­ci­sions around the cell phone ban. Rather than use pouches or lock­ers, stu­dents are al­lowed to keep their phones safely stored in their back­packs. That was for two rea­sons — it al­lows stu­dents to con­tact loved ones dur­ing emer­gen­cies, and many par­ents use phone track­ers to keep tabs on their kids.

The dis­trict has also leaned on di­rect, im­me­di­ate com­mu­ni­ca­tion. The flow of in­for­ma­tion reaches par­ents di­rectly, avoid­ing some of the mis­com­mu­ni­ca­tion that oc­curred in the past.

"Even I'm surprised by the impact this has had," Kotek said. "I'm thankful for the educators who took up the charge when I said we've got to do this."

"We can model what Estacada is doing for other districts across the state," she added.

...

Read the original on portlandtribune.com »

9 186 shares, 12 trendiness

Java Is Fast. Your Code Might Not Be.

Part 1 of 3 in the Java Performance Optimization se­ries. Parts 2 and 3 com­ing soon.

I built a Java order-processing app for a talk I gave at DevNexus a couple of weeks ago. The app worked. Tests passed. I ran a load test and collected a Java Flight Recorder (JFR) recording.

Before any changes: 1,198ms elapsed time, 85,000 or­ders per sec­ond, peak heap sit­ting at just over 1GB, 19 GC pauses.

After: 239ms. 419,000 or­ders per sec­ond. 139MB heap. 4 GC pauses.

Same app. Same tests. Same JDK. No ar­chi­tec­tural changes. And those num­bers get a lot more mean­ing­ful when you con­sider that code like this does­n’t run on a sin­gle box in pro­duc­tion. It runs across a fleet.

In Part 2 I’ll walk through the pro­fil­ing data be­hind those num­bers: the flame graph, which meth­ods were ac­tu­ally hot, and what changed when we fixed them. Before we get there, you need to know what kinds of things we were ac­tu­ally fix­ing.

The prob­lems were pat­terns that show up in real code­bases. They com­pile fine, they sneak through code re­view, and they’re the kind of thing you could miss with­out pro­fil­ing data telling you where to look. Here are eight of them.

TL;DR: Fixing anti-pat­terns like these turned a Java app that took 1,198ms into one that took 239ms. Here are some to look for and fix:

String concatenation with + inside loops — O(n²) character copying

Streams launched inside loops — redundant full passes over the same data

String.format() on hot paths — the slowest way to build a string

Boxed primitives (Long, Integer, Double) as accumulators — silent heap churn

Exceptions used as control flow — fillInStackTrace() is expensive

Too-broad synchronization — one lock becomes the bottleneck

Expensive objects (ObjectMapper, DateTimeFormatter) recreated on every call

Blocking inside synchronized — pinned virtual-thread carriers

After fix­ing: 5x through­put, 87% less heap, 79% fewer GC pauses. Same app, same tests, same JDK.

```java
String report = "";
for (String line : logLines) {
    report = report + line + "\n";
}
```

This code looks good, right? The prob­lem is what String im­mutabil­ity means in prac­tice.

Every time you use +, Java cre­ates a brand new String ob­ject, a full copy of all pre­vi­ous con­tent with the new bit ap­pended. The old one gets dis­carded. This hap­pens every sin­gle it­er­a­tion.

The char­ac­ters be­ing copied scale as O(n²). If you have 10,000 lines, it­er­a­tion 1 copies roughly noth­ing, it­er­a­tion 5,000 copies 5,000 char­ac­ters worth of ac­cu­mu­lated con­tent, it­er­a­tion 10,000 copies all of it. BellSoft ran JMH bench­marks on ex­actly this and showed that when n grows by 4x, the loop-con­cate­na­tion ver­sion slows down by more than 7x, much worse than lin­ear growth.
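The quadratic blowup is easy to sanity-check without a benchmark harness. Here's a back-of-the-envelope sketch (my own model, not BellSoft's JMH code; the class and method names are made up) that just counts how many characters each approach has to copy:

```java
public class ConcatCost {
    // Characters copied by repeated + concatenation: every iteration
    // allocates a new String that re-copies everything accumulated so far.
    static long copiedByConcat(int lines, int lineLen) {
        long copied = 0, accumulated = 0;
        for (int i = 0; i < lines; i++) {
            accumulated += lineLen;
            copied += accumulated; // the new String copies all accumulated chars
        }
        return copied;
    }

    // StringBuilder writes each character into its buffer exactly once.
    static long copiedByBuilder(int lines, int lineLen) {
        return (long) lines * lineLen;
    }

    public static void main(String[] args) {
        System.out.println(copiedByConcat(10_000, 20));  // 1000100000
        System.out.println(copiedByBuilder(10_000, 20)); // 200000
    }
}
```

For 10,000 twenty-character lines, the concatenation loop copies roughly a billion characters versus StringBuilder's 200,000, which is the same worse-than-linear growth the JMH numbers show.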

```java
StringBuilder sb = new StringBuilder();
for (String line : logLines) {
    sb.append(line).append("\n");
}
String report = sb.toString();
```

StringBuilder works off a sin­gle mu­ta­ble char­ac­ter buffer. One al­lo­ca­tion. Every ap­pend writes into that buffer. One toString() at the end.

Note: Since JDK 9, the compiler is smart enough to optimize "Order: " + id + " total: " + amount on a single line. But that optimization doesn't carry into loops. Inside a loop, you still get a new StringBuilder created and thrown away on every iteration. You have to declare it before the loop yourself, like the fix above shows.

```java
for (Order order : orders) {
    int hour = order.timestamp().atZone(ZoneId.systemDefault()).getHour();
    long countForHour = orders.stream()
        .filter(o -> o.timestamp().atZone(ZoneId.systemDefault()).getHour() == hour)
        .count();
    ordersByHour.put(hour, countForHour);
}
```

This looks rea­son­able. You’re group­ing or­ders by hour. But look at what’s hap­pen­ing: for each or­der, you’re stream­ing over the en­tire list to count how many or­ders share that hour. If you have 10,000 or­ders, that’s 10,000 it­er­a­tions times 10,000 stream el­e­ments. That’s 100 mil­lion com­par­isons for what should be a sin­gle pass.

In my demo app, this ex­act pat­tern was the sin­gle largest CPU hotspot. It ac­counted for nearly 71% of CPU stack sam­ples in the JFR record­ing.

```java
for (Order order : orders) {
    int hour = order.timestamp().atZone(ZoneId.systemDefault()).getHour();
    ordersByHour.merge(hour, 1L, Long::sum);
}
```

One pass. O(n). Each order increments its hour's count directly. You could also use Collectors.groupingBy(..., Collectors.counting()) to do it in a single stream pipeline, but the merge approach is clear and avoids the overhead of creating a stream at all.

If you see a .stream() call in­side a loop body, that’s a sig­nal to pause and check whether you’re do­ing re­dun­dant work.
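For reference, here is a sketch of the groupingBy/counting variant mentioned above. The Order record is a hypothetical stand-in with just an hour field, since the article doesn't show its full Order type:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByHour {
    // Minimal stand-in for the article's Order; only the hour matters here.
    record Order(int hour) {}

    static Map<Integer, Long> countByHour(List<Order> orders) {
        // Single pass: group orders by hour, counting each group.
        return orders.stream()
                .collect(Collectors.groupingBy(Order::hour, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(new Order(9), new Order(9), new Order(14));
        System.out.println(countByHour(orders)); // {9=2, 14=1}
    }
}
```

Same O(n) behavior as the merge loop; pick whichever reads better in context.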

```java
public String buildOrderSummary(String orderId, String customer, double amount) {
    return String.format("Order %s for %s: $%.2f", orderId, customer, amount);
}
```

String.format() tends to get rec­om­mended as the clean, read­able way to build strings. Yep, it’s read­able and it’s also the slow­est string-build­ing op­tion in Java when you’re call­ing it fre­quently.

Baeldung ran JMH benchmarks across every string concatenation approach in Java. String.format() came in last in every category. It has to parse the format string every call, run regex-based token matching, and dispatch through the full java.util.Formatter machinery. StringBuilder was consistently the fastest.

```java
return "Order " + orderId + " for " + customer + ": $" + String.format("%.2f", amount);
```

Use String.format() for the nu­meric for­mat­ting where you need it, and let the com­piler op­ti­mize the rest. Or just use a StringBuilder if you need full con­trol.

String.format() is fine for con­fig load­ing, startup code, er­ror mes­sages, any­where that runs in­fre­quently. Move it out of any­thing your pro­filer says is hot.

```java
Long sum = 0L;
for (Long value : values) {
    sum += value;
}
```

What’s ac­tu­ally hap­pen­ing at the JVM level:

```java
Long sum = Long.valueOf(0L);
for (Long value : values) {
    sum = Long.valueOf(sum.longValue() + value.longValue());
}
```

Each it­er­a­tion un­boxes sum to get a long, adds, then boxes the re­sult back into a new Long ob­ject. With a mil­lion el­e­ments, you’ve cre­ated a mil­lion Long ob­jects that the GC has to clean up. Each Long on a 64-bit JVM takes roughly 16 bytes on the heap. That’s 16MB of heap churn for what should be a sim­ple ad­di­tion loop.

```java
long sum = 0L; // primitive, not the wrapper
for (long value : values) {
    sum += value;
}
```

Where this tends to sneak in: ag­gre­ga­tion and pro­cess­ing loops. Summing met­rics, ac­cu­mu­lat­ing coun­ters, build­ing stats. Boxed types creep in be­cause some­one used Long in a col­lec­tion sig­na­ture some­where up­stream and no­body thought about what it costs down­stream in the loop. That can be le­git­i­mately easy to miss.

Watch for Integer, Long, or Double used as local loop variables or accumulators. Also watch for collections of boxed types, like List<Long> or Map<String, Integer>, in frequently-called code. Every .get() and .put() involves a box/unbox round trip that you're paying for silently.
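When a boxed List<Long> arrives through a signature you can't change, you can still keep the boxing out of the aggregation itself by dropping to a primitive stream. A small sketch (the method name is mine):

```java
import java.util.List;

public class SumPrimitives {
    // mapToLong switches to a primitive LongStream, so the sum itself
    // accumulates in a long: each element is unboxed once, and no new
    // wrapper objects are created per iteration.
    static long sum(List<Long> values) {
        return values.stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1L, 2L, 3L))); // 6
    }
}
```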

```java
public int parseOrDefault(String value, int defaultValue) {
    try {
        return Integer.parseInt(value);
    } catch (NumberFormatException e) {
        return defaultValue;
    }
}
```

If this method is called in a tight loop with a mean­ing­ful per­cent­age of non-nu­meric in­puts, you have a per­for­mance prob­lem that might not look like one.

The ex­pen­sive part is Throwable.fillInStackTrace(), which runs in­side the Throwable con­struc­tor every time an ex­cep­tion is cre­ated. It walks the en­tire call stack via a na­tive method and ma­te­ri­al­izes it into StackTraceElement ob­jects. The deeper your call stack, the more ex­pen­sive this is. Imagine a sit­u­a­tion in a frame­work like Spring where this can get very deep. Norman Maurer from the Netty pro­ject bench­marked this and the dif­fer­ence is sig­nif­i­cant. Baeldung’s JMH re­sults show that throw­ing an ex­cep­tion makes a method run hun­dreds of times slower than a nor­mal re­turn path.

This is­n’t the­o­ret­i­cal. There’s a real pro­duc­tion story of a Scala/JVM tem­plat­ing sys­tem that cut re­sponse time by 3x af­ter dis­cov­er­ing that a NumberFormatException was be­ing thrown on every field of every tem­plate ren­der. Every time a field name was be­ing tested to see if it was a nu­meric in­dex, it threw.

```java
public int parseOrDefault(String value, int defaultValue) {
    if (value == null || value.isBlank()) return defaultValue;
    for (int i = 0; i < value.length(); i++) {
        char c = value.charAt(i);
        if (i == 0 && c == '-') continue;
        if (!Character.isDigit(c)) return defaultValue;
    }
    try {
        return Integer.parseInt(value);
    } catch (NumberFormatException e) {
        return defaultValue;
    }
}
```

Or use NumberUtils.isParsable() from Apache Commons Lang if it’s al­ready on your class­path.

Updated: several HN commenters correctly pointed out that the fix above originally didn't include the try-catch, which meant overflow values and edge cases like a bare "-" would throw an unhandled exception. Updated to keep a try-catch around the final parseInt as a safety net. The pre-validation still avoids the expensive exception path for the vast majority of bad inputs, which is the point.

The principle: if invalid input is a routine case in your application (user-provided data, external feeds, anything you don't fully control), pre-validate explicitly. Exceptions are for genuinely unexpected conditions, not for "this might be in the wrong format."
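And if a hot path genuinely must throw, the cost can be trimmed at the source: since JDK 7, Throwable has a protected four-argument constructor whose writableStackTrace flag skips fillInStackTrace() entirely. A sketch (the class names are hypothetical):

```java
public class FastFail {
    // A stack-trace-free exception for hot paths where the throw site is
    // obvious and the trace would never be read.
    static class FastException extends RuntimeException {
        FastException(String message) {
            // enableSuppression = false, writableStackTrace = false:
            // fillInStackTrace() is never called.
            super(message, null, false, false);
        }
    }

    static int thrownTraceLength() {
        try {
            throw new FastException("bad input");
        } catch (FastException e) {
            return e.getStackTrace().length;
        }
    }

    public static void main(String[] args) {
        System.out.println(thrownTraceLength()); // 0: no stack was captured
    }
}
```

The trade-off is real: a caught FastException tells you nothing about where it came from, so this only belongs where the catch site sits right next to the throw.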

```java
public class MetricsCollector {
    private final Map<String, Long> counts = new HashMap<>();

    public synchronized void increment(String key) {
        counts.merge(key, 1L, Long::sum);
    }

    public synchronized long get(String key) {
        return counts.getOrDefault(key, 0L);
    }
}
```

Shared mu­ta­ble state needs pro­tec­tion. But syn­chro­nized on the whole method means only one thread can call ei­ther method at any given time. In a ser­vice han­dling real con­cur­rency, every thread call­ing in­cre­ment() queues up wait­ing for every other thread to fin­ish. The lock it­self be­comes the bot­tle­neck.

```java
private final ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();

public void increment(String key) {
    counts.computeIfAbsent(key, k -> new LongAdder()).increment();
}
```

ConcurrentHashMap han­dles con­cur­rent reads and writes with­out lock­ing the whole struc­ture. LongAdder is pur­pose-built for high-con­cur­rency in­cre­ment­ing. It dis­trib­utes the counter across in­ter­nal cells and out­per­forms AtomicLong un­der con­tention.

Worth call­ing out sep­a­rately: Collections.synchronizedMap() wrap­pers have the same broad-lock prob­lem, one lock for the en­tire map. ConcurrentHashMap is al­most al­ways the right re­place­ment.
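A quick way to convince yourself the lock-free version loses no updates under contention is to hammer one counter from several threads. A sketch (the names here are mine, not the article's):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class MetricsDemo {
    // Increment a shared named counter from several threads, then check
    // the total: no synchronized anywhere, and nothing gets lost.
    static long countedBy(int threads, int perThread) throws InterruptedException {
        ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counts.computeIfAbsent("orders", k -> new LongAdder()).increment();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return counts.get("orders").sum();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countedBy(8, 100_000)); // 800000
    }
}
```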

```java
public String serializeOrder(Order order) throws JsonProcessingException {
    return new ObjectMapper().writeValueAsString(order);
}
```

ObjectMapper is one of the most com­mon ex­am­ples of an ob­ject that looks cheap to cre­ate but is­n’t. Constructing one in­volves mod­ule dis­cov­ery, se­ri­al­izer cache ini­tial­iza­tion, and con­fig­u­ra­tion load­ing. It’s real work hap­pen­ing on every call here.

Same pattern with DateTimeFormatter.ofPattern("..."), new Gson(), new XmlMapper(). They're all designed to be constructed once and reused. Creating them in a hot method means paying that setup cost on every invocation.

```java
private static final ObjectMapper MAPPER = new ObjectMapper();

public String serializeOrder(Order order) throws JsonProcessingException {
    return MAPPER.writeValueAsString(order);
}
```

ObjectMapper is thread-safe once configured, so sharing a static final instance is fine. The DateTimeFormatter built-ins like DateTimeFormatter.ISO_LOCAL_DATE are already singletons. If you're calling DateTimeFormatter.ofPattern("...") in a hot method, move it to a constant.

The heuris­tic: if an ob­jec­t’s con­struc­tor does sub­stan­tial setup work and the ob­ject is state­less (or safely share­able) af­ter con­struc­tion, it should be a field or a con­stant, not a lo­cal vari­able.

This one is worth in­clud­ing if you’ve started us­ing vir­tual threads, in­tro­duced as a pro­duc­tion fea­ture in Java 21.

Virtual threads work by mount­ing onto a small pool of plat­form (OS) threads called car­rier threads. When a vir­tual thread blocks, wait­ing on I/O for ex­am­ple, the sched­uler un­mounts it from the car­rier, free­ing that car­rier to run some­thing else. That’s the whole scal­a­bil­ity story with vir­tual threads.

But there’s a catch. When a vir­tual thread en­ters a syn­chro­nized block and hits a block­ing op­er­a­tion while in­side it, it can’t be un­mounted. It pins the car­rier thread. That plat­form thread is now stuck wait­ing, un­able to serve other vir­tual threads, for as long as the block­ing op­er­a­tion takes.

```java
// This pattern can pin a carrier thread on JDK 21
public synchronized String fetchData(String key) throws IOException {
    return Files.readString(Path.of("/data/" + key)); // blocking I/O inside synchronized
}
```

If this hap­pens fre­quently enough, all your car­rier threads get pinned and your ap­pli­ca­tion stalls, even with thou­sands of vir­tual threads wait­ing to do work. Netflix ran into ex­actly this in pro­duc­tion and wrote a post about de­bug­ging it.

JFR actually tells you when this is happening. The jdk.VirtualThreadPinned event fires whenever a virtual thread blocks while pinned, and by default it only triggers when the operation takes longer than 20ms, so it's already filtered to the cases that actually matter.

```java
private final ReentrantLock lock = new ReentrantLock();
```
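The post is cut off right after that field declaration, but the ReentrantLock version of fetchData presumably continues along these lines. The baseDir constructor parameter is my addition to keep the sketch self-contained; the article's original read from a fixed /data/ path:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.locks.ReentrantLock;

public class DataFetcher {
    private final ReentrantLock lock = new ReentrantLock();
    private final Path baseDir; // injected here so the sketch is testable

    public DataFetcher(Path baseDir) {
        this.baseDir = baseDir;
    }

    // Same blocking read as before, but guarded by a ReentrantLock: a virtual
    // thread that blocks here can be unmounted instead of pinning its carrier.
    // (On JDK 24+, JEP 491 removes most synchronized pinning anyway.)
    public String fetchData(String key) throws IOException {
        lock.lock();
        try {
            return Files.readString(baseDir.resolve(key));
        } finally {
            lock.unlock();
        }
    }
}
```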

...

Read the original on jvogel.me »

10 178 shares, 9 trendiness

28 April 2025 Blackout

The final report of the Expert Panel on the 28 April 2025 blackout in continental Spain and Portugal identifies the causes of the blackout and outlines recommendations to strengthen the resilience of Europe's interconnected electricity system. It was prepared by a technical Expert Panel of 49 members, including representatives from Transmission System Operators (TSOs), Regional Coordination Centres (RCCs), ACER and National Regulatory Authorities (NRAs), and was chaired by experts from two unaffected TSOs.

The investigation concludes that the blackout resulted from a combination of many interacting factors, including oscillations, gaps in voltage and reactive power control, differences in voltage regulation practices, rapid output reductions and generator disconnections in Spain, and uneven stabilisation capabilities. These factors led to fast increases of voltage and cascading generation disconnections in Spain, resulting in the blackout in continental Spain and Portugal.

Based on these findings, the Expert Panel sets out recommendations addressing each of the factors identified in the report to help prevent similar events in the future. These include strengthened operational practices, improved monitoring of system behaviour and closer coordination and data exchange among power system actors. The findings of the investigation also underscore the need for regulatory frameworks to adapt in order to support the evolving nature of the power system.

The 28 April blackout was a first-of-its-kind event, and the recommendations aim to strengthen system resilience with solutions that are already technologically deployable. This blackout highlights how developments at the local level can have system-wide implications and underlines the importance of maintaining strong links between local and European system behaviour and coordination, while ensuring that market mechanisms, regulatory frameworks and energy policies remain aligned with the physical limits of the system.

Download the final report of the Expert Panel

On 28 April 2025, at 12:33 CEST, the power systems of continental Spain and Portugal experienced a total blackout. A small area in Southwest France close to the Spanish border experienced disruptions for a very short duration, and several industrial consumers and generators were affected. The rest of the European power system did not experience any significant disturbance as a result of the incident. This was the most severe blackout incident on the European power system in over 20 years, and the first ever of its kind.

Figure 1 — Geographic area affected by the incident of 28 April 2025.

Following the blackout, on 12 May 2025, ENTSO-E set up an Expert Panel in line with Article 15(5) of the Commission Regulation (EU) 2017/1485 of 2 August 2017 establishing a guideline on electricity transmission system operation (SO GL) and the Incident Classification Scale (ICS) Methodology. The ICS Methodology is the framework for classifying and reporting incidents in the power system and for organising the investigation of such incidents, and is especially relevant to the work of the Expert Panel. It is noted that the investigation of the Expert Panel was performed in line with the version of the ICS Methodology applicable at the time of the incident. Under the legal requirements of both SO GL and the ICS Methodology, when an incident is classified according to the ICS Methodology criteria as a scale 3 incident — blackout — the Expert Panel is tasked to investigate the root causes of the incident, produce a comprehensive analysis, and make recommendations in a final report, published on 20 March 2026.

The Expert Panel consists of representatives from TSOs, the Agency for the Cooperation of Energy Regulators (ACER), National Regulatory Authorities (NRAs), and Regional Coordination Centres (RCCs). The Panel is led by experts from TSOs not directly affected by the incident and includes experts from both affected and non-affected TSOs. The Expert Panel is led by Klaus Kaschnitz (APG, Austria) and Richard Balog (MAVIR, Hungary).

Olivier Arrivé — as chair of the System Operation Committees

Robert Koch — as convenor of the Steering Group Resilient Operation

Rafal Kuczynski — as convenor of the Regional Group Continental Europe

Experts from TSOs and RCCs participating in the Expert Panel

Experts from ACER and NRAs participating in the Expert Panel

On 12 May 2025, the Expert Panel initiated its investigation into the causes of the blackout. In accordance with the ICS Methodology, the investigation is conducted in two phases.

In the first phase of the investigation, the Expert Panel collected and analysed all data on the incident available at the time to reconstruct the events of 28 April. At the end of this first phase, the Expert Panel delivered its factual report, released on 3 October 2025. It describes the system conditions that prevailed on 28 April 2025, provides a detailed sequence of events during the incident and describes how the system was restored after the incident. The report highlights the exceptional and unprecedented nature of this incident: the first time a cascading series of disconnections of generation components along with voltage increases has been part of the sequence of events leading to a blackout in the Continental Europe Synchronous Area.

Download the factual report of the Expert Panel

In the second phase, which started immediately after the finalisation of the factual report, the Panel focused on the identification and analysis of the root causes of the incident. The Expert Panel specifically evaluated cascading disconnections of generation in the system, the voltage control and the oscillations' mitigation measures. The Panel also assessed the performance of generators in regard to protection settings and contribution to voltage control, as well as the performance of the system defence plans, and analysed the various steps of the restoration phase.

Based on the findings, the Panel sets out recommendations in its final report addressing each of the factors identified to help prevent similar events in the future.

Download the final report of the Expert Panel

A ded­i­cated joint work­shop of the System Operations European Stakeholder Committee (SO ESC) and of the Grid Connection European Stakeholder Committee (GC ESC), chaired by ACER, was or­gan­ised on 18 July 2025 to in­form the stake­hold­ers on the progress of the in­ves­ti­ga­tion of the Expert Panel. A sec­ond joint work­shop of SO ESC and GC ESC took place on 13 October 2025 to dis­cuss the fac­tual re­port. Detailed in­for­ma­tion about the role, com­po­si­tion and work of these two Committees is avail­able on the ENTSO-E web­site here.

...

Read the original on www.entsoe.eu »
