10 interesting stories served every morning and every evening.




1 653 shares, 68 trendiness

Chuck Norris, Action Icon and ‘Walker, Texas Ranger’ Star, Dies at 86

Chuck Norris, the martial arts champion who became an iconic action star and led the hit series "Walker, Texas Ranger," has died. He was 86.

Norris was hospitalized in Hawaii on Thursday, and his family posted a statement Friday saying that he died that morning. "While we would like to keep the circumstances private, please know that he was surrounded by his family and was at peace," his family wrote.


"To the world, he was a martial artist, actor, and a symbol of strength. To us, he was a devoted husband, a loving father and grandfather, an incredible brother, and the heart of our family," the statement continued. "He lived his life with faith, purpose, and an unwavering commitment to the people he loved. Through his work, discipline, and kindness, he inspired millions around the world and left a lasting impact on so many lives."

As an action star, Norris had a degree of credibility that most others could not match. Not only did he appear opposite the legendary Bruce Lee in the 1972 film "The Way of the Dragon" (aka "Return of the Dragon"), but he was a genuine martial arts champion who held a black belt in judo, a 3rd degree black belt in Brazilian Jiu-Jitsu, a 5th degree black belt in karate, an 8th degree black belt in taekwondo, a 9th degree black belt in Tang Soo Do and a 10th degree black belt in Chun Kuk Do.

Norris was extremely prolific in the late 1970s and '80s, starring in "The Delta Force" and "Missing in Action" films, "Good Guys Wear Black" (1978), "The Octagon" (1980), "Lone Wolf McQuade" (1983), "Code of Silence" (1985) and "Firewalker" (1986).

Norris joined a bevy of other action stars in the Sylvester Stallone-directed "The Expendables 2" in 2012 after an absence from the screen of seven years.

While he scored high on cred­i­bil­ity, Norris did not leaven his work with hu­mor the way Arnold Schwarzenegger, Bruce Willis and Jackie Chan did. He was nev­er­the­less the ac­tion star of choice for those seek­ing an all-Amer­i­can icon.

In 1984, Norris starred in "Missing in Action," the first in a series of films centered around the rescue of American POWs purportedly still held after being captured during the Vietnam War. (Norris' younger brother Wieland had been killed while serving in Vietnam, and the actor dedicated his "Missing in Action" films to his brother's memory, but critics of Norris and producer Cannon Films maintained that the films borrowed too heavily from the central conceit of Stallone's highly successful "Rambo" films.)

As Norris' movie career began to wane, he made a timely move to television, starring in the CBS series "Walker, Texas Ranger," inspired by his film "Lone Wolf McQuade." The program ran from 1993-2001, and the actor reprised the role of Cordell Walker in the TV movies "Walker Texas Ranger 3: Deadly Reunion" (1994) and "Walker, Texas Ranger: Trial by Fire" (2005). (Also in 2005 Norris made the last film in which he starred, the straight-to-DVD "The Cutter.")

In his later years, Norris was portrayed in memes documenting fictional, frequently absurd feats associated with him, such as "Chuck Norris kills 100% of germs" and "Paper beats rock, rock beats scissors, and scissors beats paper, but Chuck Norris beats all 3 at the same time." He also appeared in infomercials for workout equipment and became increasingly outspoken as a political conservative.

Carlos Ray Norris was born in Ryan, Okla.; his father served as a soldier in World War II. In 1958 he joined the Air Force as an Air Policeman (AP, analogous to the Army's MPs). While serving at Osan Air Base in South Korea, Norris first acquired the nickname "Chuck" and began his training in Tang Soo Do (aka tangsudo), leading to his achievements in other martial arts and to his development of hybrid style Chun Kuk Do ("The Universal Way"). He returned to the U.S. and served as an AP at March Air Force Base in California.

After his 1962 dis­charge, Norris worked for aero­space com­pany Northrop and opened a chain of karate schools; celebrity clients at the schools in­cluded Steve McQueen, Chad McQueen, Bob Barker, Priscilla Presley, Donny Osmond and Marie Osmond.

Norris made his acting debut in an uncredited role in the 1969 cult Matt Helm film "The Wrecking Crew," starring Dean Martin. Norris met Bruce Lee at a martial arts demonstration in Long Beach, Calif., and played the nemesis of Lee's character in the 1972 movie "The Way of the Dragon" (retitled "Return of the Dragon" for U.S. distribution). In 1974 McQueen spurred Norris to begin taking acting classes at MGM.

Norris first starred in the 1977 action film "Breaker! Breaker!," in which he played a trucker searching for his brother, who has disappeared in a town run by a corrupt judge.

The actor proved his box office mettle with his subsequent films, "Good Guys Wear Black" (1978), "The Octagon" (1980), "An Eye for an Eye" (1981) and "Lone Wolf McQuade."

Norris began starring in movies for Cannon Films in 1984. Over the next four years, he became Cannon's most prominent star, appearing in eight films, including the three "Missing in Action" films, "Code of Silence" (qualitatively, one of his best films), the two "Delta Force" films and "Firewalker." Norris' brother Aaron Norris produced several of these films, and also became a producer on "Walker, Texas Ranger."

A long­time sup­porter of con­ser­v­a­tive politi­cians, he wrote sev­eral books with Christian and pa­tri­otic themes.

Norris was twice mar­ried, the first time to Di­anne Holechek from 1958 un­til their di­vorce in 1988.

He is sur­vived by sec­ond wife Gena O’Kelley, whom he mar­ried in 1998; three sons, Eric, Mike and Dakota, daugh­ters Danilee and Dina; and a num­ber of grand­chil­dren.

...

Read the original on variety.com »

2 398 shares, 81 trendiness

Fake Compliance as a Service


Reporters who want to get in touch: Drop an email at aicp­nay@pro­ton.me. In case my Proton ac­count gets blocked or you don’t get an an­swer, tweet us­ing the hash­tag #AICPNAY with con­tact de­tails and I’ll do my best to get in touch with you.

At its core, this article argues that Delve fakes compliance: it creates the appearance of compliance without the underlying substance.

Delve achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance. Their "US-based auditors" are Indian certification mills operating through empty US shells and mailbox agents. Auditors breach independence rules by signing off anyway, leaving companies unknowingly exposed to criminal liability under HIPAA and hefty fines under GDPR.

Delve hands cus­tomers fab­ri­cated ev­i­dence of board meet­ings, tests, and processes that never hap­pened. The plat­form forces com­pa­nies to choose be­tween adopt­ing fake ev­i­dence or per­form­ing mostly man­ual work with lit­tle real au­toma­tion or AI. It pro­duces au­dit re­ports that falsely claim in­de­pen­dent ver­i­fi­ca­tion while Delve it­self ef­fec­tively wears the au­di­tor hat, gen­er­at­ing iden­ti­cal re­ports for all clients, in­clud­ing Lovable, Bland, Cluely, and NASDAQ-traded Duos Edge. It hosts trust pages that list se­cu­rity mea­sures that were never im­ple­mented.

When clients ask hard ques­tions, Delve dodges. They de­mand calls where founders charm, promise, and name­drop Lovable, Bland, and every Fortune 500” as proof. When that fails, donuts ar­rive.

Preface - How it all started
What the leak and our research reveals

Two months ago, an email went out to a few hun­dred Delve clients in­form­ing them that Delve had leaked their au­dit re­ports, along­side other con­fi­den­tial in­for­ma­tion, through a Google spread­sheet that was pub­licly ac­ces­si­ble. This email also claimed that Delve’s au­dit re­ports were fraud­u­lent. The com­pany I work for was one of those clients that re­ceived that email.

Instead of providing clarification and being transparent, Delve's leadership went into deny-and-deflect mode. When we asked them directly for clarification, they flat-out denied everything.

This was se­ri­ous, as it po­ten­tially in­volved nearly all of Delve’s clients and raised ques­tions about the va­lid­ity of the com­pli­ance re­ports Delve’s clients had re­ceived.

Multiple col­leagues in my net­work re­ceived the same email. Having the shared ex­pe­ri­ence of be­ing un­der­whelmed with the Delve ex­pe­ri­ence, and hav­ing the over­all sense that some­thing fishy was go­ing on, we de­cided to pool re­sources and in­ves­ti­gate to­gether. This ar­ti­cle is the re­sult of that col­lab­o­ra­tion.

The reason we felt the way we did was how little actual work any of us had to perform to become "compliant," combined with a product practically devoid of any real AI. It mostly felt like a SOC 2 template pack with a thin SaaS platform wrapper where you simply adopt and sign all templated documents. No custom tailoring, no AI guidance, no real automation. Just pre-populated forms that required you to click "save."

Some of us have gone through com­pli­ance be­fore and felt there was a huge mis­match be­tween our past ex­pe­ri­ence and our ex­pe­ri­ence with Delve.

In this ar­ti­cle I will walk you through a typ­i­cal ex­pe­ri­ence with Delve, the leak that ex­posed their op­er­a­tion, and how it re­vealed the fraud we un­cov­ered. I will show, among other things, how for each of the be­low cat­e­gories:

* Delve breaches AICPA/ISO rules by act­ing as au­di­tor, gen­er­at­ing pre-drafted as­sess­ments, tests, and con­clu­sions

* Delve re­lies on au­dit firms that rub­ber stamp re­ports be­cause gen­uine in­de­pen­dent ver­i­fi­ca­tion would ex­pose the ev­i­dence as fab­ri­cated or de­fi­cient

* Delve mis­leads clients by claim­ing re­ports are pro­duced by US-based CPA firms, when in re­al­ity they are pro­duced by Delve and rub­ber stamped by Indian cer­ti­fi­ca­tion mills

* Delve leads clients to be­lieve they are com­pli­ant when they are not

* Delve helps clients mis­lead the pub­lic by host­ing trust pages that con­tain se­cu­rity mea­sures that were never im­ple­mented

* Delve lies to clients when di­rectly ques­tioned, deny­ing doc­u­mented facts about the leak and re­port gen­er­a­tion

* Delve mar­kets AI-driven au­toma­tion while the prod­uct is prac­ti­cally de­void of AI, re­ly­ing on pre-pop­u­lated tem­plates, man­ual forms, and fab­ri­cated ev­i­dence

* Delve’s prod­uct is un­able to get com­pa­nies truly com­pli­ant

* Delve’s plat­form forces com­pa­nies to choose be­tween adopt­ing fake ev­i­dence or per­form­ing mostly man­ual work with lit­tle real au­toma­tion

* Unable to de­liver real com­pli­ance through its plat­form, Delve de­pends on fraud­u­lent au­di­tors who rub­ber stamp re­ports for clients, falling back on off-plat­form man­ual work with ex­ter­nal vCISOs and good au­di­tors only when com­plaints or pro­file threaten its busi­ness in­ter­ests

* Delve’s process re­sults in clients vi­o­lat­ing GDPR and HIPAA re­quire­ments, ex­pos­ing them to crim­i­nal li­a­bil­ity un­der HIPAA and fines up to 4% of global rev­enue un­der GDPR

For any prospect con­sid­er­ing Delve or any cur­rent Delve client:

Delve loves claiming this is all an attempt by their "jealous competitors" to fraudulently discredit them. When clients ask concrete questions, they dodge the question and instead coax you into getting on a call with them, where they charm you and tell you everything you want to hear. They'll even throw in some donuts.

Another tactic they frequently employ is promising that issues won't arise again because their old process, product or auditor has been, or is about to be, replaced with something better whose results are just around the corner. This is a deflection and delaying technique. Whenever they start being frequently called out for using a particular auditor, they'll switch to another equally dodgy one that hasn't been flagged yet. If they're called out on their deficient product, they point to a superior tier or a new feature on the timeline that will solve everything.

Expressing un­hap­pi­ness and a de­sire to leave will lead to Delve pair­ing you with an ex­ter­nal vir­tual CISO that will help you do com­pli­ance right. You should know that this means you will have to do all the work man­u­ally.

If you are con­cerned about Delve’s con­duct and prac­tices, ask them ques­tions in writ­ing. Do not al­low them to de­flect. Do not get on a call with them. In the clos­ing words at the end of this ar­ti­cle you’ll find more ad­vice.

All in­for­ma­tion con­tained in this ar­ti­cle can be re­pro­duced by con­sult­ing pub­lic sources and hav­ing ac­cess to the Delve plat­form.

All screenshots and information are current as of mid-January 2026.

Here we set the stage. We’ll quickly list all par­ties in­volved, and will pro­vide some back­ground con­text that is use­ful to un­der­stand the rest of this ar­ti­cle.

High-profile companies like those listed in the image above, and hundreds of others, are affected by this. Also affected are companies that partner with Delve's clients, having been misinformed about the risks involved in partnering with those clients.

Many of those com­pa­nies process PHI of mil­lions of US cit­i­zens on a daily ba­sis. Some of those even serve na­tional de­fense in­ter­ests.

The au­dit firms listed in the il­lus­tra­tion above were iden­ti­fied dur­ing our re­search process, but are not nec­es­sar­ily all au­dit firms used by Delve.

From what we were able to es­tab­lish, 99%+ of Delve’s clients went through ei­ther Accorp or Gradient over the past 6 months.

In the wake of Delve’s leak in December, Delve is re­ported to have switched to Glocert as their pri­mary ISO 27001 au­dit­ing firm.

* Karun Kaushik and Selin Kocalar - The founders of Delve

The above in­di­vid­u­als know­ingly par­tic­i­pated in Delve’s de­lib­er­ate mis­con­duct re­gard­ing au­dit prac­tices.

Delve is a com­pli­ance com­pany. They help busi­nesses get cer­ti­fied for frame­works like SOC 2, ISO 27001, HIPAA, and GDPR. Companies need these cer­ti­fi­ca­tions to prove they han­dle data in a se­cure way and to un­lock deals with larger cus­tomers who re­quire them.

Compliance has tra­di­tion­ally been a time-con­sum­ing process that in­volved lots of spread­sheets. It used to be man­ual, ex­pen­sive and slow.

To give you an idea of what this is all about, we will primarily focus on SOC 2 in this article. SOC 2 is the most commonly pursued framework in the US. Practically all tech companies that sell to enterprises are expected to be "SOC 2 compliant," which basically means they've had a SOC 2 audit performed in the last year.

Getting a clean SOC 2 re­port means hir­ing a CPA firm to re­view your se­cu­rity con­trols. If they suc­cess­fully ver­ify the se­cu­rity you claim to have, through a lot of ev­i­dence you pro­vide them, they is­sue a re­port say­ing your se­cu­rity mea­sures are sound. This re­port be­comes proof you can show cus­tomers and in­vestors.

SOC 2 and ISO 27001, the European coun­ter­part to SOC 2, are vol­un­tary frame­works. HIPAA and GDPR are not.

HIPAA ap­plies to any com­pany han­dling health records in the US. Penalties are se­vere, with will­ful ne­glect pun­ish­able by crim­i­nal charges and prison time.

GDPR cov­ers any com­pany pro­cess­ing data of EU res­i­dents, re­gard­less of where the com­pany is based. Fines run up to 4% of global an­nual rev­enue or 20 mil­lion eu­ros, whichever is higher.
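For readers who want to see how that ceiling works in practice, the upper-tier fine rule can be sketched in a few lines. This is an illustrative sketch of the "4% or €20 million, whichever is higher" formula only; the function name and example revenues are mine, not from the article.

```python
def gdpr_max_fine(global_annual_revenue_eur: float) -> float:
    """Upper-tier GDPR administrative fine ceiling:
    up to EUR 20 million or 4% of global annual revenue, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# A company with EUR 1B revenue faces up to EUR 40M; the floor still
# applies to a small startup with EUR 5M revenue.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
print(gdpr_max_fine(5_000_000))      # 20000000.0
```

Note that the revenue-based prong means the ceiling scales with company size rather than being a fixed amount.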

These frame­works carry the force of law be­cause they pro­tect in­for­ma­tion peo­ple can­not eas­ily pro­tect them­selves: med­ical his­to­ries, ge­netic data, bio­met­ric iden­ti­fiers, lo­ca­tion pat­terns, and the full record of their dig­i­tal lives.

The com­pa­nies that help other com­pa­nies be­come com­pli­ant through au­toma­tion are called GRC au­toma­tion plat­forms.

Delve was founded in 2023 by Karun Kaushik and Selin Kocalar, both Forbes 30 Under 30 mem­bers and MIT dropouts who met as fresh­men. They started with a med­ical AI scribe, piv­oted to com­pli­ance af­ter hit­ting HIPAA headaches them­selves, and went through Y Combinator in 2024.

In July 2025, Delve raised $32 mil­lion in Series A fund­ing led by Insight Partners. Before that they had raised a $3.3 mil­lion seed round and went through Y Combinator.

Delve's pitch is speed through AI. They claim to get companies compliant in days rather than months, using what they call agentic AI through an "AI-native" platform.

Their mar­ket­ing promises AI agents that au­to­mat­i­cally col­lect ev­i­dence, write re­ports, and mon­i­tor com­pli­ance gaps with­out hu­man busy­work.

The re­al­ity, as this ar­ti­cle will show, is dif­fer­ent.

SOC 2 au­dits op­er­ate un­der strict in­de­pen­dence re­quire­ments de­signed to pre­serve trust in the at­tes­ta­tion process. These rules ex­ist pre­cisely to pre­vent the kind of con­duct this ar­ti­cle ex­poses. While Delve of­fers com­pli­ance ser­vices for HIPAA, GDPR, and ISO 27001, this ar­ti­cle fo­cuses pri­mar­ily on SOC 2. The rules sur­round­ing those other frame­works would re­quire their own de­tailed treat­ment; the goal here is to es­tab­lish a clear pat­tern in Delve’s be­hav­ior, not to cat­a­log every pos­si­ble reg­u­la­tory vi­o­la­tion.

The fundamental principle is simple: the party implementing controls cannot be the party attesting to their effectiveness. AICPA's Code of Professional Conduct states that members must "accept the obligation to act in a way that will serve the public interest, honor the public trust, and demonstrate a commitment to professionalism." When accountants cannot be expected to make truthful representations, "we lose the ability to assess any public or private company's actual performance."

For SOC 2 specif­i­cally, AT-C Section 205 re­quires prac­ti­tion­ers to main­tain in­de­pen­dence in both fact and ap­pear­ance through­out the en­gage­ment. The prac­ti­tioner must not as­sume man­age­ment re­spon­si­bil­i­ties or act as an ad­vo­cate for the sub­ject mat­ter be­ing ex­am­ined.

Under AT-C Section 315, auditors must seek to "obtain reasonable assurance that the entity complied with the specified requirements, in all material respects, including designing the examination to detect both intentional and unintentional material noncompliance." This requires independent design of test procedures, independent evaluation of evidence, and independent formation of conclusions.

The au­di­tor’s re­port must rep­re­sent their own pro­fes­sional judg­ment, not pre-writ­ten con­clu­sions pro­vided by the en­tity be­ing au­dited or its plat­form ven­dor.

Delve’s model in­verts this struc­ture. By gen­er­at­ing au­di­tor con­clu­sions, test pro­ce­dures, and fi­nal re­ports be­fore any in­de­pen­dent re­view oc­curs, Delve places it­self in the role of both im­ple­menter and ex­am­iner. This is not a tech­ni­cal­ity. It is a struc­tural fraud that in­val­i­dates the en­tire at­tes­ta­tion.

For read­ers who wish to ver­ify these re­quire­ments:

This story has many threads, and I strug­gled with where to start. Do I lead with the leaked doc­u­ments? The au­di­tor shell game? The fake AI?

In the end, I felt that start­ing with my com­pa­ny’s ex­pe­ri­ence, which il­lus­trates many of the points I make and prove later on, was the most ef­fec­tive way to get the pic­ture across.

I col­lab­o­rated on this ar­ti­cle with users from other Delve clients. We com­pared notes. The pat­terns I de­scribe be­low showed up across all of our ac­counts. Unless men­tioned oth­er­wise, noth­ing here is cherry-picked. Later sec­tions dis­sect spe­cific mech­a­nisms.

This sec­tion shows what it is like to be a Delve client.

Delve goes all out dur­ing the sales process. Through their mar­ket­ing and sales process they con­tin­u­ously em­pha­size be­ing the fastest to get com­pa­nies com­pli­ant, thanks to their AI. They re­peat­edly em­pha­size the im­pres­sive com­pa­nies they work with, and how they part­ner with the best and most re­spected US-based au­dit firms, and that their work is ac­cepted by Fortune 500 com­pa­nies.

Within the first few min­utes of the demo call we did with Delve, they had al­ready men­tioned how com­pa­nies like Lovable, Bland, WisprFlow and hun­dreds of oth­ers are choos­ing Delve over com­peti­tors. The main point they kept dri­ving home was that their rev­o­lu­tion­ary tech, sup­pos­edly way ahead of any other com­pa­ny’s, en­abled com­pli­ance to be achieved with just 10 hours of work in­stead of the hun­dreds it used to take.

Their demo was short but did a good job show­ing how au­to­mated every­thing sup­pos­edly was. They clicked through every­thing pretty quickly, and stopped at every place that made the process look easy or au­to­mated. They showed an in­te­gra­tion pulling in­for­ma­tion out of AWS, the ready to go” de­fault poli­cies you could just adopt with­out mod­i­fi­ca­tion, the rea­son­ably short list of tasks, the AI ques­tion­naire au­toma­tion, the beau­ti­ful trust page you could pub­lish and their AI-copilot’ chat­bot.

What they did­n’t show, how­ever, was that most in­te­gra­tions were fake and re­quired man­ual screen­shots, that tons of forms need to be man­u­ally filled out, that the trust page was­n’t ac­cu­rate and they did­n’t show any of the tasks that had pre-cre­ated fake ev­i­dence.

Their "best offer" for SOC 2 started at $15,000, which we were able to negotiate down to $13,000 on the call. They kept emphasizing what a great deal it was, and that we were getting value other companies paid more for. They remained pretty inflexible on pricing until we made clear we were considering a competitor. We must have been told at least four times that they couldn't go any lower because they'd make a loss. There was a lot of pressure and posturing, but the price quickly dropped to just $6,000 when they realized we were serious about going elsewhere, and they threw in ISO 27001 and a 200-hour penetration test as well.

Pressured to sign within 24 hours or lose the deal, we decided to just move forward and get it over with. In hindsight, there were many red flags, but we just wanted to get the job done quickly and move on. If Delve was good enough for companies like Lovable, they had to be doing something right. Right?

Once you’ve been in­vited to the Delve plat­form and log in for the first time, you’ll be greeted with an in­ter­face that re­veals there are four cat­e­gories for ac­tiv­i­ties:

But even be­fore you get started do­ing any real work, you can go into the trust tab and ac­ti­vate and pub­lish the trust page for your com­pany. You’d think it would prob­a­bly be a very min­i­mal trust page with every­thing fail­ing at first, since you haven’t done any work, right?

Nope, you im­me­di­ately get a fully pop­u­lated trust page that would have you be­lieve you’re run­ning the most se­cure com­pany on earth. Delve’s trust page pre­sented our com­pany as fully se­cure be­fore we had com­pleted any com­pli­ance work, en­abling us to close deals based on mis­rep­re­sented se­cu­rity claims.

Noteworthy is that the list hadn't changed in any way after we finished compliance, yet it still wasn't truthful. My expectation was that the list reflected the security you'd get at the end of Delve's process, but it took getting there to learn that that wasn't true either.

It says we did vul­ner­a­bil­ity scan­ning and a pen­test, when we only ever did the scan. It says we did data re­cov­ery sim­u­la­tions, which we never did. It says we re­me­di­ated vul­ner­a­bil­i­ties, which we never did.

It is lit­er­ally a made-up list of se­cu­rity mea­sures of which more than half are not im­ple­mented, or even sup­ported or ad­dressed by Delve’s process and plat­form.

In short, this prod­uct is built to help com­pa­nies fake se­cu­rity rather than com­mu­ni­cate their real se­cu­rity.

When you do ac­tu­ally de­cide to get started and do the work, you’ll find that the plat­form ex­pects you to per­form a num­ber of tasks across four cat­e­gories:

Policies - These are all pre-cre­ated, and Delve rec­om­mends adopt­ing them as they are.

Sadly, unless you spend a whole week manually revising them to be accurate, you will have inaccurate policies full of false promises. Every single one of Delve's policies claims measures are in place that Delve's process and platform do not address. Even though we knew we'd technically be lying about our security to anyone we sent these policies to for review (clients, auditors, investors), we decided to adopt them because we simply didn't have the bandwidth to rewrite them all manually.

Team - Background checks, security training and manual device security through screenshots for every employee.

This is a lot of manual work. It took each of our employees 2 hours to get their individual tasks done: 30 minutes to manually secure their laptop and screenshot its security configuration. It adds up quickly if you have a big company.

Also, we did­n’t have disk en­cryp­tion sup­port on a few lap­tops, but that did­n’t stop Delve from sign­ing off on HIPAA:

30 minutes (or more) to manually write up a performance evaluation. 30 minutes to do all training (watching Delve's YouTube videos) and quizzes.

Tech - You add vendors here. This is what Delve calls integrations, but most of these are just forms where you are asked to submit screenshots.

Company - Security procedures and company-wide tasks live here. This is just a list of forms that are all pre-filled with fake evidence. You can speedrun through this by just clicking accept on every one of them.

One thing that stands out as you go through all of Delve's tasks is that there isn't any AI to speed up the work. The only thing available is an "AI Co-pilot" that can provide generic advice, and that doesn't seem to have much context beyond what form you're in. More than half of the time the AI co-pilot would tell me the evidence in the platform was not sufficient, and it would refer to links of other GRC platforms.

As op­posed to what is promised dur­ing the sales process, the tasks are not cus­tomized to the com­pany. You are ba­si­cally dumped into an in­ter­face and told by the cus­tomer suc­cess per­son that you can ask ques­tions on Slack if you get stuck.

The reality is that there will be very few times you'll ever need any help from Delve's team, primarily because everything in their platform is pre-populated. You won't have to ask how to do a risk assessment, because you get the same pre-generated risks every other Delve customer gets. You won't have to complain about having to do board meetings (something you were explicitly promised during the sales call that every provider but Delve asks for), because you get pre-created fake board meeting notes that you can adopt as is.

Seriously, be­com­ing com­pli­ant with Delve is noth­ing more than click­ing through a bunch of pre-pop­u­lated forms and ac­cept­ing every­thing. Unless you want to do com­pli­ance the proper way, in which case Dropbox is as good a tool as Delve since you need to then man­u­ally col­lect and write every­thing.

Ok, you sit down to get crack­ing on com­pli­ance. What do you do next?

You ac­cept the de­fault poli­cies that are in­ac­cu­rate out of the box. Like this pol­icy that claims to have an MDM in place when the Delve process con­sists of mak­ing a man­ual screen­shot of your Mac fire­wall set­tings:

You accept the pre-created contents for the security simulation by accepting the three "security incidents":

You do the risk as­sess­ment by adopt­ing the ten de­fault risks:

One of our ob­vi­ous con­cerns was that this ap­proach would never pass an au­dit, but we were ex­plic­itly told Delve never failed a sin­gle au­dit in the past, and that au­di­tors have never flagged a sin­gle is­sue with their process. They tried to put our minds at ease by telling us about all the amaz­ing Delve clients that sold to Fortune 500 com­pa­nies us­ing the ex­act same process.

Delve con­tin­u­ously re­mind­ing us that they serve clients like Lovable, Bland, WisprFlow and many oth­ers ended up wear­ing us down, so we just took their word for it and moved on.

When the time comes to actually hook up your stack to Delve, so that Delve can do that "continuous monitoring" thing, you'll find that the vast majority of their integrations don't integrate with anything at all. They are just containers for screenshots you'll have to go out and manually collect.

Imagine my surprise when I learned that AI-native compliance would mean spending many hours manually collecting screenshots and filling out forms. I truly feel like a mindless agent in what Delve calls the "agentic experience."

Here you can see how the Linear integration’ con­sists of Manual Tests and Forms:

On the em­ployee tab, you man­u­ally do back­ground checks through Certn, fill out more forms, watch use­less YouTube videos, and man­u­ally screen­shot lap­top se­cu­rity. For 100 em­ploy­ees, all 100 of them have to man­u­ally se­cure lap­tops once and up­load screen­shots.

You also do man­ual per­for­mance re­views for every em­ployee with no way to pull data from other so­lu­tions. Lots of typ­ing if you have 20+ em­ploy­ees.

...

Read the original on substack.com »

3 344 shares, 9 trendiness

Safety Impact

It may seem like the miles driven by Waymo (in the hundreds of millions) pale in comparison to the billions of miles driven in the cities where Waymo operates, or the trillions of miles driven annually in the entire United States. When comparing the rates of two populations, however, the conclusions you can draw from the data are governed by what is called statistical power. The question being answered by the Safety Impact Data Hub is: are the Waymo and benchmark crash rates different? The input to this calculation is the number of crashes and the number of miles driven by the Waymo and benchmark populations, modeled using a Poisson distribution, the most common distribution for handling count data.

An ex­am­ple of this prob­lem would be to ex­am­ine the num­ber of stu­dents that do not pass an exam. In a school dis­trict, say that 300 out of 1,000 stu­dents that take the same test do not pass (3 do not pass per 10 test­tak­ers). One could ask whether a Class A of 20 stu­dents per­formed dif­fer­ently than the over­all pop­u­la­tion on this test (note we are as­sum­ing pass­ing or not pass­ing the test is in­de­pen­dent of be­ing in Class A for the sake of this sim­pli­fied ex­am­ple). Say Class A had 10 out of 20 stu­dents that did not pass the exam (5 do not pass per 10 test tak­ers). Class A had a not pass rate that is dou­ble the rate of the school dis­trict. When we use a Poisson con­fi­dence in­ter­val, how­ever, the rate of not pass­ing in the class of 20 is not sta­tis­ti­cally dif­fer­ent from the school dis­trict av­er­age at the 95% con­fi­dence level. If we in­stead com­pare Class A to the en­tire state of 100,000 stu­dents (with the same 3 not pass per 10 test tak­ers rate, or 30,000 out of 100,000 to not pass), the 95% con­fi­dence in­ter­vals of this com­par­i­son are al­most iden­ti­cal to the com­par­i­son to the county (300 out of 1000 test tak­ers). This means that for this com­par­i­son, the un­cer­tainty in the small num­ber of ob­ser­va­tions in Class A (only 20 stu­dents) is much more than the un­cer­tainty in the larger pop­u­la­tion. Take an­other class, Class B, that had only 1 out of 20 stu­dents not pass the test (0.5 do not pass per 10 test tak­ers). When ap­ply­ing the 95% con­fi­dence in­ter­vals, this Class B does have a sta­tis­ti­cally dif­fer­ent pass rate from the county av­er­age (as well when com­pared to the state). 
This example shows that when comparing event rates in two populations where one is much larger than the other (measured in test takers, or miles driven), the two things that drive statistical significance are: (a) the number of observations in the smaller population (more observations = significance sooner) and (b) the size of the difference in the rates of occurrence (bigger difference = significance sooner).

Now consider another experiment with Waymo data. The figure below keeps the number of Waymo crashes with an airbag deployment in any involved vehicle (34) and VMT (71.1 million miles) constant while assuming different orders of magnitude of miles driven in the human benchmark population (benchmark rate of 1.649 incidents per million miles with 17.8 billion miles traveled). The point estimate is that Waymo has 71% fewer of these crashes than the benchmark. The confidence intervals (also sometimes called error bars) show the uncertainty of this reduction at a 95% confidence level (95% confidence is the standard in most statistical testing). If the error bars do not cross 0%, then from a statistical standpoint we are 95% confident the result is not due to chance, which we also refer to as statistical significance. This “simulation” shows the effect on statistical significance of varying the VMT of the benchmark population. The comparison would be statistically significant even if the benchmark population had fewer miles driven than the Waymo population (10 million miles). Furthermore, as long as the human benchmark has more than 100 million miles, there is almost no discernible difference in the confidence intervals of the comparison. This means that comparisons in large US cities (based on billions of miles) are no different from a statistical perspective than a comparison to all annual US driving (trillions of miles). Like the school test example, Waymo has driven enough miles (tens to hundreds of millions) and the reductions are large enough (70%-90%) that statistical significance can be achieved.
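The figure's experiment can be approximated with a standard log-rate-ratio confidence interval for two Poisson rates, a common textbook method; the Data Hub's exact procedure may differ. The crash count (34), Waymo VMT (71.1 million miles), and benchmark rate (1.649 per million miles) come from the text above.

```python
from math import exp, log, sqrt

def reduction_ci(crashes, miles_m, bench_rate, bench_miles_m, z=1.96):
    """95% CI for the percent reduction in crash rate, via the usual
    normal approximation on the log of the Poisson rate ratio."""
    bench_crashes = bench_rate * bench_miles_m
    rr = (crashes / miles_m) / bench_rate         # rate ratio (Waymo / benchmark)
    se = sqrt(1 / crashes + 1 / bench_crashes)    # std. error of log(rr)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return 1 - hi, 1 - rr, 1 - lo                 # (lower, point, upper) reduction

# Hold Waymo's 34 crashes in 71.1M miles fixed; vary benchmark VMT (millions).
for bench_miles_m in (10, 100, 1_000, 17_800):
    lo, point, hi = reduction_ci(34, 71.1, 1.649, bench_miles_m)
    print(f"{bench_miles_m:>7,}M: {point:.0%} reduction, CI [{lo:.0%}, {hi:.0%}]")
```

Even with only 10 million benchmark miles the lower bound stays above 0% (significant), and past roughly 100 million benchmark miles the interval barely moves, since the uncertainty is dominated by Waymo's 34 observed crashes.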

...

Read the original on waymo.com »

4 307 shares, 13 trendiness

Wayland set the Linux Desktop back by 10 years

Wayland has been a broad mis­di­rec­tion and mis­al­lo­ca­tion of time and de­vel­oper re­sources at the ex­pense of users. With more mi­gra­tion from other op­er­at­ing sys­tems, the pres­sure to fix fun­da­men­tal prob­lems has be­come more promi­nent. After 17 years of de­vel­op­ment, now is a good time to re­flect on some of the larger promises that have been made around the de­vel­op­ment of Wayland as a re­place­ment for the X11 dis­play pro­to­col.

If you’re not in this space, hope­fully it will still be in­ter­est­ing as an en­gi­neer­ing post-mortem on tak­ing on new green­field pro­jects. Namely: What are the is­sues with what ex­ists, why can they not be fixed, what do we hope to achieve with a new pro­ject, and how long do we ex­pect it to take?

If you’re al­ready fa­mil­iar with X11 and Wayland feel free to skip to the next part.

For peo­ple not fa­mil­iar with Linux, here’s a quick run­down of the terms in this space, roughly in the or­der of high­est-level to low­est-level:

* Applications

These are the things you want to run

* Desktop Environment (DE)

This is what man­ages things like win­dow them­ing, no­ti­fi­ca­tions, task bars, etc.

* Compositor

This is what lay­ers win­dows on top of each other and does an­i­ma­tions, graph­i­cal ef­fects, etc.

* Display Server

This is the thing that man­ages the dis­play, it also ab­stracts away some of the hard­ware de­tails so all the above work on NVidia, AMD, Intel, etc.

* Kernel / Operating System

The lower-layer thing that man­ages hard­ware re­sources, for us this is Linux

The above is not a com­plete list, but it’s enough to give some fram­ing for un­der­stand­ing that X11 is a fun­da­men­tal piece of most Linux en­vi­ron­ments.

X11 is currently still the most widely used display server in the Linux ecosystem. It was developed in the mid-1980s and, as legacy projects tend to do, has accumulated functionality that makes it difficult to maintain, according to the developers.

So, in 2008, Kristian Høgsberg started the project that became known as Wayland. Wayland (in theory) replaces the display server, as well as some parts of the compositor and desktop environment, with a simpler display protocol and reference implementation. The original idea behind Wayland was to implement only what is needed for a simple Linux desktop; the original implementation was a little over 3,000 lines of code.

It’s 2026, and Wayland has reached a market share of around 40-50%, or closer to 50-60% depending on your source. I would argue that a product that has taken 17 years to gain substantial market share has issues hindering adoption. Compare the development of Wayland to a similar project for managing audio: PipeWire. Within roughly 8 years it has mostly replaced every alternative, and it has been the default in Ubuntu since 22.04, roughly 4 years after it first launched!

These are the most com­mon is­sues I have seen from the per­spec­tive of a user, so I will try to stay light on what I con­sider to be mostly ir­rel­e­vant tech­ni­cal de­tails and in­stead fo­cus on larger is­sues around the roll­out and de­sign of Wayland.

The rea­son I use Linux and other Unix-likes is that they give me the abil­ity to do what­ever I want on my sys­tem, in­clud­ing mak­ing mis­takes! So why is my dis­play server telling me that cer­tain ap­pli­ca­tions that I in­stalled and chose to run aren’t al­lowed to talk to each other in the name of se­cu­rity?

There are mul­ti­ple cases of this: OBS can’t screen record (it seg­faults in­stead), I can’t copy-paste, and I can’t see win­dow pre­views un­less every­thing im­ple­ments a spe­cific ex­ten­sion to the core pro­to­col.

The actual “threat model” here is baffling and doesn’t seem to reflect a real user need. Applications are blocked from seeing each other’s windows, yet they can still interact in plenty of other ways that could potentially cause problems.

I also don’t care for the “security” argument when parts of the core reference implementation are written in a memory-unsafe language. To be clear, I am not saying that software written in C is bad; I’m specifically calling out that making a security argument about software while repeating the decisions of the previous (40-year-old) implementation is a bad look.

Several of the de­sign de­ci­sions that Wayland makes are claimed in the name of per­for­mance. Namely, col­laps­ing many lay­ers is sup­posed to re­duce the num­ber of copies when mov­ing data be­tween dif­fer­ent com­po­nents.

However, what­ever the rea­son, these per­for­mance gains haven’t ma­te­ri­al­ized, or are so anec­do­tal in ei­ther di­rec­tion that it’s dif­fi­cult to claim a clear win over X11. In fact, you can find ex­am­ples show­ing roughly a 40% slow­down when us­ing Wayland over X11! I’m sure there are sim­i­lar bench­marks claim­ing Wayland wins and vice versa (happy to link them as well if pro­vided).

The problem is that, even if Wayland were twice as fast, it wouldn’t compare to the improvements in hardware over the same period. It would’ve been better to just wait! The performance improvements would have to be much more substantial for this to be a reasonable argument in favor of Wayland. The fact that the question of which one is faster even exists after a substantial period is an obvious failure.

Additionally, those per­for­mance gains don’t mat­ter if I’m not able to make use of them. For ex­am­ple, if I’m us­ing the most pop­u­lar graph­ics card ven­dor in my sys­tem, I should­n’t ex­pect things to work out of the box.

One rebuttal I’ve heard is that it’s not an issue with Wayland, it’s an issue with the compositor/extension/application. After all, “Wayland” isn’t a piece of software, it’s a simple protocol that other software chooses to implement!

Of course, what this means in practice is that there are multiple (usually incompatible) implementations of multiple different standards. Maybe this would be fine if the concept of a desktop operating system was completely new and unknown, but users balk when discovering things like drag and drop or screen sharing are not natively supported and are essentially still in “beta” status.

Instead of pro­vid­ing a bet­ter way of do­ing some­thing, com­mon fea­tures are not sup­ported at all, and in­stead it’s the job of every­one else in the ecosys­tem to agree on a stan­dard. That’s not a stun­ning ar­gu­ment in fa­vor of re­plac­ing some­thing that al­ready ex­ists and that has al­ready been stan­dard­ized in X11!

Wayland has been around for only 17 years, while X11 has closer to 40 years of de­vel­op­ment be­hind it. Things are still un­der de­vel­op­ment and ob­vi­ously will get bet­ter, so why com­plain about is­sues that will in­evitably get fixed?

Because it’s been 17 years and peo­ple are still run­ning into ma­jor is­sues!

I was unpleasantly surprised when using KDE Plasma to find that the default display server had been changed to Wayland. I noticed within moments of startup: I encountered enough graphical hitches to realize I was running Wayland, and I quickly switched back. Anecdotal experience is not enough to call this a broad issue, but my point is that when an average user encounters graphical problems within 60 seconds, maybe it’s not ready to be made the default! It was only within the last 6 months that OBS stopped segfaulting when trying to launch on Wayland. I assume I’m in decent company when even the developer of a major compositor is still not able to use Wayland in 2026.

The number of “simple” utilities that seem partially supported or half-baked is incredible and represents a massive duplication of effort. The tooling around X11 that has been developed over the last 40 years seems to have been completely dropped, with no alternative provided. Instead of offering an obvious transition path, Wayland has introduced even more fragmentation.

Older software that has a ton of “legacy cruft” has been tested, and its bugs have long since been fixed. I fully believe that with another 20 years of development things will be better. The problem is that I am being forced to make the switch now. See: the push from KDE and Red Hat to Wayland and the dropping of support for older technologies.

This post prob­a­bly best en­cap­su­lates the de­vel­oper opin­ion to­wards users try­ing to mi­grate to the next it­er­a­tion of the Linux desk­top:

Maybe Wayland does­n’t work for your pre­cious use-case. More likely, it does work, and you swal­lowed some pro­pa­ganda based on an as­sump­tion which might have been cor­rect 7 years ago. Regardless, I sim­ply don’t give a shit about you any­more.

We’ve sac­ri­ficed our spare time to build this for you for free. If you turn around and ha­rass us based on some ut­terly non­sen­si­cal con­spir­acy the­o­ries, then you’re a fuck­ing ass­hole.

It’s even more ironic com­pared to the post made a week later, ex­press­ing the same frus­tra­tion with the Rust com­mu­nity that peo­ple have with Wayland!

Drew has since deleted this post, so I un­der­stand if he no longer stands by those opin­ions. However, it’s a rep­re­sen­ta­tive slice of de­vel­oper sen­ti­ment to­wards users that are now be­ing forced to use un­fin­ished soft­ware. Entitlement and bul­ly­ing of open-source main­tain­ers is not ap­pro­pri­ate, and it’s un­der­stand­able that the de­vel­op­ers lash out af­ter feel­ing beaten down by en­ti­tled users. However, to have some sym­pa­thy on the user side, it’s likely born out of frus­tra­tion of be­ing forced to use the new hot­ness and then en­coun­ter­ing break­ing bugs that are im­pos­si­ble for the av­er­age user to work around.

It is not the fault of the orig­i­nal de­vel­op­ers for build­ing what they wanted to build. I think it’s im­por­tant to keep in mind that they did­n’t nec­es­sar­ily choose for Wayland to be­come as pop­u­lar as it has or the foun­da­tion for the desk­top of the fu­ture. See the di­a­gram be­low:

Having Wayland as a de­vel­op­ers-only play­ground is fine! Have fun build­ing stuff! But the sec­ond ac­tual users are forced to use it ex­pect them to be frus­trated! At this point I con­sider Wayland to be a fun toy built en­tirely to pacify de­vel­op­ers tired of work­ing on a fin­ished legacy pro­ject.

Since most of this post has been overwhelmingly negative about the development of Wayland, it’s better to close by learning as much as possible and looking forward to “what would I want to be able to do”. Windowing technology is absolutely not “done”, and instead of following other operating systems, it would be fantastic if Linux could do things no other environment can.

For example, being able to implement non-rectangular windows, exposing context actions (similar to macOS), or making it easier to automate or script parts of the desktop environment would be incredibly exciting!

It’s dif­fi­cult to over­state the amount of progress in sup­port for gam­ing, new (and old) hard­ware, as well as the amount of over­all polish”. Every de­vel­oper should be proud to be a part of that!

After 17 years, Wayland is still not ready for prime time. Notable break­age is be­ing doc­u­mented, and adop­tion has been cor­re­spond­ingly slow.

For some users the switch is seamless. Others (including myself) tend to bounce off after encountering workflow-breaking issues. I think it’s obvious at this point that the trade-offs have not been worth the hassle.

My pre­dic­tion is that within the next 5 years the fol­low­ing will be true:

Projects will drop Wayland sup­port and go back to X11

There will be a new dis­play pro­to­col that dis­places both X11 and Wayland

The new dis­play pro­to­col will be a drop-in re­place­ment (similar to XWayland)

Fragmentation will still be an is­sue (this one’s a free­bie)

See you in 2030 for the year of the Linux Desktop.

Included are some of the links ref­er­enced in this post as well as some ad­di­tional read­ing.

...

Read the original on omar.yt »

5 285 shares, 36 trendiness

HP realizes that mandatory 15-minute support call wait times aren’t good support

In an odd ap­proach to try­ing to im­prove cus­tomer tech sup­port, HP al­legedly im­ple­mented manda­tory, 15-minute wait times for peo­ple call­ing the ven­dor for help with their com­put­ers and print­ers in cer­tain ge­o­gra­phies.

Callers from the United Kingdom, France, Germany, Ireland, and Italy were met with the forced holding periods, The Register reported on Thursday. The publication cited internal communications it saw from February 18 that reportedly said the wait times aimed to “influence customers to increase their adoption of digital self-solve, as a faster way to address their support question. This involves inserting a message of high call volumes, to expect a delay in connecting to an agent and offering digital self-solve solutions as an alternative.”

Even if HP’s telephone support center wasn’t busy, callers would reportedly hear:

We are ex­pe­ri­enc­ing longer wait­ing times and we apol­o­gize for the in­con­ve­nience. The next avail­able rep­re­sen­ta­tive will be with you in about 15 min­utes.

To quickly re­solve your is­sue, please visit our web­site sup­port.hp.com to check out other sup­port op­tions or find help­ful ar­ti­cles and as­sis­tant to get a guided help by vis­it­ing vir­tu­ala­gent.hp­cloud.hp.com.

Callers were then told to “please stay on the line” if they wanted to speak to a representative. The phone system was also set to remind customers of their other support options and to apologize for the long (HP-induced) wait times at the fifth, 10th, and 13th minute of the call.

The manda­tory sup­port call times have been lifted, per a com­pany state­ment shared by HP spokesper­son Katie Derkits:

We’re al­ways look­ing for ways to im­prove our cus­tomer ser­vice ex­pe­ri­ence. This sup­port of­fer­ing was in­tended to pro­vide more dig­i­tal op­tions with the goal of re­duc­ing time to re­solve in­quiries. We have found that many of our cus­tomers were not aware of the dig­i­tal sup­port op­tions we pro­vide. Based on ini­tial feed­back, we know the im­por­tance of speak­ing to live cus­tomer ser­vice agents in a timely fash­ion is para­mount. As a re­sult, we will con­tinue to pri­or­i­tize timely ac­cess to live phone sup­port to en­sure we are de­liv­er­ing an ex­cep­tional cus­tomer ex­pe­ri­ence.

HP didn’t immediately clarify when it removed the wait times. Some HP workers were reportedly unhappy with the mandatory hold times, with an anonymous “insider” in HP’s European operations reportedly telling The Register, per its Thursday report: “Many within HP are pretty unhappy [about] the measures being taken and the fact those making decisions don’t have to deal with the customers who their decisions impact.”

...

Read the original on arstechnica.com »

6 277 shares, 14 trendiness

A Third (and Fourth) Azure Sign-In Log Bypass Found

Nyxgeek here. It’s 2026 and I’ve got two more Azure Entra ID sign-in log by­passes to share with you. Don’t get too ex­cited…these by­passes were re­cently fixed, but I think it’s im­por­tant that peo­ple know.

By send­ing a spe­cially crafted lo­gin at­tempt to the Azure au­then­ti­ca­tion end­point, it was pos­si­ble to re­trieve valid to­kens with­out the ac­tiv­ity ap­pear­ing in the Entra ID sign-in logs. This is crit­i­cal log­ging…log­ging that ad­min­is­tra­tors across the world rely on to de­tect in­tru­sions…log­ging that could be made op­tional.

Today I will walk you through the third and fourth Azure sign-in log by­passes that I have found in the last three years. I will also look at how sign-in log by­passes can be de­tected us­ing KQL queries. By know­ing about Microsoft’s past mis­takes, we can try to pre­pare for their fu­ture fail­ures.

Since 2023, I’ve un­cov­ered four Azure Entra ID sign-in log by­passes. This means I’ve found four com­pletely dif­fer­ent ways to val­i­date an Azure ac­coun­t’s pass­word with­out it show­ing up in the Azure Entra ID sign-in logs. While the first two of these merely con­firmed whether a pass­word was valid with­out gen­er­at­ing a log, my lat­est log­ging by­passes re­turned fully func­tion­ing to­kens.

Previously, I had written about GraphNinja and GraphGhost — two logging bypasses where a user could identify valid passwords without generating any “successful” events in the sign-in logs. Neither was overly complicated. You can find blog posts describing them in detail here and here.

Real quick — a point of clarification on the names: while I’ve used the Graph- prefix to designate these different bypasses, perhaps it would have been more appropriate to prefix them Entra-, as they were not limited to Graph sign-ins.

In each of these, the log­ging be­ing by­passed is for the Azure Entra ID sign-in logs. Logon method is via an HTTP POST to the Entra ID to­ken end­point, lo­gin.mi­crosoft­on­line.com, us­ing the OAuth2 ROPC flow, with the Graph API as our in­tended re­source/​scope. We sub­mit a user­name and pass­word, an Application ID, and a tar­get re­source/​scope, and we’ll get a bearer to­ken or re­fresh to­ken for the Graph API in re­turn.

An example curl command performing a “normal” authentication can be seen below:

curl -X POST https://login.microsoftonline.com/00000000-1234-1234-1234-000000000000/oauth2/v2.0/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "client_id=f05ff7c9-f75a-4acd-a3b5-f4b6a870245d" \
  --data-urlencode "client_info=1" \
  --data-urlencode "grant_type=password" \
  --data-urlencode "username=[email protected]" \
  --data-urlencode "password=secretpassword123" \
  --data-urlencode "scope=https://graph.microsoft.com/.default"

When a valid user­name and pass­word are sup­plied, a to­ken is re­turned that can be used to ac­cess the Graph API.

In the GraphNinja by­pass, it was only nec­es­sary to tar­get an­other ten­ant with the au­then­ti­ca­tion at­tempt (e.g., https://​lo­gin.mi­crosoft­on­line.com/​00000000-1234-1234-1234-000000000000/​oau­th2/​v2.0/​to­ken). Any other valid ten­ant GUID would do, as long as it was­n’t your vic­tim’s. The au­then­ti­ca­tion re­sponse would still in­di­cate if a valid pass­word was found, but the lo­gin would fail be­cause it was per­formed against a for­eign ten­ant where the user did­n’t ex­ist. No failed or suc­cess­ful au­then­ti­ca­tion log was gen­er­ated within the par­ent ten­ant of the ac­tual user, as the au­then­ti­ca­tion was tar­get­ing the for­eign ten­ant. No logs were gen­er­ated on the for­eign ten­ant be­cause only logs for valid users within that ten­ant are gen­er­ated, and the tar­get user did not ex­ist within the for­eign ten­ant. While no to­ken was re­turned by GraphNinja, it would in­di­cate to an at­tacker whether the pass­word was valid with­out the at­tempt ap­pear­ing in logs. Additional log­ging was added by Microsoft to re­me­di­ate this over­sight.

With the GraphGhost bypass, providing an invalid Client ID value would cause the overall authentication flow to fail, but not until after credential validation had occurred. Because the invalid Client ID failed a post-password-validation step, administrators saw only a failed login, with no indication in the logs that the password had been successfully guessed. Like GraphNinja, no token was returned, but the password was validated without any indication to the admin. Microsoft fixed this issue by adding details to the sign-in logs to indicate whether the password was successful.

Now that you’re caught up on what’s been found in 2023 and 2024, let’s look at the find­ings from 2025.

Let’s start with what I’m terming GraphGoblin. I stum­bled across this by­pass while pok­ing at the pa­ra­me­ters in the Microsoft au­then­ti­ca­tion POST. Testing the scope pa­ra­me­ter, I first tried some sim­ple things like sup­ply­ing in­valid scope val­ues. However, I found that the scope value would be re­jected if it was­n’t a valid scope name, or did­n’t match an ex­pected for­mat.

AADSTS70011: The provided request must include a ‘scope’ input parameter. The provided value for the input parameter ‘scope’ is not valid. The scope [scope] is not valid. The scope format is invalid. Scope must be in a valid URI form

This error message isn’t 100% honest. You can also just specify individual scopes, such as openid or Directory.Read.All. If the URL or GUID format is used, it will validate the resource being targeted, and then the scope. This validation of the scope values prevented arbitrary long strings from being evaluated.

Or did it? What if the string we sub­mit­ted WAS valid, but re­peat­ing? For in­stance, in­stead of spec­i­fy­ing a value like openid as the scope, what if we sub­mit­ted a bunch of the same value, like openid openid openid?

It got through! But did it work at by­pass­ing the logs? I set an alarm for 15 min­utes, came back, and checked the Azure Entra ID sign-in logs, and found NO NEW SIGN-INS! W00t! I waited an­other 15 min­utes be­fore re­ally cel­e­brat­ing. Then I tested it with a friend’s ten­ant, just to be sure. It was a solid by­pass.

To show how dead-sim­ple this is, a demon­stra­tion of this by­pass us­ing curl and Bash ex­pan­sion can be found be­low:

export TENANT_ID="[tenant-guid-goes-here]"

curl -X POST https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "client_id=f05ff7c9-f75a-4acd-a3b5-f4b6a870245d" \
  --data-urlencode "client_info=1" \
  --data-urlencode "grant_type=password" \
  --data-urlencode "username=[email protected]" \
  --data-urlencode "password=secretpassword123" \
  --data-urlencode "scope=$(for num in {1..10000}; do echo -n 'openid '; done)"

There is no way of know­ing 100% why this worked with­out Microsoft pub­lish­ing in­for­ma­tion about these is­sues, but I can take a guess.

Having done a fair bit of log­ging to data­bases with var­i­ous scripts, I be­lieve this was a sim­ple mat­ter of over­flow­ing the SQL col­umn length for a field, caus­ing the en­tire INSERT to fail. This is a com­mon be­gin­ner mis­take when you first start to work with data­bases.

It’s likely that the parser it­er­ated over the list of scopes in­cluded, did not find any in­valid ones, and so al­lowed the rep­e­ti­tious en­try to pass and at­tempted to log the raw list of openid openid openid…, in its en­tirety, over­flow­ing the limit of that SQL col­umn. In test­ing, a rea­son­able max length could be as­sumed to be the sum of the length of all pos­si­ble scope names. Perhaps they tested for that sce­nario but never an­tic­i­pated the re­peats. At any rate, if this was the case, they failed to per­form sim­ple tests against user-sup­plied data.
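This guessed failure mode is easy to reproduce in miniature. Everything here is hypothetical (the table, the 1,024-character limit, and the swallowed exception are invented for illustration, not Microsoft's actual schema), but it shows how one over-long column can silently drop an entire log row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE signin_logs (
        username TEXT,
        scope    TEXT CHECK (length(scope) <= 1024)  -- hypothetical column limit
    )""")

def log_signin(username, scope):
    try:
        conn.execute("INSERT INTO signin_logs VALUES (?, ?)", (username, scope))
    except sqlite3.IntegrityError:
        pass  # a swallowed error here means the whole sign-in row is lost

log_signin("alice", "openid")              # logged normally
log_signin("mallory", "openid " * 10_000)  # 70,000 chars: overflows, row dropped

count = conn.execute("SELECT count(*) FROM signin_logs").fetchone()[0]
print(count)  # 1 -- only the normal sign-in was recorded
```

The fix in a real system is the opposite of this sketch: truncate or reject the oversized field, but never let one bad column abort the write of a security-relevant log record.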

If Microsoft wants to speak pub­licly about this, I’d love to hear more on it, and to know more about how they ap­proach in­ter­nal se­cu­rity re­views of their most-crit­i­cal prod­ucts.

It’s not of­ten that you see a demo of an ac­tual Azure vul­ner­a­bil­ity, as they get patched and are gone for­ever. However, be­cause Microsoft was hav­ing trou­ble repli­cat­ing this com­pli­cated by­pass, and asked for a video, I come bear­ing re­ceipts.

For your view­ing plea­sure, I pre­sent to you what is prob­a­bly the only Azure sign-in log by­pass you’ll ever see:

In the fol­low­ing demo I am about to per­form a se­ries of three lo­gin at­tempts.

The first attempt will be a failed login that generates a normal failed sign-in log. This failed sign-in also generates a Correlation ID that we can use as a reference point in our logs. In the second attempt, I’ll authenticate using the GraphGoblin sign-in log bypass technique. Then, I’ll make one final normal failed attempt, so we have another Correlation ID as a marker.

If all of the fol­low­ing at­tempts are prop­erly logged, the sign-in logs should show:

Below is a screen­shot of the Entra ID sign-in logs. Note the Correlation ID of the last log, and the time. It is dated 9/20/2025, at 1:49:20 PM, with a Correlation ID start­ing with 1dfe62e9-.

In the screen­shot be­low, you can see the source of that failed lo­gin; note that the time and Correlation ID match.

In the above im­age, be­low the failed lo­gin, you can see a valid lo­gin and a time­stamp. The valid lo­gin oc­curs on 9/20/2025 at 13:52:38. The dif­fer­ence is, this time, I’m us­ing the GraphGoblin by­pass that will make this lo­gin event in­vis­i­ble. The lo­gin is suc­cess­ful and we can see a bearer to­ken is re­turned.

In the fol­low­ing screen­shot, I ex­port the to­ken to the vari­able TOKEN so we can eas­ily use it with curl. I then demon­strate that the to­ken is valid by mak­ing a Graph API re­quest with it.

Now, to make it clear when re­view­ing logs, I’m go­ing to make an in­valid lo­gin at­tempt, with­out the by­pass. This will give us a Correlation ID to use as an­other marker point.

Note that the at­tempt is from 9/20/2025 at 19:01:45 and has a Correlation ID that starts with 0d5ea4f0-. As a re­minder, the first failed lo­gin had a Correlation ID that starts with 1dfe62e9-.

If we wait 10 min­utes for logs to in­gest then re­view the logs, we see our Correlation ID start­ing with 0d5ea4f0- is di­rectly af­ter our pre­vi­ous failed lo­gin with an ID that starts with 1dfe62e9-. No suc­cess­ful lo­gin is shown.

I then make a NORMAL valid lo­gin, with­out the log­ging by­pass, to show that logs are still flow­ing.

And, we can see the reg­u­lar, suc­cess­ful au­then­ti­ca­tion come in, but still no sign of the GraphGoblin method.

After post­ing a video of the by­pass to Twitter/X a while back, peo­ple were cu­ri­ous if it was a mat­ter of the sign-in logs not dis­play­ing them in the Azure por­tal GUI, or if it was truly be­ing dropped.

Reviewing the log an­a­lyt­ics, I can con­fi­dently say that these were dropped en­tirely from the sign-in logs. In the demo video, I had sent a nor­mal re­quest with a user-agent of MARKER 1 — BEFORE THE BYPASS. After per­form­ing mul­ti­ple au­then­ti­ca­tions us­ing the log­ging by­passes, I sent an­other nor­mal re­quest with a user-agent of MARKER 2 — AFTER THE BYPASS. In my Log Analytics work­space, we can see that none of the by­passed sign-in logs made it to Log Analytics. Only our MARKER 1 and MARKER 2 en­tries are vis­i­ble from our tests.

Okay, so I men­tioned that I had found a FOURTH by­pass. This one was ridicu­lously sim­ple, and it’s re­lated to GraphGoblin. Can you guess what lo­gon pa­ra­me­ter was vul­ner­a­ble?

I’ll give you a hint: This field is cus­tomized reg­u­larly.

Another hint: If you were go­ing to per­form fuzzing of a crit­i­cal web-based au­then­ti­ca­tion sys­tem and its log­ging, you would DEFINITELY want to in­clude this field in your test­ing.

Did you guess it? If you guessed the USER-AGENT field, you’re RIGHT!

The se­cret to abus­ing this field so that it would­n’t gen­er­ate an au­then­ti­ca­tion log? You make the user-agent string re­ally long. A to­tal of 50,000 char­ac­ters in the user-agent worked re­li­ably. That’s it. No spe­cial trick, just a long string.

Here is a screen­shot of the by­pass in ac­tion:

Again, I would guess this was the re­sult of over­flow­ing a SQL col­umn limit.

Here is a screen­shot where I made a failed lo­gin to gen­er­ate a Correlation ID that starts with c542178e:

And here is a screen­shot of a search in my logs from October 09, 2025, look­ing for that Correlation ID in the last mon­th’s logs. As you can see, no log with the Correlation ID was found, in­di­cat­ing the by­pass was suc­cess­ful.

I dis­cov­ered this on 9/28/2025, and a week later when I went back to write up a re­port on it, Microsoft had al­ready fixed it! I’m not sure how they no­ticed and fixed this is­sue with­out also notic­ing and fix­ing GraphGoblin, but they did.

* 1/7/2025 Microsoft is un­able to re­pro­duce this so I make them a video

* 11/21/2025 Finally reproduce and start to roll out fix

* 12/1/2025 Re-escalated within MSRC re: bounty and severity, no change

What’s go­ing on here, Microsoft?

To re­view, here are the parts of a lo­gon POST that had flaws that en­abled the Azure Entra ID sign-in log by­passes. I’ve com­piled them into one screen­shot so you can see how many lo­gin pa­ra­me­ters have had is­sues iden­ti­fied.

This is the gate­house that se­cures hun­dreds, even thou­sands, of or­ga­ni­za­tions. How is it pos­si­ble that so many parts of this crit­i­cal fea­ture were so woe­fully untested? None of the by­passes that I’ve sub­mit­ted these last few years were com­pli­cated. Yet, some­how, Microsoft’s se­cu­rity re­view of Entra ID missed all of them.

Were is­sues in­tro­duced with AI cod­ing? Anybody who has worked with AI for cod­ing knows that it 100% can (and of­ten will) drop por­tions of your code when re­vis­ing it. Or did these is­sues get in­tro­duced slowly over the years? Or, have these prob­lems all been there since the start of Azure over a decade ago? Unfortunately, we will never know. Again, I in­vite Microsoft to speak pub­licly on these re­peated fail­ings.

Four sign-in log by­passes in the last three years, for what is ar­guably the most im­por­tant log of all of Azure. This does­n’t bode well for ad­mins who rely on these logs as a source of truth. So what can you do, short of mov­ing back to on prem? Well, if you shell out the cash for an E5 li­cense, you can still de­tect ma­li­cious ac­tiv­ity, in spite of Microsoft’s fail­ures.

After find­ing these last two by­passes, I started to see if I could iden­tify traf­fic from these by­passed ses­sions. I had been col­lect­ing Graph ac­tiv­ity in a Log Analytics work­space along with Sign-In logs. While re­view­ing logs I no­ticed that the Sign-In logs and the Graph Activity logs both had a Session ID field. Perfect! It should be pos­si­ble to take a list of all unique Session IDs from the Graph Activity logs and find a cor­re­spond­ing Session ID in the sign-in logs. Any Session IDs that only show up in the Graph Activity logs, and don’t ex­ist in any sign-in logs, must have by­passed the sign-in logs. Note for de­fend­ers: you will need an E5 li­cense to col­lect the Graph Activity logs.
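The core of that check is just a set difference. A minimal sketch of the idea, with made-up records and the SessionId field name from the logs:

```python
# Any SessionId present in the Graph Activity logs but absent from every
# sign-in log belongs to a session that bypassed sign-in logging.
# The sample records below are invented for illustration.

def find_bypassed_sessions(graph_logs, signin_logs):
    graph_ids = {r["SessionId"] for r in graph_logs}
    signin_ids = {r["SessionId"] for r in signin_logs}
    return graph_ids - signin_ids

graph_logs = [{"SessionId": "s1"}, {"SessionId": "s2"}, {"SessionId": "s3"}]
signin_logs = [{"SessionId": "s1"}, {"SessionId": "s3"}]
print(find_bypassed_sessions(graph_logs, signin_logs))  # {'s2'}
```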

I started off with a sim­ple query, ran it on my small test ten­ant, and voilà! I had a list of Graph ac­tiv­ity be­long­ing to my by­passed ses­sions.

However, it soon became apparent, when testing the detection with a client at a larger organization, that there was a lot of possible "noise" to account for. My simple checks didn't take noninteractive logins or service principal logins into account.

For a while, I was at an impasse. However, I soon found that somebody else had already thought of this and implemented it! Fabian Bader covers exactly this in his fantastic write-up, "Detect threats using Microsoft Graph activity logs - Part 2."

There is an en­tire sec­tion on find­ing miss­ing sign-in logs!

This method incorporates additional sources of sign-ins that I had not, including service principals, managed identities, and noninteractive user sign-ins. Instead of joining on SessionId values, this KQL performs the join on the more fine-grained properties: MicrosoftGraphActivityLogs.SignInActivityId and SigninLogs.UniqueTokenIdentifier.

I still had a lit­tle bit of noise, but I added the MicrosoftServicePrincipalSignInLogs to the list of Sign-In sources and it mostly cleared up.

MicrosoftGraphActivityLogs
| where TimeGenerated > ago(8d)
| join kind=leftanti (
    union isfuzzy=true
        SigninLogs,
        AADNonInteractiveUserSignInLogs,
        AADServicePrincipalSignInLogs,
        AADManagedIdentitySignInLogs,
        MicrosoftServicePrincipalSignInLogs
    | where TimeGenerated > ago(90d)
    | summarize arg_max(TimeGenerated, *) by UniqueTokenIdentifier
) on $left.SignInActivityId == $right.UniqueTokenIdentifier

If the above query returns false positives, you might want to experiment with matching on the broader SessionId property instead:

MicrosoftGraphActivityLogs
| where TimeGenerated > ago(8d)
| join kind=leftanti (
    union isfuzzy=true
        SigninLogs,
        AADNonInteractiveUserSignInLogs,
        AADServicePrincipalSignInLogs,
        AADManagedIdentitySignInLogs,
        MicrosoftServicePrincipalSignInLogs

...

Read the original on trustedsec.com »

7 260 shares, 13 trendiness

Drugwars for the TI-82/83/83+ Calculators


...

Read the original on gist.github.com »

8 241 shares, 21 trendiness

Oregon School Cell Phone Ban: ‘Engaged students, joyful teachers’


Gov. Tina Kotek vis­ited Estacada High School to hear how her cell phone ban has been go­ing. (Staff photo: Christopher Keizur)


Gov. Tina Kotek said she chose Estacada High for her visit be­cause of the pos­i­tive things hap­pen­ing within the dis­trict. (Staff photo: Christopher Keizur)


There was plenty of un­cer­tainty and de­bate about the ef­fec­tive­ness of a cell phone ban de­creed by ex­ec­u­tive or­der last sum­mer.

But at least in Estacada, the policy has earned two thumbs up, including approval from a "grumpy old teacher."

Jeff Mellema is a lan­guage arts teacher at Estacada High School. He has worked in the build­ing for 24 years, and he said the new pol­icy that pro­hibits stu­dents from us­ing their phones dur­ing the day has been a breath of fresh air.

"There is so much better discourse in my classroom, be it personal or academic," Mellema said. "Students can't avoid those conversations anymore with their phones."

"This ban has brought joy back to this old, grumpy teacher," he added with a smile.

That is the kind of feed­back Gov. Tina Kotek was hop­ing for as she vis­ited Estacada High School on Wednesday af­ter­noon, March 18. Her goal was to visit class­rooms, speak with ad­min­is­tra­tors, and meet with stu­dents one-on-one to hear about the ef­fec­tive­ness of her phone pol­icy.

"I knew when I put out the order, not everyone would love it from day one," Gov. Kotek said. "I appreciate all the feedback today."

"You have an amazing school. Go Rangers," she added with a smile.

For years, ed­u­ca­tors had re­ported that cell phones were dis­rup­tive in class­rooms and hin­dered ef­fec­tive teach­ing. Research sup­ported those anec­do­tal claims, show­ing that phones un­der­mine stu­dents’ abil­ity to fo­cus, even when they are just sit­ting on the desk, un­used.

So the gov­er­nor is­sued her ex­ec­u­tive or­der pro­hibit­ing cell phone use by stu­dents dur­ing the school day in Oregon’s K-12 pub­lic schools. To help im­ple­ment the ban, her of­fice worked with the Oregon Department of Education to share model poli­cies for schools that al­ready have pro­hi­bi­tions in place, as well as guid­ance on im­ple­men­ta­tion flex­i­bil­ity.

"We are grateful to have Governor Kotek here today to see with her own eyes the positive effect this has had," said Estacada Superintendent Ryan Carpenter. "We can now demand this expectation for our students' well-being and success."

Since that order was issued, every single public school district in Oregon has come into compliance with the ban.

"The goal is for every student to have the best opportunity to be successful," Kotek said. "They need to know how to talk to people, learn, and go out into the world."

The gov­er­nor vis­ited two class­rooms dur­ing her trip to Estacada High — Mr. Schaenman’s his­tory class and Mrs. Hannet’s al­ge­bra class. Gov. Kotek as­sured the kids that she still uses al­ge­bra in her day-to-day life, no mat­ter how un­likely that may seem to the young­sters.

In the class­rooms, she was able to take a straw poll around the cell phone ban and then get spe­cific, di­rect feed­back from the kids.

Overall, it was positive. The Rangers said they noticed changes in how they interact with teachers and peers. They don't feel that "siren's song" tug of their phones as often, and the changes are bleeding into everyday life as well — think fewer reminders to put phones away during family dinners. Phones also led to issues around bullying and online toxicity during the school day.

There are some hic­cups. The stu­dents spoke about dif­fi­cul­ties in track­ing busy sched­ules. Many ath­letes re­lied on their phones for prac­tice times and lo­ca­tions. Some ad­vanced place­ment kids said the overzeal­ous pro­grams mon­i­tor­ing school lap­tops blocked ac­cess to needed re­sources for study­ing/​re­search­ing school­work. There is even a strange quirk with school-pro­vided tech that pre­vents them from ac­cess­ing their cal­cu­la­tors.

"Maybe the filters are too strong right now," Gov. Kotek said. "That is why we are working with the districts to best implement the policy."

The kids also weighed in on the debate around the extent of the ban. The two options bandied about in Salem were a "bell-to-bell" policy or a classrooms-only ban. The latter would allow kids to use their phones during passing periods and lunch. Several advocated for that change.

That mir­rored the de­bate within the Oregon leg­is­la­ture. It ul­ti­mately led to a stale­mate and the need for Gov. Kotek’s ex­ec­u­tive rul­ing.

"When you make a decision like this, you don't know how it will ultimately work," Kotek told the students. "I appreciate you adapting to the situation and making it work for you."

While things could change in the fu­ture, the gov­er­nor is pleased with the early re­sults. The phone ban is here to stay.

Estacada School District is reveling in its status as the public school "Golden Child."

The visit from the gov­er­nor is the lat­est feather in the cap of the rural, small dis­trict. In 2025, Estacada High had a 92.5% grad­u­a­tion rate. That is a stun­ning turn­around from a record-low of 38.5% in 2015. The dis­trict cred­its poli­cies aimed at re­tain­ing tal­ented teach­ers and em­pow­er­ing stu­dents to take a more ac­tive role in their learn­ing.

"We are proud of these results," Carpenter said. "This is a reflection of and reward for a ton of hard work. This district literally changed its stars to be seen as a true academic powerhouse in Oregon."

That mindset continues with the cell phone ban. Like many others, Estacada had a version of this in place before the official edict. But the governor's push empowered administrators and teachers to fully embrace the ban.

"Any policy is only as good as the teachers who enforce it," Carpenter said.

In craft­ing its pol­icy, Estacada in­cor­po­rated feed­back from par­ents. That led to some key de­ci­sions around the cell phone ban. Rather than use pouches or lock­ers, stu­dents are al­lowed to keep their phones safely stored in their back­packs. That was for two rea­sons — it al­lows stu­dents to con­tact loved ones dur­ing emer­gen­cies, and many par­ents use phone track­ers to keep tabs on their kids.

The dis­trict has also leaned on di­rect, im­me­di­ate com­mu­ni­ca­tion. The flow of in­for­ma­tion reaches par­ents di­rectly, avoid­ing some of the mis­com­mu­ni­ca­tion that oc­curred in the past.

"Even I'm surprised by the impact this has had," Kotek said. "I'm thankful for the educators who took up the charge when I said we've got to do this."

"We can model what Estacada is doing for other districts across the state," she added.


...

Read the original on portlandtribune.com »

9 240 shares, 32 trendiness

The Los Angeles Aqueduct is Wild — Practical Engineering

[Note that this ar­ti­cle is a tran­script of the video em­bed­ded above.]

On the northern edge of Los Angeles, fresh water spills down two stark concrete chutes perched on the foothills of the San Gabriel Mountains, a place simply called "The Cascades." It's a deceptively simple-looking finish line: the end of a roughly 300-mile (or 500 km) journey from the eastern slopes of the Sierra Nevada into the city.

On November 5, 1913, tens of thousands of people climbed these hills to watch the first water arrive. When the gates finally opened, water trickled through, but that trickle quickly became a torrent. The project's chief engineer, William Mulholland, leaned over to the mayor and shouted the line that's been repeated ever since: "There it is, Mr. Mayor. Take it!"

That mo­ment was pro­found for a lot of rea­sons, de­pend­ing on where you live and how you feel about wa­ter rights. LA did­n’t be­come LA by liv­ing within the lim­its of its lo­cal re­sources. Its me­te­oric growth into the me­trop­o­lis we know was en­abled by an early and ex­tra­or­di­nary de­ci­sion to reach far be­yond its own wa­ter­shed and pull a whole new river into town. Today, roughly a third of LAs wa­ter comes from the Eastern Sierra through the Los Angeles Aqueduct sys­tem. That share swings with snow­pack, drought, and en­vi­ron­men­tal con­straints, but this one piece of in­fra­struc­ture helped turn a wa­ter-lim­ited town into a world city. It’s one of the most im­pres­sive and con­tro­ver­sial en­gi­neer­ing pro­jects in American his­tory.

But to re­ally ap­pre­ci­ate that wa­ter in the cas­cades, you have to look way up­stream and see what it took to get it there. It’s grav­ity, ge­ol­ogy, pol­i­tics, and hu­man am­bi­tion all in a part of the state that most peo­ple never see. Let’s take a lit­tle tour so you can see what I mean. I’m Grady and this is Practical Engineering.

When most peo­ple think about aque­ducts, this is what they pic­ture: a bridge car­ry­ing wa­ter over a val­ley or river. And, just to be clear, these are aque­ducts. But en­gi­neers of­ten use the term more broadly to de­scribe any type of con­veyance sys­tem that car­ries wa­ter over a long dis­tance from a source to a dis­tri­b­u­tion point. Could be a canal, a pipe, a tun­nel, or even just a ditch. In the case of the LA aque­duct, it’s all of them, plus a lot of sup­port­ing in­fra­struc­ture as well.

From the cen­ter of the city, it’s about a four hour drive to the Owens River Diversion Weir. It’s not ac­ces­si­ble to the pub­lic, but it is the of­fi­cial start of the LA Aqueduct, at least when it was orig­i­nally built. Here, all the snowmelt and rain from a huge drainage sys­tem be­tween the Sierra Nevada and Inyo Mountains fun­nel down into the Owens River, where a large con­crete di­ver­sion weir peels nearly all of it out of its nat­ural course and into a canal. This point is roughly 2,500 feet (or 750 me­ters) higher in el­e­va­tion than the bot­tom of the Cascades at the down­stream end, which makes it ob­vi­ous why LA chose it as a source. The en­tire aque­duct is a grav­ity ma­chine. There are no pumps push­ing the wa­ter to­ward the city. Half a mile of el­e­va­tion change feels like a lot un­til you re­al­ize you have to spread it out over 300 miles. It’s all achieved through care­ful grad­ing and man­ag­ing el­e­va­tions along the way to keep the flow mov­ing.

That care is par­tic­u­larly im­por­tant in this up­per sec­tion of the aque­duct, where the wa­ter flows in an open canal. To do this ef­fi­ciently, you need a rel­a­tively con­stant slope from start to fin­ish. That’s a tough thing to achieve on the sur­face of a bumpy earth. Following a river val­ley makes this eas­ier, but you can see the twists and turns nec­es­sary to keep the aque­duct on its gen­tle slope to­ward LA.

If it seems kind of wild that a city would buy up the land and wa­ter rights from some­where so far away, it did to a lot of the peo­ple who lived in the Owens Valley, too. A lot of the ac­qui­si­tions and pol­i­tics of the orig­i­nal LA Aqueduct were car­ried out in bad faith, sour­ing re­la­tion­ships with landown­ers, ranch­ers, farm­ers, and com­mu­ni­ties in the area. The saga is full of bro­ken promises and shady deal­ings. Then when the di­ver­sion started, the area dried up, dis­rupt­ing the ecol­ogy of the re­gion, mak­ing agri­cul­ture more dif­fi­cult and res­i­dents even more re­sent­ful. Many re­sorted to vi­o­lence, not against peo­ple but against the in­fra­struc­ture. They van­dal­ized parts of the aque­duct, a con­flict that later be­came known as the California Water Wars. In one case in 1924, ranch­ers used dy­na­mite to blow up a part of the canal. Later that year, they seized the Alabama Gates.

About 20 miles or 35 kilo­me­ters down­stream from the di­ver­sion weir, a set of gates sits on the east­ern bank of the aque­duct canal. Because it runs be­side the river val­ley, the aque­duct cap­tures some of the wa­ter that flows down from the sur­round­ing moun­tains in ad­di­tion to what’s di­verted out of the Owens River, par­tic­u­larly dur­ing strong storms. That means it’s ac­tu­ally pos­si­ble for the canal to over­fill. The Alabama Gates serve as a spill­way, al­low­ing op­er­a­tors to di­vert wa­ter back down to the river. This also helps drain the canal for main­te­nance or re­pairs when needed.

Those Owens Valley ranch­ers un­der­stood ex­actly what the Alabama Gates con­trolled. Open them, and the wa­ter would run back where it had al­ways run, down the Owens River, in­stead of south to Los Angeles. The re­sis­tance sim­mered and flared for years, but it did­n’t end in the dra­matic show­down at the aque­duct. Instead, it ended at a bank counter. The Inyo County Bank was run by two broth­ers who were also key or­ga­niz­ers and fi­nanciers of the re­sis­tance cam­paign. In August 1927, an au­dit re­vealed ma­jor short­falls and on­go­ing em­bez­zle­ment, and the bank quickly col­lapsed. Residents across the val­ley saw their sav­ings wiped out or frozen overnight, shat­ter­ing what was left of the com­mu­ni­ty’s abil­ity to keep fight­ing.

The Alabama Gates weren’t just a po­lit­i­cal flash­point though. They also marked an im­por­tant di­vid­ing line in the aque­duc­t’s de­sign. LA knew that even if the ranch­ers did­n’t re­lease the wa­ter to the river in protests, a lot of it would end up there any­way through seep­age. As the canal climbed away from the val­ley floor and crossed more porous soil, it would nat­u­rally lose its wa­ter through the ground. So, at the Alabama Gates, the aque­duct tran­si­tions from an un­lined canal to a con­crete-lined chan­nel. It’s still open to the air, so there’s no pro­tec­tion against evap­o­ra­tion or con­t­a­m­i­na­tion, but the losses to the ground are a lot less.

This de­sign con­tin­ues for about 35 miles (or 55 kilo­me­ters) through the val­ley. Along the way, the aque­duct passes the re­mains of Owens Lake. Once a large body of wa­ter, it quickly dried up with the di­ver­sion of the Owens River. Of course, there were im­pacts to wildlife from the loss of wa­ter, but the big­ger prob­lem came later: dust. All the fine sed­i­ment that set­tled on the lakebed over thou­sands of years was now ex­posed to the hot desert sun. When the wind picked up, it filled the air with fine par­tic­u­lates that are dan­ger­ous to breathe. Over the years, there have been times when Owens Lake is the sin­gle largest source of dust pol­lu­tion in the en­tire coun­try, and LA has spent more than a bil­lion dol­lars just try­ing to fix this prob­lem alone. The aque­duct pass­ing along the hill­side past the lake and its chal­lenges is a re­minder that the true cost of wa­ter is of­ten a lot more than the in­fra­struc­ture it takes to de­liver it.

So far, it might be ob­vi­ous that this aque­duct sys­tem is pretty frag­ile to be mak­ing up a ma­jor part of a city’s fresh wa­ter sup­ply. Even be­yond the van­dal­ism and po­lit­i­cal re­sis­tance, there are a lot of things that could go wrong along the way, from bank col­lapses, earth­quakes, di­ver­sion fail­ures, and more. That’s why Haiwee Reservoir was orig­i­nally built in a nar­row sad­dle be­tween two hills as a kind of buffer. With a dam on ei­ther side, it stored wa­ter up so the aque­duct could keep run­ning even dur­ing a dis­rup­tion up­stream. It also slowed the wa­ter down, ex­pos­ing it to the hot desert sun as a nat­ural form of UV dis­in­fec­tion. In the 1960s, the reser­voir was re­con­fig­ured into two basins to add some flex­i­bil­ity. That’s be­cause, around that time, the LA aque­duct be­came two. While the open-topped canal sec­tion was large enough to meet de­mands, the un­der­ground con­duit in the next sec­tion was­n’t. So, LA built a sec­ond one in 1970 to in­crease the flow. If you look at this map of the Haiwee Reservoirs, you can see that wa­ter has two paths: it can flow into the sec­ond aque­duct here from the north basin, or it can pass through the Merritt Cut to the south reser­voir, through the in­take there, and into the first aque­duct. This setup al­lows for some re­dun­dancy, along with reg­u­la­tion and bal­anc­ing of the flows be­tween the two aque­ducts. Haiwee marks the start of the long desert run, with both sys­tems no longer in open-topped lined canals, but run­ning un­der­ground in con­crete con­duits.

There are a lot of ad­van­tages to run­ning an aque­duct in a closed con­duit un­der­ground, es­pe­cially one this long through a desert land­scape. There’s far less evap­o­ra­tion and less po­ten­tial for con­t­a­m­i­na­tion. It does­n’t di­vide the land­scape at the sur­face level, so there’s no need for bridges, cul­verts, and wildlife cross­ings. Going un­der­ground also of­fers more flex­i­bil­ity when it comes to topog­ra­phy. You don’t have to fol­low the con­tours of the sur­face so care­fully be­cause if you come to a hill, you can just dig a lit­tle deeper to keep the con­stant slope.

Of course, those ben­e­fits come with a cost. An un­der­ground con­duit is more ex­pen­sive than a sim­ple chan­nel on the sur­face, and not all the prob­lems with topog­ra­phy are solved. This is Jawbone Canyon, one of the biggest drops for the first aque­duct. Rather than tak­ing a ma­jor de­tour around it, the aque­duct de­scends 850 feet (or 250 me­ters) and then as­cends back up. This type of struc­ture is of­ten called an in­verted siphon. I’ve done a video on how these work for sewer sys­tems, and I’ve also done a video on flood tun­nels that work in a sim­i­lar way, if you want to learn more af­ter this.

Unlike the con­crete con­duit, which re­ally just acts like an un­der­ground canal with a roof, this is one of the places where the wa­ter in the aque­duct is pres­sur­ized. 850 feet of wa­ter col­umn is about 370 psi, 26 bar, or two-and-a-half Megapascals. It’s a lot of pres­sure. These sec­tions of pipe had to be spe­cially man­u­fac­tured on the East Coast, where the ma­jor steel fa­cil­i­ties were, and trans­ported by ship be­cause of their size. They trav­elled all the way around Cape Horn, since the Panama Canal was still un­der con­struc­tion. There are ac­tu­ally quite a few of these siphons cross­ing canyons in this sec­tion of the aque­duct, but Jawbone Canyon is the biggest one.
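As a quick back-of-the-envelope check of those figures, static head converts to pressure as p = ρgh. A short sketch, assuming standard fresh-water density:

```python
# Hydrostatic pressure for an 850 ft column of water: p = rho * g * h.
RHO_WATER = 1000.0      # kg/m^3, fresh water
G = 9.81                # m/s^2
h = 850 * 0.3048        # 850 ft converted to meters

p_pa = RHO_WATER * G * h
print(round(p_pa / 1e6, 2), "MPa")   # ~2.54 MPa
print(round(p_pa / 1e5, 1), "bar")   # ~25.4 bar
print(round(p_pa / 6894.76), "psi")  # ~369 psi
```

That lands right on the roughly 370 psi quoted above.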

A lit­tle fur­ther down­stream, the LA aque­duct crosses the California Aqueduct, part of the State Water Project. That sys­tem has a con­nec­tion to LA as well, but this branch at the cross­ing ac­tu­ally heads to Silverwood Lake. However, there is a trans­fer fa­cil­ity, re­cently com­pleted, that can pump wa­ter out of the California Aqueduct di­rectly into the first LA aque­duct. This cre­ates op­por­tu­ni­ties for LA to buy wa­ter that moves through the state sys­tem and of­fers some flex­i­bil­ity in where that wa­ter ends up. There’s also a turn-in that can move wa­ter from the LA aque­duct into the California aque­duct for sit­u­a­tions where trades make sense. The sec­ond LA aque­duct passes un­der­neath the state canal here. And this is a good ex­am­ple of the dif­fer­ences be­tween the first pro­ject (built in the 1910s) and the sec­ond one, built in the 1960s. Over that time, the price of la­bor went up a lot more than the price of ma­te­ri­als. Where the first one care­fully fol­lowed the ex­ist­ing topog­ra­phy with bends and turns to min­i­mize the need for ex­pen­sive pres­sur­ized pipe, the sec­ond one could take a more di­rect path, re­duc­ing la­bor in re­turn for the more spe­cial­ized con­duit ma­te­ri­als.

After wandering more than a hundred miles (or 160 kilometers) apart, the two Los Angeles Aqueducts come back together at Fairmont Reservoir, in the northern foothills of the Sierra Pelona Mountains. This is the last major topographic barrier on the way to Los Angeles. There was no way to go up and over without pumps, so instead they went straight through. The largest project was the Elizabeth Tunnel.

Here, the two aqueducts come together again into a single watercourse. About 5 miles or 8 kilometers of excavation through everything from hard rock to loose, wet ground became one of the most difficult parts of the entire project. The tunnel required continuous temporary supports along most of its length, followed by a permanent concrete lining. It was a monumental effort for its time, and it was essential for more than just crossing the range: the Elizabeth Tunnel also delivers that water under pressure to the San Francisquito Power Plant Number 1.

This is the largest of the eight hy­dro­elec­tric plants that run along the aque­duct, cap­tur­ing some of the en­ergy from the wa­ter as it flows down­ward to­ward LA. These plants are a ma­jor part of how the pro­ject paid for it­self, and they con­tinue to serve as an im­por­tant source of elec­tric­ity in the re­gion to­day.

Continuing downstream, Bouquet Canyon Reservoir adds another layer of operational flexibility. It helps regulate flow through the power plants and provides additional storage, a sort of insurance policy since this whole reach depends on a single major tunnel crossing the San Andreas Fault. In case of a major earthquake, it'd be best if Angelenos could avoid a simultaneous water shortage.

The aque­duct splits again just up­stream of the San Francisquito Plant Number 2, which was fa­mously de­stroyed by the St. Francis Dam fail­ure. That reser­voir pro­ject was de­signed to sup­ple­ment the stor­age ca­pac­ity along the aque­duct, but the dam failed cat­a­stroph­i­cally in 1928, just 2 years af­ter it was com­pleted, killing more than 400 peo­ple and de­stroy­ing sev­eral parts of the aque­duct as well. The tragedy was one of the worst en­gi­neer­ing dis­as­ters in American his­tory. It put an­other stain on the aque­duct pro­ject, and it ef­fec­tively ru­ined the rep­u­ta­tion of William Mulholland, who was largely con­sid­ered a hero in LA for all his work on the aque­duct and the city’s wa­ter sys­tem. The dam was never re­built, but work­ers re­stored the aque­duct to func­tion­ing ser­vice in only 12 days.

At Drinkwater Reservoir, the two aque­ducts run roughly par­al­lel through the Santa Clarita area, some­times above­ground and some­times be­low, be­fore fi­nally reach­ing the ter­mi­nal struc­tures that carry wa­ter into LA. Usually, the wa­ter stays in the con­duits, which feed the two hy­dropower plants at the foot of the moun­tains. If the plants are out of ser­vice or there’s more flow than they can han­dle, you see ex­cess wa­ter thun­der­ing through the cas­cade struc­tures in­stead.

From here, the aque­duct drops out of the moun­tains and into the north end of the San Fernando Valley, where the wa­ter is treated and pre­pared for dis­tri­b­u­tion. After fil­tra­tion and dis­in­fec­tion, it’s stored in the Los Angeles Reservoir, the sys­tem’s ter­mi­nal reser­voir, so the city can smooth out day-to-day swings in de­mand even while the aque­duc­t’s in­flow stays rel­a­tively steady.

For most of Los Angeles' history, that "finished water storage" was out in the open air. But in the 2000s, drinking-water rules pushed utilities to add stronger protection for treated water held in uncovered reservoirs. There's a good chance you've seen their solution on the Veritasium channel or elsewhere: 96 million plastic shade balls that act like a floating cover, blocking sunlight to prevent water-chemistry problems and helping keep wildlife out. They're the final protection for this water that traveled so long to reach the city. While the LA Reservoir is, in a sense, the end of the journey for this water, the original diversion way back at the Owens River isn't even technically the start anymore!

In 1940, LA ex­tended the aque­duct sys­tem up­stream north­ward by con­nect­ing the Mono basin and fun­nel­ing its wa­ter through tun­nels to the Owens River basin. Like Owens Lake down­stream, Mono Lake be­gan dry­ing out as well. And also like Owens Lake, law­suits, court or­ders, and en­vi­ron­men­tal reg­u­la­tions have tem­pered the value of this wa­ter source, forc­ing LA to sig­nif­i­cantly re­duce di­ver­sions and im­ple­ment costly restora­tion pro­jects.

That’s kind of the story of the LA aque­duct in a nut­shell. The pro­ject seemed ob­vi­ous from an en­gi­neer­ing per­spec­tive. There was lots of snowmelt in the moun­tains; the city had the tech­ni­cal prowess, the fund­ing, the el­e­va­tion, and the po­lit­i­cal power to reach out and take it. The re­sult was one of the most im­pres­sive works of in­fra­struc­ture of the early 20th cen­tury. And con­tin­ued ef­forts to ex­pand and im­prove the sys­tem have made it even more ef­fi­cient, flex­i­ble, and valu­able to the many mil­lions of peo­ple who live in one of the most pop­u­lous cities in America, de­liv­er­ing not only wa­ter but also hun­dreds of megawatts of hy­dropower.

But in many ways, it was not only unscrupulous, but also short-sighted. Residents of the Owens Valley watched ranchland and farmland dry up as the water that had shaped their home was rerouted south. Native communities saw their homeland transformed, with access to gathering areas disrupted, places made unrecognizable, and cultural ties strained by changes they didn't choose. Wind picked up alkaline dust from dried lakebeds. Habitats were disrupted, and the birds that depended on these waters and wetlands lost part of what made this migration corridor work. It's easy to see why the aqueduct remains controversial, and why what we sometimes dismiss as "red tape" around major infrastructure is often completely justified due diligence. As engineers, and really, as humans, we have to try and account for costs that don't show up on a balance sheet, but can come back later as decades of lawsuits, mitigation, and restoration.

And even the aqueduct's original thesis (that there's reliable snowmelt up there, and a growing city down here) is starting to falter. In recent decades, the mountains have delivered less predictable runoff: more swings, more years when the timing is wrong, and more uncertainty about what "normal" even means anymore. California's climate has always moved in long cycles, but the margin for error is thinner now, and no one can say with much confidence when or if the moisture the state depends on will return to its old pattern.

The hope­ful part is that this is ex­actly where en­gi­neer­ing makes a dif­fer­ence: at the messy in­ter­sec­tion of ge­ol­ogy, cli­mate, cul­ture, pol­i­tics, and hu­man need. The Los Angeles Aqueduct is a case study in what we can build when we’re am­bi­tious, but also what hap­pens when we treat a land­scape like a ma­chine with only one out­put. The next era of wa­ter en­gi­neers can learn a lot from it.

...

Read the original on practical.engineering »

10 226 shares, 14 trendiness

The FSF doesn't usually sue for copyright infringement, but when we do, we settle for freedom — Free Software Foundation — Working together for free software

The Free Software Foundation (FSF), like many others, received a notice regarding the settlement in the copyright infringement lawsuit Bartz v. Anthropic, a class action claiming that Anthropic infringed copyright by downloading works from the Library Genesis and Pirate Library Mirror datasets to train large language models (LLMs). According to the notice, the district court ruled that using the books to train LLMs was fair use but left for trial the question of whether downloading them for this purpose was legal. The parties apparently agreed to settle instead of waiting for the trial, and they are now reaching out to potential copyright holders to offer money in lieu of potential damages.

The FSF holds copy­rights to many pro­grams in the GNU Project, as well as to sev­eral books. We pub­lish all works that we hold copy­rights to un­der free (as in free­dom) li­censes. Among the works we hold copy­rights over is Sam Williams and Richard Stallman’s Free as in free­dom: Richard Stallman’s cru­sade for free soft­ware, which was found in datasets used by Anthropic as train­ing in­puts for their LLMs. It was pub­lished by O’Reilly and by the FSF un­der the GNU Free Documentation License (GNU FDL). This is a free li­cense al­low­ing use of the work for any pur­pose with­out pay­ment.

Obviously, the right thing to do is pro­tect com­put­ing free­dom: share com­plete train­ing in­puts with every user of the LLM, to­gether with the com­plete model, train­ing con­fig­u­ra­tion set­tings, and the ac­com­pa­ny­ing soft­ware source code. Therefore, we urge Anthropic and other LLM de­vel­op­ers that train mod­els us­ing huge datasets down­loaded from the Internet to pro­vide these LLMs to their users in free­dom. We are a small or­ga­ni­za­tion with lim­ited re­sources and we have to pick our bat­tles, but if the FSF were to par­tic­i­pate in a law­suit such as Bartz v. Anthropic and find our copy­right and li­cense vi­o­lated, we would cer­tainly re­quest user free­dom as com­pen­sa­tion.

...

Read the original on www.fsf.org »

10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.