10 interesting stories served every morning and every evening.




1 812 shares, 56 trendiness

The open source AI coding agent

Zen gives you access to a handpicked set of AI models that OpenCode has tested and benchmarked specifically for coding agents. No need to worry about inconsistent performance and quality across providers; use validated models that work.

...

Read the original on opencode.ai »

2 721 shares, 25 trendiness

Chuck Norris, Action Icon and ‘Walker, Texas Ranger’ Star, Dies at 86

Chuck Norris, the martial arts champion who became an iconic action star and led the hit series “Walker, Texas Ranger,” has died. He was 86.

Norris was hospitalized in Hawaii on Thursday, and his family posted a statement Friday saying that he died that morning. “While we would like to keep the circumstances private, please know that he was surrounded by his family and was at peace,” his family wrote.

“To the world, he was a martial artist, actor, and a symbol of strength. To us, he was a devoted husband, a loving father and grandfather, an incredible brother, and the heart of our family,” the statement continued. “He lived his life with faith, purpose, and an unwavering commitment to the people he loved. Through his work, discipline, and kindness, he inspired millions around the world and left a lasting impact on so many lives.”

As an action star, Norris had a degree of credibility that most others could not match. Not only did he appear opposite the legendary Bruce Lee in the 1972 film “The Way of the Dragon” (aka “Return of the Dragon”), but he was a genuine martial arts champion who held a black belt in judo, a 3rd-degree black belt in Brazilian Jiu-Jitsu, a 5th-degree black belt in karate, an 8th-degree black belt in taekwondo, a 9th-degree black belt in Tang Soo Do and a 10th-degree black belt in Chun Kuk Do.

Norris was extremely prolific in the late 1970s and ’80s, starring in the “Delta Force” and “Missing in Action” films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “Lone Wolf McQuade” (1983), “Code of Silence” (1985) and “Firewalker” (1986).

Norris joined a bevy of other action stars in the Sylvester Stallone-directed “The Expendables 2” in 2012 after a seven-year absence from the screen.

While he scored high on cred­i­bil­ity, Norris did not leaven his work with hu­mor the way Arnold Schwarzenegger, Bruce Willis and Jackie Chan did. He was nev­er­the­less the ac­tion star of choice for those seek­ing an all-Amer­i­can icon.

In 1984, Norris starred in “Missing in Action,” the first in a series of films centered around the rescue of American POWs purportedly still held after being captured during the Vietnam War. (Norris’ younger brother Wieland had been killed while serving in Vietnam, and the actor dedicated his “Missing in Action” films to his brother’s memory, but critics of Norris and producer Cannon Films maintained that the films borrowed too heavily from the central conceit of Stallone’s highly successful “Rambo” films.)

As Norris’ movie career began to wane, he made a timely move to television, starring in the CBS series “Walker, Texas Ranger,” inspired by his film “Lone Wolf McQuade.” The program ran from 1993 to 2001, and the actor reprised the role of Cordell Walker in the TV movies “Walker Texas Ranger 3: Deadly Reunion” (1994) and “Walker, Texas Ranger: Trial by Fire” (2005). (Also in 2005 Norris made the last film in which he starred, the straight-to-DVD “The Cutter.”)

In his later years, Norris was portrayed in memes documenting fictional, frequently absurd feats associated with him, such as “Chuck Norris kills 100% of germs” and “Paper beats rock, rock beats scissors, and scissors beats paper, but Chuck Norris beats all 3 at the same time.” He also appeared in infomercials for workout equipment and became increasingly outspoken as a political conservative.

Carlos Ray Norris was born in Ryan, Okla.; his father served as a soldier in World War II. In 1958 he joined the Air Force as an Air Policeman (AP, analogous to the Army’s MPs). While serving at Osan Air Base in South Korea, Norris first acquired the nickname “Chuck” and began his training in Tang Soo Do (aka tangsudo), leading to his achievements in other martial arts and to his development of the hybrid style Chun Kuk Do (“The Universal Way”). He returned to the U.S. and served as an AP at March Air Force Base in California.

After his 1962 dis­charge, Norris worked for aero­space com­pany Northrop and opened a chain of karate schools; celebrity clients at the schools in­cluded Steve McQueen, Chad McQueen, Bob Barker, Priscilla Presley, Donny Osmond and Marie Osmond.

Norris made his acting debut in an uncredited role in the 1969 cult Matt Helm film “The Wrecking Crew,” starring Dean Martin. Norris met Bruce Lee at a martial arts demonstration in Long Beach, Calif., and played the nemesis of Lee’s character in the 1972 movie “The Way of the Dragon” (retitled “Return of the Dragon” for U.S. distribution). In 1974 McQueen spurred Norris to begin taking acting classes at MGM.

Norris first starred in the 1977 action film “Breaker! Breaker!,” in which he played a trucker searching for his brother, who has disappeared in a town run by a corrupt judge.

The actor proved his box office mettle with his subsequent films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “An Eye for an Eye” (1981) and “Lone Wolf McQuade.”

Norris began starring in movies for Cannon Films in 1984. Over the next four years, he became Cannon’s most prominent star, appearing in eight films, including the three “Missing in Action” films; “Code of Silence” (qualitatively, one of his best films); the two “Delta Force” films and “Firewalker.” Norris’ brother Aaron Norris produced several of these films, and also became a producer on “Walker, Texas Ranger.”

A long­time sup­porter of con­ser­v­a­tive politi­cians, he wrote sev­eral books with Christian and pa­tri­otic themes.

Norris was twice mar­ried, the first time to Di­anne Holechek from 1958 un­til their di­vorce in 1988.

He is sur­vived by sec­ond wife Gena O’Kelley, whom he mar­ried in 1998; three sons, Eric, Mike and Dakota, daugh­ters Danilee and Dina; and a num­ber of grand­chil­dren.

...

Read the original on variety.com »

3 683 shares, 33 trendiness

Fake Compliance as a Service


Reporters who want to get in touch: Drop an email at aicp­nay@pro­ton.me. In case my Proton ac­count gets blocked or you don’t get an an­swer, tweet us­ing the hash­tag #AICPNAY with con­tact de­tails and I’ll do my best to get in touch with you.

At its core, this article argues that Delve fakes compliance: it creates the appearance of compliance without the underlying substance.

Delve achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber-stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance. Their “US-based auditors” are Indian certification mills operating through empty US shells and mailbox agents. Auditors breach independence rules by signing off anyway, leaving companies unknowingly exposed to criminal liability under HIPAA and hefty fines under GDPR.

Delve hands cus­tomers fab­ri­cated ev­i­dence of board meet­ings, tests, and processes that never hap­pened. The plat­form forces com­pa­nies to choose be­tween adopt­ing fake ev­i­dence or per­form­ing mostly man­ual work with lit­tle real au­toma­tion or AI. It pro­duces au­dit re­ports that falsely claim in­de­pen­dent ver­i­fi­ca­tion while Delve it­self ef­fec­tively wears the au­di­tor hat, gen­er­at­ing iden­ti­cal re­ports for all clients, in­clud­ing Lovable, Bland, Cluely, and NASDAQ-traded Duos Edge. It hosts trust pages that list se­cu­rity mea­sures that were never im­ple­mented.

When clients ask hard questions, Delve dodges. They demand calls where founders charm, promise, and namedrop Lovable, Bland, and “every Fortune 500” as proof. When that fails, donuts arrive.

Preface - How it all started

What the leak and our research reveals

Two months ago, an email went out to a few hun­dred Delve clients in­form­ing them that Delve had leaked their au­dit re­ports, along­side other con­fi­den­tial in­for­ma­tion, through a Google spread­sheet that was pub­licly ac­ces­si­ble. This email also claimed that Delve’s au­dit re­ports were fraud­u­lent. The com­pany I work for was one of those clients that re­ceived that email.

Instead of pro­vid­ing clar­i­fi­ca­tion and be­ing trans­par­ent, Delve’s lead­er­ship de­cided to go into deny and de­flect mode. When di­rectly ask­ing them for clar­i­fi­ca­tion, they flat-out de­nied every­thing.

This was se­ri­ous, as it po­ten­tially in­volved nearly all of Delve’s clients and raised ques­tions about the va­lid­ity of the com­pli­ance re­ports Delve’s clients had re­ceived.

Multiple col­leagues in my net­work re­ceived the same email. Having the shared ex­pe­ri­ence of be­ing un­der­whelmed with the Delve ex­pe­ri­ence, and hav­ing the over­all sense that some­thing fishy was go­ing on, we de­cided to pool re­sources and in­ves­ti­gate to­gether. This ar­ti­cle is the re­sult of that col­lab­o­ra­tion.

The reason we felt the way we did was due to how little actual work any of us had to perform to become “compliant”, combined with a product practically devoid of any real AI. It mostly felt like a SOC 2 template pack with a thin SaaS platform wrapper where you simply adopt and sign all templated documents. No custom tailoring, no AI guidance, no real automation. Just pre-populated forms that required you to click “save”.

Some of us have gone through com­pli­ance be­fore and felt there was a huge mis­match be­tween our past ex­pe­ri­ence and our ex­pe­ri­ence with Delve.

In this article I will walk you through a typical experience with Delve, the leak that exposed their operation, and how it revealed the fraud we uncovered. Among other things, I will show that:

* Delve breaches AICPA/ISO rules by act­ing as au­di­tor, gen­er­at­ing pre-drafted as­sess­ments, tests, and con­clu­sions

* Delve re­lies on au­dit firms that rub­ber stamp re­ports be­cause gen­uine in­de­pen­dent ver­i­fi­ca­tion would ex­pose the ev­i­dence as fab­ri­cated or de­fi­cient

* Delve mis­leads clients by claim­ing re­ports are pro­duced by US-based CPA firms, when in re­al­ity they are pro­duced by Delve and rub­ber stamped by Indian cer­ti­fi­ca­tion mills

* Delve leads clients to be­lieve they are com­pli­ant when they are not

* Delve helps clients mis­lead the pub­lic by host­ing trust pages that con­tain se­cu­rity mea­sures that were never im­ple­mented

* Delve lies to clients when di­rectly ques­tioned, deny­ing doc­u­mented facts about the leak and re­port gen­er­a­tion

* Delve mar­kets AI-driven au­toma­tion while the prod­uct is prac­ti­cally de­void of AI, re­ly­ing on pre-pop­u­lated tem­plates, man­ual forms, and fab­ri­cated ev­i­dence

* Delve’s prod­uct is un­able to get com­pa­nies truly com­pli­ant

* Delve’s plat­form forces com­pa­nies to choose be­tween adopt­ing fake ev­i­dence or per­form­ing mostly man­ual work with lit­tle real au­toma­tion

* Unable to de­liver real com­pli­ance through its plat­form, Delve de­pends on fraud­u­lent au­di­tors who rub­ber stamp re­ports for clients, falling back on off-plat­form man­ual work with ex­ter­nal vCISOs and good au­di­tors only when com­plaints or pro­file threaten its busi­ness in­ter­ests

* Delve’s process re­sults in clients vi­o­lat­ing GDPR and HIPAA re­quire­ments, ex­pos­ing them to crim­i­nal li­a­bil­ity un­der HIPAA and fines up to 4% of global rev­enue un­der GDPR

For any prospect con­sid­er­ing Delve or any cur­rent Delve client:

Delve loves claiming this is all an attempt by their “jealous competitors” to fraudulently discredit them. When clients ask concrete questions, they dodge answering the question and instead coax you into getting on a call with them, where they charm you and tell you everything you want to hear. They’ll even throw in some donuts.

One other tactic they frequently employ is to promise issues won’t arise again because their old process, product or auditor is about to be, or has already been, replaced with something better whose results are just around the corner. This is a deflection and delaying technique. Whenever they start being frequently called out for using a particular auditor, they’ll switch to another equally dodgy one that hasn’t been flagged yet. If they’re called out on their deficient product, they point to a superior tier or new feature on the timeline that will solve everything.

Expressing un­hap­pi­ness and a de­sire to leave will lead to Delve pair­ing you with an ex­ter­nal vir­tual CISO that will help you do com­pli­ance right. You should know that this means you will have to do all the work man­u­ally.

If you are con­cerned about Delve’s con­duct and prac­tices, ask them ques­tions in writ­ing. Do not al­low them to de­flect. Do not get on a call with them. In the clos­ing words at the end of this ar­ti­cle you’ll find more ad­vice.

All in­for­ma­tion con­tained in this ar­ti­cle can be re­pro­duced by con­sult­ing pub­lic sources and hav­ing ac­cess to the Delve plat­form.

All screenshots and information are current as of mid-January 2026.

Here we set the stage. We’ll quickly list all par­ties in­volved, and will pro­vide some back­ground con­text that is use­ful to un­der­stand the rest of this ar­ti­cle.

High-profile companies like those listed in the image above, and hundreds of others, are affected by this. Companies additionally affected are those that partner with Delve’s clients, having been misinformed of the risks involved in partnering with those clients.

Many of those com­pa­nies process PHI of mil­lions of US cit­i­zens on a daily ba­sis. Some of those even serve na­tional de­fense in­ter­ests.

The au­dit firms listed in the il­lus­tra­tion above were iden­ti­fied dur­ing our re­search process, but are not nec­es­sar­ily all au­dit firms used by Delve.

From what we were able to es­tab­lish, 99%+ of Delve’s clients went through ei­ther Accorp or Gradient over the past 6 months.

In the wake of Delve’s leak in December, Delve is re­ported to have switched to Glocert as their pri­mary ISO 27001 au­dit­ing firm.

* Karun Kaushik and Selin Kocalar - The founders of Delve

The above in­di­vid­u­als know­ingly par­tic­i­pated in Delve’s de­lib­er­ate mis­con­duct re­gard­ing au­dit prac­tices.

Delve is a com­pli­ance com­pany. They help busi­nesses get cer­ti­fied for frame­works like SOC 2, ISO 27001, HIPAA, and GDPR. Companies need these cer­ti­fi­ca­tions to prove they han­dle data in a se­cure way and to un­lock deals with larger cus­tomers who re­quire them.

Compliance has tra­di­tion­ally been a time-con­sum­ing process that in­volved lots of spread­sheets. It used to be man­ual, ex­pen­sive and slow.

To give you an idea of what this is all about, we will primarily focus on SOC 2 in this article. SOC 2 is the most commonly pursued framework in the US. Practically all tech companies that sell to enterprises are expected to be “SOC 2 compliant”, which basically means they’ve had to have a SOC 2 audit performed in the last year.

Getting a clean SOC 2 re­port means hir­ing a CPA firm to re­view your se­cu­rity con­trols. If they suc­cess­fully ver­ify the se­cu­rity you claim to have, through a lot of ev­i­dence you pro­vide them, they is­sue a re­port say­ing your se­cu­rity mea­sures are sound. This re­port be­comes proof you can show cus­tomers and in­vestors.

SOC 2 and ISO 27001, the European coun­ter­part to SOC 2, are vol­un­tary frame­works. HIPAA and GDPR are not.

HIPAA ap­plies to any com­pany han­dling health records in the US. Penalties are se­vere, with will­ful ne­glect pun­ish­able by crim­i­nal charges and prison time.

GDPR cov­ers any com­pany pro­cess­ing data of EU res­i­dents, re­gard­less of where the com­pany is based. Fines run up to 4% of global an­nual rev­enue or 20 mil­lion eu­ros, whichever is higher.
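The “whichever is higher” rule above can be expressed as a one-line calculation. A minimal sketch (the function name is mine, and this is illustrative arithmetic, not legal guidance):

```python
def gdpr_fine_ceiling_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of a GDPR Article 83(5) fine: the higher of
    EUR 20 million or 4% of global annual revenue (illustrative)."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# EUR 1 billion in revenue: 4% (EUR 40M) exceeds the EUR 20M floor
print(gdpr_fine_ceiling_eur(1_000_000_000))  # 40000000.0

# EUR 100 million in revenue: the EUR 20M floor applies
print(gdpr_fine_ceiling_eur(100_000_000))  # 20000000.0
```

In other words, the flat EUR 20 million figure only binds for companies with global revenue under EUR 500 million; above that, the 4% term dominates.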

These frame­works carry the force of law be­cause they pro­tect in­for­ma­tion peo­ple can­not eas­ily pro­tect them­selves: med­ical his­to­ries, ge­netic data, bio­met­ric iden­ti­fiers, lo­ca­tion pat­terns, and the full record of their dig­i­tal lives.

The com­pa­nies that help other com­pa­nies be­come com­pli­ant through au­toma­tion are called GRC au­toma­tion plat­forms.

Delve was founded in 2023 by Karun Kaushik and Selin Kocalar, both Forbes 30 Under 30 mem­bers and MIT dropouts who met as fresh­men. They started with a med­ical AI scribe, piv­oted to com­pli­ance af­ter hit­ting HIPAA headaches them­selves, and went through Y Combinator in 2024.

In July 2025, Delve raised $32 million in Series A funding led by Insight Partners, following a $3.3 million seed round.

Delve’s pitch is speed through AI. They claim to get companies compliant in days rather than months, using what they call agentic AI through an “AI-native” platform.

Their mar­ket­ing promises AI agents that au­to­mat­i­cally col­lect ev­i­dence, write re­ports, and mon­i­tor com­pli­ance gaps with­out hu­man busy­work.

The re­al­ity, as this ar­ti­cle will show, is dif­fer­ent.

SOC 2 au­dits op­er­ate un­der strict in­de­pen­dence re­quire­ments de­signed to pre­serve trust in the at­tes­ta­tion process. These rules ex­ist pre­cisely to pre­vent the kind of con­duct this ar­ti­cle ex­poses. While Delve of­fers com­pli­ance ser­vices for HIPAA, GDPR, and ISO 27001, this ar­ti­cle fo­cuses pri­mar­ily on SOC 2. The rules sur­round­ing those other frame­works would re­quire their own de­tailed treat­ment; the goal here is to es­tab­lish a clear pat­tern in Delve’s be­hav­ior, not to cat­a­log every pos­si­ble reg­u­la­tory vi­o­la­tion.

The fundamental principle is simple: the party implementing controls cannot be the party attesting to their effectiveness. AICPA’s Code of Professional Conduct states that members must “accept the obligation to act in a way that will serve the public interest, honor the public trust, and demonstrate a commitment to professionalism.” When accountants cannot be expected to make truthful representations, we lose the ability to assess “any public or private company’s actual performance.”

For SOC 2 specif­i­cally, AT-C Section 205 re­quires prac­ti­tion­ers to main­tain in­de­pen­dence in both fact and ap­pear­ance through­out the en­gage­ment. The prac­ti­tioner must not as­sume man­age­ment re­spon­si­bil­i­ties or act as an ad­vo­cate for the sub­ject mat­ter be­ing ex­am­ined.

Under AT-C Section 315, auditors must seek to “obtain reasonable assurance that the entity complied with the specified requirements, in all material respects, including designing the examination to detect both intentional and unintentional material noncompliance.” This requires independent design of test procedures, independent evaluation of evidence, and independent formation of conclusions.

The au­di­tor’s re­port must rep­re­sent their own pro­fes­sional judg­ment, not pre-writ­ten con­clu­sions pro­vided by the en­tity be­ing au­dited or its plat­form ven­dor.

Delve’s model in­verts this struc­ture. By gen­er­at­ing au­di­tor con­clu­sions, test pro­ce­dures, and fi­nal re­ports be­fore any in­de­pen­dent re­view oc­curs, Delve places it­self in the role of both im­ple­menter and ex­am­iner. This is not a tech­ni­cal­ity. It is a struc­tural fraud that in­val­i­dates the en­tire at­tes­ta­tion.

For read­ers who wish to ver­ify these re­quire­ments:

This story has many threads, and I strug­gled with where to start. Do I lead with the leaked doc­u­ments? The au­di­tor shell game? The fake AI?

In the end, I felt that start­ing with my com­pa­ny’s ex­pe­ri­ence, which il­lus­trates many of the points I make and prove later on, was the most ef­fec­tive way to get the pic­ture across.

I col­lab­o­rated on this ar­ti­cle with users from other Delve clients. We com­pared notes. The pat­terns I de­scribe be­low showed up across all of our ac­counts. Unless men­tioned oth­er­wise, noth­ing here is cherry-picked. Later sec­tions dis­sect spe­cific mech­a­nisms.

This sec­tion shows what it is like to be a Delve client.

Delve goes all out during the sales process. Through their marketing and sales pitch they continuously emphasize being the fastest to get companies compliant, thanks to their AI. They repeatedly highlight the impressive companies they work with, how they partner with the best and most respected US-based audit firms, and how their work is accepted by Fortune 500 companies.

Within the first few min­utes of the demo call we did with Delve, they had al­ready men­tioned how com­pa­nies like Lovable, Bland, WisprFlow and hun­dreds of oth­ers are choos­ing Delve over com­peti­tors. The main point they kept dri­ving home was that their rev­o­lu­tion­ary tech, sup­pos­edly way ahead of any other com­pa­ny’s, en­abled com­pli­ance to be achieved with just 10 hours of work in­stead of the hun­dreds it used to take.

Their demo was short but did a good job showing how automated everything supposedly was. They clicked through everything pretty quickly, and stopped at every place that made the process look easy or automated. They showed an integration pulling information out of AWS, the “ready to go” default policies you could just adopt without modification, the reasonably short list of tasks, the AI questionnaire automation, the beautiful trust page you could publish and their “AI copilot” chatbot.

What they didn’t show, however, was that most integrations were fake and required manual screenshots, that tons of forms need to be manually filled out, that the trust page wasn’t accurate, or any of the tasks that came with pre-created fake evidence.

Their “best offer” for SOC 2 started at $15,000, which we were able to negotiate down to $13,000 on the call. They kept emphasizing what a great deal it was, that we were getting so much value that other companies paid more for. They remained pretty inflexible on pricing until we made clear we were considering a competitor. We must have been told at least four times that they couldn’t go any lower because they’d make a loss. There was a lot of pressure and posturing, but the price quickly dropped to just $6,000 when they realized we were serious about going elsewhere, and they threw in ISO 27001 and a 200-hour penetration test as well.

Pushed to sign within 24 hours or lose the deal, we decided to just move forward and get it over with. In hindsight, there were many red flags, but we just wanted to get the job done quickly and move on. If Delve was good enough for companies like Lovable, they had to be doing something right. Right?

Once you’ve been in­vited to the Delve plat­form and log in for the first time, you’ll be greeted with an in­ter­face that re­veals there are four cat­e­gories for ac­tiv­i­ties:

But even be­fore you get started do­ing any real work, you can go into the trust tab and ac­ti­vate and pub­lish the trust page for your com­pany. You’d think it would prob­a­bly be a very min­i­mal trust page with every­thing fail­ing at first, since you haven’t done any work, right?

Nope, you im­me­di­ately get a fully pop­u­lated trust page that would have you be­lieve you’re run­ning the most se­cure com­pany on earth. Delve’s trust page pre­sented our com­pany as fully se­cure be­fore we had com­pleted any com­pli­ance work, en­abling us to close deals based on mis­rep­re­sented se­cu­rity claims.

Noteworthy is that the list hadn’t changed in any way after we finished compliance, yet still wasn’t truthful. My expectation was that the list reflected the security you’d get at the end of Delve’s process, but it took getting there to learn that that wasn’t true either.

It says we did vul­ner­a­bil­ity scan­ning and a pen­test, when we only ever did the scan. It says we did data re­cov­ery sim­u­la­tions, which we never did. It says we re­me­di­ated vul­ner­a­bil­i­ties, which we never did.

It is lit­er­ally a made-up list of se­cu­rity mea­sures of which more than half are not im­ple­mented, or even sup­ported or ad­dressed by Delve’s process and plat­form.

In short, this prod­uct is built to help com­pa­nies fake se­cu­rity rather than com­mu­ni­cate their real se­cu­rity.

When you do ac­tu­ally de­cide to get started and do the work, you’ll find that the plat­form ex­pects you to per­form a num­ber of tasks across four cat­e­gories:

Policies - These are all pre-cre­ated, and Delve rec­om­mends adopt­ing them as they are.

Sadly, unless you spend a whole week manually revising them to be accurate, you will have inaccurate policies full of false promises. Every single one of Delve’s policies claims to have measures in place that Delve’s process and platform do not address. Even though we knew we’d technically be lying about our security to anyone we sent these policies to for review (clients, auditors, investors), we decided to adopt these policies because we simply didn’t have the bandwidth to rewrite them all manually.

Team - Background checks, security training and manual device security through screenshots for every employee.

This is a lot of manual work. It took each of our employees 2 hours to get their individual tasks done: 30 minutes to manually secure and screenshot laptop security configuration. It is a lot of work if you have a big company.

Also, we did­n’t have disk en­cryp­tion sup­port on a few lap­tops, but that did­n’t stop Delve from sign­ing off on HIPAA:

30 minutes (or more) to manually write up a performance evaluation. 30 minutes to do all training (watching Delve’s YouTube videos) and quizzes.

Tech - You add vendors here. This is what Delve calls integrations, but most of these are just forms where you are asked to submit screenshots.

Company - Security procedures and company-wide tasks live here. This is just a list of forms that are all pre-filled with fake evidence. You can speedrun through this by just clicking accept on every one of them.

One thing that stands out as you go through all of Delve’s tasks is that there isn’t any AI to speed up any of the work. The only thing available is an “AI Co-pilot” that can provide generic advice, and that doesn’t seem to have much context beyond what form you’re in. More than half of the time the AI co-pilot would tell me the evidence in the platform was not sufficient, and it would refer to links of other GRC platforms.

As op­posed to what is promised dur­ing the sales process, the tasks are not cus­tomized to the com­pany. You are ba­si­cally dumped into an in­ter­face and told by the cus­tomer suc­cess per­son that you can ask ques­tions on Slack if you get stuck.

The re­al­ity is that there will be very few times you’ll ever need any help from Delve’s team, pri­mar­ily be­cause every­thing in their plat­form is pre-pop­u­lated. You won’t have to ask how to do a risk as­sess­ment, be­cause you get the same pre-gen­er­ated risks that every other Delve cus­tomer gets. You won’t have to com­plain about hav­ing to do board meet­ings, af­ter you were ex­plic­itly promised dur­ing the sales call that this is some­thing all other providers but Delve ask for, be­cause you get pre-cre­ated fake board meet­ing notes that you can adopt as is.

Seriously, be­com­ing com­pli­ant with Delve is noth­ing more than click­ing through a bunch of pre-pop­u­lated forms and ac­cept­ing every­thing. Unless you want to do com­pli­ance the proper way, in which case Dropbox is as good a tool as Delve since you need to then man­u­ally col­lect and write every­thing.

Ok, you sit down to get crack­ing on com­pli­ance. What do you do next?

You accept the default policies that are inaccurate out of the box. Like this policy that claims an MDM is in place, when the Delve process consists of taking a manual screenshot of your Mac firewall settings:

You accept the pre-created contents for the security simulation by accepting the three “security incidents”:

You do the risk as­sess­ment by adopt­ing the ten de­fault risks:

One of our ob­vi­ous con­cerns was that this ap­proach would never pass an au­dit, but we were ex­plic­itly told Delve never failed a sin­gle au­dit in the past, and that au­di­tors have never flagged a sin­gle is­sue with their process. They tried to put our minds at ease by telling us about all the amaz­ing Delve clients that sold to Fortune 500 com­pa­nies us­ing the ex­act same process.

Delve con­tin­u­ously re­mind­ing us that they serve clients like Lovable, Bland, WisprFlow and many oth­ers ended up wear­ing us down, so we just took their word for it and moved on.

When the time comes to actually hook up your stack to Delve, so that Delve can do that “continuous monitoring” thing, you’ll find that the vast majority of their integrations don’t integrate with anything at all. They are just containers for screenshots you’ll have to go out and manually collect.

Imagine my surprise when I learned that AI-native compliance would mean I’d have to spend many hours manually collecting screenshots and filling out forms. I truly feel like a mindless agent in what Delve calls the “agentic experience”.

Here you can see how the Linear “integration” consists of Manual Tests and Forms:

On the em­ployee tab, you man­u­ally do back­ground checks through Certn, fill out more forms, watch use­less YouTube videos, and man­u­ally screen­shot lap­top se­cu­rity. For 100 em­ploy­ees, all 100 of them have to man­u­ally se­cure lap­tops once and up­load screen­shots.

You also do man­ual per­for­mance re­views for every em­ployee with no way to pull data from other so­lu­tions. Lots of typ­ing if you have 20+ em­ploy­ees.

...

Read the original on substack.com »

4 523 shares, 33 trendiness

Our commitment to Windows quality

I want to speak to you di­rectly, as an en­gi­neer who has spent his ca­reer build­ing tech­nol­ogy that peo­ple de­pend on every day. Windows touches more peo­ple’s lives than al­most any tech­nol­ogy on Earth. Every day, we hear from the com­mu­nity about how you ex­pe­ri­ence Windows. And over the past sev­eral months, the team and I have spent a great deal of time an­a­lyz­ing your feed­back. What came through was the voice of peo­ple who care deeply about Windows and want it to be bet­ter.

Today, I’m shar­ing what we are do­ing in re­sponse. Here are some of the ini­tial changes we will pre­view in builds with Windows Insiders this month and through­out April.

More taskbar cus­tomiza­tion, in­clud­ing ver­ti­cal and top po­si­tions: Repositioning the taskbar is one of the top asks we’ve heard from you. We are in­tro­duc­ing the abil­ity to repo­si­tion it to the top or sides of your screen, mak­ing it eas­ier to per­son­al­ize your work­space.

Integrating AI where it’s most mean­ing­ful, with craft and fo­cus: You will see us be more in­ten­tional about how and where Copilot in­te­grates across Windows, fo­cus­ing on ex­pe­ri­ences that are gen­uinely use­ful and well‑crafted. As part of this, we are re­duc­ing un­nec­es­sary Copilot en­try points, start­ing with apps like Snipping Tool, Photos, Widgets and Notepad.

Reducing dis­rup­tion from Windows Updates: Receiving up­dates should be pre­dictable and easy to plan around, so we’re giv­ing you more con­trol. This in­cludes the abil­ity to skip up­dates dur­ing de­vice setup to get to the desk­top faster, restart or shut down with­out in­stalling up­dates and pause up­dates for longer when needed, all while re­duc­ing up­date noise with fewer au­to­matic restarts and no­ti­fi­ca­tions.

Faster and more de­pend­able File Explorer: File Explorer is one of the most used sur­faces in Windows. Our first round of im­prove­ments will fo­cus on a quicker launch ex­pe­ri­ence, re­duced flicker, smoother nav­i­ga­tion and more re­li­able per­for­mance for every­day file tasks.

More con­trol over wid­gets and feed ex­pe­ri­ences: Widgets should feel help­ful and rel­e­vant, not dis­tract­ing or over­whelm­ing. We’re in­tro­duc­ing qui­eter de­faults, more con­trol over when and how wid­gets ap­pear, and im­proved per­son­al­iza­tion for the Discover feed.

A sim­pler, more trans­par­ent Windows Insider Program: The Windows Insider Program is how you help shape the fu­ture of Windows, and it should be easy to un­der­stand what to ex­pect and how to par­tic­i­pate. We are im­ple­ment­ing changes to make it eas­ier for you to nav­i­gate with clearer chan­nel de­f­i­n­i­tions, eas­ier ac­cess to new fea­tures, higher qual­ity builds, bet­ter vis­i­bil­ity into how your feed­back shapes Windows, and more op­por­tu­ni­ties to en­gage di­rectly with us.

Improved Feedback Hub, avail­able start­ing to­day: Your feed­back is es­sen­tial to im­prov­ing Windows, and it should be easy to share and see what oth­ers are say­ing. Today, we’re rolling out the largest up­date to Feedback Hub yet to our Insiders, with a re­designed ex­pe­ri­ence that makes it faster and eas­ier to sub­mit feed­back and en­gage with the com­mu­nity.

Building on these changes, what fol­lows be­low is our broader plan and ar­eas of fo­cus for the year to raise the bar on Windows 11 qual­ity. The work is un­der­way. You can ex­pect to see tan­gi­ble progress that you’ll be able to feel as you pre­view builds from us through­out the rest of the year.

Last night I had the chance to sit down with a small group of Windows Insiders here in Seattle to lis­ten, to an­swer ques­tions, and to share more about where we’re headed. The Seattle meetup was the first of sev­eral stops our team will be mak­ing to en­gage in per­son, in more cities around the world, to con­nect with the Windows com­mu­nity.

Thank you for hold­ing us to a high stan­dard. Windows is as much yours as it is ours. We’re com­mit­ted to strength­en­ing its foun­da­tion and de­liv­er­ing in­no­va­tion where it mat­ters, for you.

Please keep the feed­back com­ing, to help us shape the fu­ture of Windows to­gether.

What fol­lows is our plan to raise the bar on Windows 11 qual­ity this year, with a fo­cus on per­for­mance, re­li­a­bil­ity and well-crafted ex­pe­ri­ences. These ar­eas have mean­ing­ful im­pact on how you ex­pe­ri­ence Windows: how fast it starts and re­sponds, how sta­ble it is un­der real work­loads, and how con­sis­tent and thought­ful the ex­pe­ri­ence feels.

We are fo­cus­ing on mak­ing Windows 11 more re­spon­sive and con­sis­tent, so per­for­mance feels smooth and re­li­able.

Over the course of the year, we’re im­prov­ing sys­tem per­for­mance, app re­spon­sive­ness, File Explorer, and the Windows Subsystem for Linux, help­ing Windows stay fast as you move be­tween apps and work­loads.

Improving sys­tem per­for­mance: Reducing re­source us­age by Windows to free up more per­for­mance for what you’re do­ing.

Faster and more re­spon­sive Windows ex­pe­ri­ences, with early im­prove­ments al­ready de­liv­er­ing launch time re­duc­tions in apps like File Explorer

Improved mem­ory ef­fi­ciency, low­er­ing the base­line mem­ory foot­print for Windows, free­ing up more ca­pac­ity for the apps you run

More con­sis­tent per­for­mance, even un­der load, so apps stay re­spon­sive through­out the day

More fluid and re­spon­sive app in­ter­ac­tions: Reducing in­ter­ac­tion la­tency by mov­ing core Windows ex­pe­ri­ences to the WinUI3 frame­work.

Improving the shared UI in­fra­struc­ture that Windows ex­pe­ri­ences rely on, re­duc­ing in­ter­ac­tion la­tency and over­head at the plat­form level

Faster re­spon­sive­ness in core Windows ex­pe­ri­ences like the Start menu, by mov­ing more ex­pe­ri­ences to WinUI3

Improving File Explorer fun­da­men­tals: Reducing la­tency and im­prov­ing re­li­a­bil­ity across search, nav­i­ga­tion and file op­er­a­tions.

Copying and mov­ing large files will be faster and more re­li­able

Elevating the Windows Subsystem for Linux (WSL) ex­pe­ri­ence: Improving per­for­mance, re­li­a­bil­ity and in­te­gra­tion for de­vel­op­ers us­ing Linux tools and en­vi­ron­ments on Windows.

Better en­ter­prise man­age­ment with stronger pol­icy con­trol, se­cu­rity and gov­er­nance

Reliability is the bedrock of trust. You should trust that your PC is go­ing to be there and func­tion when you need it most.

Across the op­er­at­ing sys­tem, we will fo­cus on im­prov­ing the base­line re­li­a­bil­ity of ar­eas such as the Windows Insider Program, dri­vers and apps, up­dates and Windows Hello.

Strengthening re­li­a­bil­ity and qual­ity of the Windows Insider Program: Making it clearer what to ex­pect from each Insider chan­nel, rais­ing the qual­ity bar for builds and strength­en­ing feed­back sig­nals to im­prove build qual­ity be­fore broad re­lease.

Clearer vis­i­bil­ity into what fea­tures are in­cluded in each Insider build, so you know what to ex­pect

More con­trol over which new fea­tures you try, with eas­ier switch­ing be­tween Insider chan­nels to match your de­sired level of sta­bil­ity or early ac­cess

Higher qual­ity builds en­ter­ing each chan­nel, with more rig­or­ous val­i­da­tion and feed­back sig­nals be­fore re­lease

Stronger feed­back loops across Windows so is­sues are iden­ti­fied, pri­or­i­tized and ad­dressed faster

Increasing OS, dri­ver and app re­li­a­bil­ity: Delivering a smoother, more de­pend­able Windows 11 ex­pe­ri­ence by strength­en­ing sys­tem sta­bil­ity, dri­ver qual­ity and app re­li­a­bil­ity across our vi­brant ecosys­tem of sil­i­con, ISV and OEM part­ners. Our pri­or­i­ties in­clude:

Strengthening the Windows foun­da­tion by re­duc­ing OS level crashes, im­prov­ing dri­ver qual­ity and app sta­bil­ity across our ecosys­tem so PCs run smoothly and re­li­ably every day

Creating eas­ier, faster and sta­ble con­nec­tions with Bluetooth ac­ces­sories, fewer USB re­lated crashes and con­nec­tion loss, and im­proved printer dis­cov­er­abil­ity and con­nec­tions

More re­li­able cam­era and au­dio con­nec­tions to in­crease your pro­duc­tiv­ity at work and play

More con­sis­tent de­vice wake (including fur­ther wake con­sis­tency im­prove­ments for dock­ing sce­nar­ios) so you can get back to your work faster

Improving the Windows Update ex­pe­ri­ence: Faster, more pre­dictable up­dates with clearer con­trol over restarts and tim­ing.

Less dis­rup­tion from Windows Update, mov­ing de­vices to a sin­gle monthly re­boot, while or­ga­ni­za­tions and users who wish to get new fea­tures and fixes faster re­main able to do so

More di­rect con­trol over up­dates, in­clud­ing the abil­ity to pause up­dates for as long as you need and restart or shut down with­out be­ing forced to in­stall them

Faster, more re­li­able up­date ex­pe­ri­ences, with clearer progress dur­ing up­dates and built‑in re­cov­ery to help keep de­vices sta­ble if some­thing goes wrong

Improving Windows Hello bio­met­ric au­then­ti­ca­tion: We’re strength­en­ing Windows Hello sign‑in so it feels re­li­able, ef­fort­less and se­cure, re­duc­ing fric­tion while in­creas­ing con­fi­dence that your de­vice rec­og­nizes you cor­rectly.

More re­li­able fa­cial recog­ni­tion, so you can trust sign‑in to work when you need it

Faster and more de­pend­able fin­ger­print sign‑in, with fewer re­tries

Easier se­cure sign‑in on gam­ing hand­helds like the ROG Xbox Ally X, with full gamepad sup­port for cre­at­ing a PIN dur­ing setup and in Settings.

To us, craft is the dis­ci­pline that turns func­tional prod­ucts into loved ones through us­abil­ity, pol­ish, co­her­ence and re­fine­ment.

This year, you will see us in­vest in rais­ing the bar on the over­all us­abil­ity of the ex­pe­ri­ence, with more op­por­tu­ni­ties for per­son­al­iza­tion, less noise, less dis­trac­tion and more con­trol across the OS. That in­cludes be­ing thought­ful about how and where we bring AI into Windows, lead­ing with trans­parency, choice and con­trol, so that new ca­pa­bil­i­ties en­hance the ex­pe­ri­ence rather than com­pli­cate it.

Improving the Start and Taskbar ex­pe­ri­ence: Making these core Windows sur­faces more re­li­able, flex­i­ble and per­son­al­ized so you can nav­i­gate your PC in the way that works best for you.

Start and Taskbar de­liver even more con­sis­tent, de­pend­able ac­cess to apps and files, so mov­ing be­tween your con­tent feels fluid through­out the day

Expanded taskbar per­son­al­iza­tion op­tions, in­clud­ing al­ter­nate taskbar po­si­tions and a smaller taskbar, giv­ing you greater con­trol over how this core sur­face fits your work­flow

A more rel­e­vant Recommended sec­tion in Start will sur­face apps and con­tent you care about most, with clear con­trols to cus­tomize the ex­pe­ri­ence or turn it off

More focused user experience with fewer distractions: Making the Windows experience quieter, to help you stay focused, minimize distractions and stay in your flow.

Device setup on new Windows PCs is qui­eter and more stream­lined, with fewer pages and re­boots so get­ting started is sim­pler

Widgets sur­face in­for­ma­tion more in­ten­tion­ally by de­fault, keep­ing con­tent glance­able and re­duc­ing un­nec­es­sary in­ter­rup­tions

Simpler set­tings make it eas­ier to per­son­al­ize, opt into or turn off Widgets and feed con­tent based on your pref­er­ences

Reduced no­ti­fi­ca­tions so you can stay fo­cused through­out the day

Enhancing Search: Delivering faster, more ac­cu­rate re­sults with con­sis­tent search ex­pe­ri­ence across Windows sur­faces.

Find what mat­ters faster, with search that sur­faces apps, files and set­tings clearly so you can get to the right re­sult quickly

Clearer and more trust­wor­thy re­sults, with re­sults from con­tent on your de­vice easy to un­der­stand and clearly dis­tinct from web re­sults

A more con­sis­tent search ex­pe­ri­ence across the Taskbar, Start, File Explorer and Settings

As part of this ef­fort, we are evolv­ing how Windows is built be­hind the scenes to raise the qual­ity bar and de­liver in­no­va­tion where it mat­ters most, shaped by the feed­back we are hear­ing from you.

This in­cludes deeper val­i­da­tion and broader test­ing across real-world hard­ware and us­age sce­nar­ios be­fore new ex­pe­ri­ences reach Windows Insiders, and a more in­ten­tional ap­proach to where and how new ca­pa­bil­i­ties are in­tro­duced. The re­sult will be higher qual­ity builds, more mean­ing­ful in­no­va­tion and greater flex­i­bil­ity in choos­ing what you want to try. This is how we will con­tinue to build and ship Windows 11, so we can de­liver bet­ter ex­pe­ri­ences with greater con­fi­dence, month af­ter month.

In line with Microsoft’s Secure Future Initiative, we will con­tinue to make Windows more se­cure with every re­lease, build­ing in new ca­pa­bil­i­ties and strength­en­ing se­cu­rity by de­fault to help pro­tect users, de­vices and data.

As we im­prove and in­no­vate, we look for­ward to your con­tin­ued feed­back on where we can keep mak­ing Windows bet­ter.

...

Read the original on blogs.windows.com »

5 360 shares, 16 trendiness

The Los Angeles Aqueduct is Wild — Practical Engineering

[Note that this ar­ti­cle is a tran­script of the video em­bed­ded above.]

On the north­ern edge of Los Angeles, fresh wa­ter spills down two stark con­crete chutes perched on the foothills of the San Gabriel Mountains, a place sim­ply called The Cascades. It’s a de­cep­tively sim­ple-look­ing fin­ish line: the end of a roughly 300-mile (or 500 km) jour­ney from the east­ern slopes of the Sierra Nevada into the city.

On November 5, 1913, tens of thou­sands of peo­ple climbed these hills to watch the first wa­ter ar­rive. When the gates fi­nally opened, wa­ter trick­led through, but that trickle quickly be­came a tor­rent. The pro­jec­t’s chief en­gi­neer, William Mulholland, leaned over to the mayor and shouted the line that’s been re­peated ever since: There it is, Mr. Mayor. Take it!”

That moment was profound for a lot of reasons, depending on where you live and how you feel about water rights. LA didn't become LA by living within the limits of its local resources. Its meteoric growth into the metropolis we know was enabled by an early and extraordinary decision to reach far beyond its own watershed and pull a whole new river into town. Today, roughly a third of LA's water comes from the Eastern Sierra through the Los Angeles Aqueduct system. That share swings with snowpack, drought, and environmental constraints, but this one piece of infrastructure helped turn a water-limited town into a world city. It's one of the most impressive and controversial engineering projects in American history.

But to re­ally ap­pre­ci­ate that wa­ter in the cas­cades, you have to look way up­stream and see what it took to get it there. It’s grav­ity, ge­ol­ogy, pol­i­tics, and hu­man am­bi­tion all in a part of the state that most peo­ple never see. Let’s take a lit­tle tour so you can see what I mean. I’m Grady and this is Practical Engineering.

When most peo­ple think about aque­ducts, this is what they pic­ture: a bridge car­ry­ing wa­ter over a val­ley or river. And, just to be clear, these are aque­ducts. But en­gi­neers of­ten use the term more broadly to de­scribe any type of con­veyance sys­tem that car­ries wa­ter over a long dis­tance from a source to a dis­tri­b­u­tion point. Could be a canal, a pipe, a tun­nel, or even just a ditch. In the case of the LA aque­duct, it’s all of them, plus a lot of sup­port­ing in­fra­struc­ture as well.

From the cen­ter of the city, it’s about a four hour drive to the Owens River Diversion Weir. It’s not ac­ces­si­ble to the pub­lic, but it is the of­fi­cial start of the LA Aqueduct, at least when it was orig­i­nally built. Here, all the snowmelt and rain from a huge drainage sys­tem be­tween the Sierra Nevada and Inyo Mountains fun­nel down into the Owens River, where a large con­crete di­ver­sion weir peels nearly all of it out of its nat­ural course and into a canal. This point is roughly 2,500 feet (or 750 me­ters) higher in el­e­va­tion than the bot­tom of the Cascades at the down­stream end, which makes it ob­vi­ous why LA chose it as a source. The en­tire aque­duct is a grav­ity ma­chine. There are no pumps push­ing the wa­ter to­ward the city. Half a mile of el­e­va­tion change feels like a lot un­til you re­al­ize you have to spread it out over 300 miles. It’s all achieved through care­ful grad­ing and man­ag­ing el­e­va­tions along the way to keep the flow mov­ing.
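To put that gradient in perspective, here's a quick back-of-the-envelope check using the round figures above (roughly 2,500 feet of drop over roughly 300 miles). The numbers are illustrative arithmetic, not official survey data:

```python
# Average gradient of the aqueduct, from the round figures quoted above
# (about 2,500 ft of fall over about 300 miles) -- illustrative, not survey data.
drop_ft = 2500
length_mi = 300
length_ft = length_mi * 5280          # feet in 300 miles

grade = drop_ft / length_ft           # dimensionless slope
fall_per_mile = drop_ft / length_mi   # average fall in ft per mile

print(f"average grade: {grade:.5f} ({grade * 100:.2f}%)")
print(f"average fall: {fall_per_mile:.1f} ft per mile")
```

That works out to a grade of roughly 0.16%, or a little over 8 feet of fall per mile, gentler than many city streets, which is why the alignment has to be surveyed and graded so carefully.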

That care is par­tic­u­larly im­por­tant in this up­per sec­tion of the aque­duct, where the wa­ter flows in an open canal. To do this ef­fi­ciently, you need a rel­a­tively con­stant slope from start to fin­ish. That’s a tough thing to achieve on the sur­face of a bumpy earth. Following a river val­ley makes this eas­ier, but you can see the twists and turns nec­es­sary to keep the aque­duct on its gen­tle slope to­ward LA.

If it seems kind of wild that a city would buy up the land and wa­ter rights from some­where so far away, it did to a lot of the peo­ple who lived in the Owens Valley, too. A lot of the ac­qui­si­tions and pol­i­tics of the orig­i­nal LA Aqueduct were car­ried out in bad faith, sour­ing re­la­tion­ships with landown­ers, ranch­ers, farm­ers, and com­mu­ni­ties in the area. The saga is full of bro­ken promises and shady deal­ings. Then when the di­ver­sion started, the area dried up, dis­rupt­ing the ecol­ogy of the re­gion, mak­ing agri­cul­ture more dif­fi­cult and res­i­dents even more re­sent­ful. Many re­sorted to vi­o­lence, not against peo­ple but against the in­fra­struc­ture. They van­dal­ized parts of the aque­duct, a con­flict that later be­came known as the California Water Wars. In one case in 1924, ranch­ers used dy­na­mite to blow up a part of the canal. Later that year, they seized the Alabama Gates.

About 20 miles or 35 kilo­me­ters down­stream from the di­ver­sion weir, a set of gates sits on the east­ern bank of the aque­duct canal. Because it runs be­side the river val­ley, the aque­duct cap­tures some of the wa­ter that flows down from the sur­round­ing moun­tains in ad­di­tion to what’s di­verted out of the Owens River, par­tic­u­larly dur­ing strong storms. That means it’s ac­tu­ally pos­si­ble for the canal to over­fill. The Alabama Gates serve as a spill­way, al­low­ing op­er­a­tors to di­vert wa­ter back down to the river. This also helps drain the canal for main­te­nance or re­pairs when needed.

Those Owens Valley ranch­ers un­der­stood ex­actly what the Alabama Gates con­trolled. Open them, and the wa­ter would run back where it had al­ways run, down the Owens River, in­stead of south to Los Angeles. The re­sis­tance sim­mered and flared for years, but it did­n’t end in the dra­matic show­down at the aque­duct. Instead, it ended at a bank counter. The Inyo County Bank was run by two broth­ers who were also key or­ga­niz­ers and fi­nanciers of the re­sis­tance cam­paign. In August 1927, an au­dit re­vealed ma­jor short­falls and on­go­ing em­bez­zle­ment, and the bank quickly col­lapsed. Residents across the val­ley saw their sav­ings wiped out or frozen overnight, shat­ter­ing what was left of the com­mu­ni­ty’s abil­ity to keep fight­ing.

The Alabama Gates weren’t just a po­lit­i­cal flash­point though. They also marked an im­por­tant di­vid­ing line in the aque­duc­t’s de­sign. LA knew that even if the ranch­ers did­n’t re­lease the wa­ter to the river in protests, a lot of it would end up there any­way through seep­age. As the canal climbed away from the val­ley floor and crossed more porous soil, it would nat­u­rally lose its wa­ter through the ground. So, at the Alabama Gates, the aque­duct tran­si­tions from an un­lined canal to a con­crete-lined chan­nel. It’s still open to the air, so there’s no pro­tec­tion against evap­o­ra­tion or con­t­a­m­i­na­tion, but the losses to the ground are a lot less.

This design continues for about 35 miles (or 55 kilometers) through the valley. Along the way, the aqueduct passes the remains of Owens Lake. Once a large body of water, it quickly dried up with the diversion of the Owens River. Of course, there were impacts to wildlife from the loss of water, but the bigger problem came later: dust. All the fine sediment that had settled on the lakebed over thousands of years was now exposed to the hot desert sun. When the wind picked up, it filled the air with fine particulates that are dangerous to breathe. Over the years, there have been times when Owens Lake was the single largest source of dust pollution in the entire country, and LA has spent more than a billion dollars trying to fix this problem alone. The aqueduct passing along the hillside past the lake and its challenges is a reminder that the true cost of water is often a lot more than the infrastructure it takes to deliver it.

So far, it might be obvious that this aqueduct system is pretty fragile to be making up a major part of a city's fresh water supply. Even beyond the vandalism and political resistance, there are a lot of things that could go wrong along the way, from bank collapses, earthquakes, diversion failures, and more. That's why Haiwee Reservoir was originally built in a narrow saddle between two hills as a kind of buffer. With a dam on either side, it stored water so the aqueduct could keep running even during a disruption upstream. It also slowed the water down, exposing it to the hot desert sun as a natural form of UV disinfection. In the 1960s, the reservoir was reconfigured into two basins to add some flexibility. That's because, around that time, the LA aqueduct became two. While the open-topped canal section was large enough to meet demands, the underground conduit in the next section wasn't. So, LA built a second one in 1970 to increase the flow. If you look at this map of the Haiwee Reservoirs, you can see that water has two paths: it can flow into the second aqueduct here from the north basin, or it can pass through the Merritt Cut to the south reservoir, through the intake there, and into the first aqueduct. This setup allows for some redundancy, along with regulation and balancing of the flows between the two aqueducts. Haiwee marks the start of the long desert run, with both systems no longer in open-topped lined canals, but running underground in concrete conduits.

There are a lot of ad­van­tages to run­ning an aque­duct in a closed con­duit un­der­ground, es­pe­cially one this long through a desert land­scape. There’s far less evap­o­ra­tion and less po­ten­tial for con­t­a­m­i­na­tion. It does­n’t di­vide the land­scape at the sur­face level, so there’s no need for bridges, cul­verts, and wildlife cross­ings. Going un­der­ground also of­fers more flex­i­bil­ity when it comes to topog­ra­phy. You don’t have to fol­low the con­tours of the sur­face so care­fully be­cause if you come to a hill, you can just dig a lit­tle deeper to keep the con­stant slope.

Of course, those ben­e­fits come with a cost. An un­der­ground con­duit is more ex­pen­sive than a sim­ple chan­nel on the sur­face, and not all the prob­lems with topog­ra­phy are solved. This is Jawbone Canyon, one of the biggest drops for the first aque­duct. Rather than tak­ing a ma­jor de­tour around it, the aque­duct de­scends 850 feet (or 250 me­ters) and then as­cends back up. This type of struc­ture is of­ten called an in­verted siphon. I’ve done a video on how these work for sewer sys­tems, and I’ve also done a video on flood tun­nels that work in a sim­i­lar way, if you want to learn more af­ter this.

Unlike the con­crete con­duit, which re­ally just acts like an un­der­ground canal with a roof, this is one of the places where the wa­ter in the aque­duct is pres­sur­ized. 850 feet of wa­ter col­umn is about 370 psi, 26 bar, or two-and-a-half Megapascals. It’s a lot of pres­sure. These sec­tions of pipe had to be spe­cially man­u­fac­tured on the East Coast, where the ma­jor steel fa­cil­i­ties were, and trans­ported by ship be­cause of their size. They trav­elled all the way around Cape Horn, since the Panama Canal was still un­der con­struc­tion. There are ac­tu­ally quite a few of these siphons cross­ing canyons in this sec­tion of the aque­duct, but Jawbone Canyon is the biggest one.
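Those pressure figures check out against the basic hydrostatic relation P = ρgh. Here's a quick sketch using the round numbers from the narration (850 feet of head), not the pipe's actual design rating:

```python
# Static pressure at the bottom of the 850 ft Jawbone siphon, via P = rho * g * h.
# Round numbers from the narration above -- not the pipe's design rating.
rho = 1000.0              # density of water, kg/m^3
g = 9.81                  # gravitational acceleration, m/s^2
head_m = 850 * 0.3048     # 850 ft of water column, converted to metres

p_pa = rho * g * head_m   # pressure in pascals
print(f"{p_pa / 1e6:.2f} MPa")       # megapascals
print(f"{p_pa / 1e5:.1f} bar")       # bar
print(f"{p_pa / 6894.757:.0f} psi")  # pounds per square inch
```

That comes out to about 2.54 MPa, 25.4 bar, or roughly 369 psi, which matches the rounded figures above.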

A lit­tle fur­ther down­stream, the LA aque­duct crosses the California Aqueduct, part of the State Water Project. That sys­tem has a con­nec­tion to LA as well, but this branch at the cross­ing ac­tu­ally heads to Silverwood Lake. However, there is a trans­fer fa­cil­ity, re­cently com­pleted, that can pump wa­ter out of the California Aqueduct di­rectly into the first LA aque­duct. This cre­ates op­por­tu­ni­ties for LA to buy wa­ter that moves through the state sys­tem and of­fers some flex­i­bil­ity in where that wa­ter ends up. There’s also a turn-in that can move wa­ter from the LA aque­duct into the California aque­duct for sit­u­a­tions where trades make sense. The sec­ond LA aque­duct passes un­der­neath the state canal here. And this is a good ex­am­ple of the dif­fer­ences be­tween the first pro­ject (built in the 1910s) and the sec­ond one, built in the 1960s. Over that time, the price of la­bor went up a lot more than the price of ma­te­ri­als. Where the first one care­fully fol­lowed the ex­ist­ing topog­ra­phy with bends and turns to min­i­mize the need for ex­pen­sive pres­sur­ized pipe, the sec­ond one could take a more di­rect path, re­duc­ing la­bor in re­turn for the more spe­cial­ized con­duit ma­te­ri­als.

After wandering more than a hundred miles (or 160 kilometers) apart, the two Los Angeles Aqueducts come back together at Fairmont Reservoir, in the northern foothills of the Sierra Pelona Mountains. This is the last major topographic barrier on the way to Los Angeles. There was no way to go up and over without pumps, so instead they went straight through. The largest project was the Elizabeth Tunnel.

Here, the two aqueducts come together again into a single watercourse. About 5 miles or 8 kilometers of excavation through everything from hard rock to loose, wet ground became one of the most difficult parts of the entire project. The tunnel required continuous temporary supports along most of its length, followed by a permanent concrete lining. It was a monumental effort for its time, and it was essential for more than just crossing the range: the Elizabeth Tunnel also delivers that water under pressure to the San Francisquito Power Plant Number 1.

This is the largest of the eight hy­dro­elec­tric plants that run along the aque­duct, cap­tur­ing some of the en­ergy from the wa­ter as it flows down­ward to­ward LA. These plants are a ma­jor part of how the pro­ject paid for it­self, and they con­tinue to serve as an im­por­tant source of elec­tric­ity in the re­gion to­day.

Continuing downstream, Bouquet Canyon Reservoir adds another layer of operational flexibility. It helps regulate flow through the power plants and provides additional storage, a sort of insurance policy since this whole reach depends on a single major tunnel crossing the San Andreas Fault. In case of a major earthquake, it'd be best if Angelenos could avoid a simultaneous water shortage.

The aque­duct splits again just up­stream of the San Francisquito Plant Number 2, which was fa­mously de­stroyed by the St. Francis Dam fail­ure. That reser­voir pro­ject was de­signed to sup­ple­ment the stor­age ca­pac­ity along the aque­duct, but the dam failed cat­a­stroph­i­cally in 1928, just 2 years af­ter it was com­pleted, killing more than 400 peo­ple and de­stroy­ing sev­eral parts of the aque­duct as well. The tragedy was one of the worst en­gi­neer­ing dis­as­ters in American his­tory. It put an­other stain on the aque­duct pro­ject, and it ef­fec­tively ru­ined the rep­u­ta­tion of William Mulholland, who was largely con­sid­ered a hero in LA for all his work on the aque­duct and the city’s wa­ter sys­tem. The dam was never re­built, but work­ers re­stored the aque­duct to func­tion­ing ser­vice in only 12 days.

At Drinkwater Reservoir, the two aque­ducts run roughly par­al­lel through the Santa Clarita area, some­times above­ground and some­times be­low, be­fore fi­nally reach­ing the ter­mi­nal struc­tures that carry wa­ter into LA. Usually, the wa­ter stays in the con­duits, which feed the two hy­dropower plants at the foot of the moun­tains. If the plants are out of ser­vice or there’s more flow than they can han­dle, you see ex­cess wa­ter thun­der­ing through the cas­cade struc­tures in­stead.

From here, the aque­duct drops out of the moun­tains and into the north end of the San Fernando Valley, where the wa­ter is treated and pre­pared for dis­tri­b­u­tion. After fil­tra­tion and dis­in­fec­tion, it’s stored in the Los Angeles Reservoir, the sys­tem’s ter­mi­nal reser­voir, so the city can smooth out day-to-day swings in de­mand even while the aque­duc­t’s in­flow stays rel­a­tively steady.

For most of Los Angeles' history, that "finished water storage" was out in the open air. But in the 2000s, drinking-water rules pushed utilities to add stronger protection for treated water held in uncovered reservoirs. There's a good chance you've seen their solution on the Veritasium channel or elsewhere: 96 million plastic shade balls that act like a floating cover, blocking sunlight to prevent water-chemistry problems and helping keep wildlife out. They're the final protection for this water that traveled so long to reach the city. While the LA Reservoir is, in a sense, the end of the journey for this water, the original diversion way back at the Owens River isn't even technically the start anymore!

In 1940, LA extended the aqueduct system northward by connecting the Mono Basin and funneling its water through tunnels to the Owens River basin. Like Owens Lake downstream, Mono Lake began drying out as well. And also like Owens Lake, lawsuits, court orders, and environmental regulations have tempered the value of this water source, forcing LA to significantly reduce diversions and implement costly restoration projects.

That’s kind of the story of the LA aque­duct in a nut­shell. The pro­ject seemed ob­vi­ous from an en­gi­neer­ing per­spec­tive. There was lots of snowmelt in the moun­tains; the city had the tech­ni­cal prowess, the fund­ing, the el­e­va­tion, and the po­lit­i­cal power to reach out and take it. The re­sult was one of the most im­pres­sive works of in­fra­struc­ture of the early 20th cen­tury. And con­tin­ued ef­forts to ex­pand and im­prove the sys­tem have made it even more ef­fi­cient, flex­i­ble, and valu­able to the many mil­lions of peo­ple who live in one of the most pop­u­lous cities in America, de­liv­er­ing not only wa­ter but also hun­dreds of megawatts of hy­dropower.

But in many ways, it was not only unscrupulous but also short-sighted. Residents of the Owens Valley watched ranchland and farmland dry up as the water that had shaped their home was rerouted south. Native communities saw their homeland transformed, with access to gathering areas disrupted, places made unrecognizable, and cultural ties strained by changes they didn't choose. Wind picked up alkaline dust from dried lakebeds. Habitats were disrupted, and the birds that depended on these waters and wetlands lost part of what made this migration corridor work. It's easy to see why the aqueduct remains controversial, and why what we sometimes dismiss as "red tape" around major infrastructure is often completely justified due diligence. As engineers, and really, as humans, we have to try to account for costs that don't show up on a balance sheet but can come back later as decades of lawsuits, mitigation, and restoration.

And even the aque­duc­t’s orig­i­nal the­sis (that there’s re­li­able snowmelt up there, and a grow­ing city down here) is start­ing to fal­ter. In re­cent decades, the moun­tains have de­liv­ered less pre­dictable runoff: more swings, more years when the tim­ing is wrong, and more un­cer­tainty about what normal” even means any­more. California’s cli­mate has al­ways moved in long cy­cles, but the mar­gin for er­ror is thin­ner now, and no one can say with much con­fi­dence when or if the mois­ture the state de­pends on will re­turn to its old pat­tern.

The hope­ful part is that this is ex­actly where en­gi­neer­ing makes a dif­fer­ence: at the messy in­ter­sec­tion of ge­ol­ogy, cli­mate, cul­ture, pol­i­tics, and hu­man need. The Los Angeles Aqueduct is a case study in what we can build when we’re am­bi­tious, but also what hap­pens when we treat a land­scape like a ma­chine with only one out­put. The next era of wa­ter en­gi­neers can learn a lot from it.

...

Read the original on practical.engineering »

6 326 shares, 15 trendiness

HP realizes that mandatory 15-minute support call wait times aren't good support

In an odd ap­proach to try­ing to im­prove cus­tomer tech sup­port, HP al­legedly im­ple­mented manda­tory, 15-minute wait times for peo­ple call­ing the ven­dor for help with their com­put­ers and print­ers in cer­tain ge­o­gra­phies.

Callers from the United Kingdom, France, Germany, Ireland, and Italy were met with the forced holding periods, The Register reported on Thursday. The publication cited internal communications it saw from February 18 that reportedly said the wait times aimed to "influence customers to increase their adoption of digital self-solve, as a faster way to address their support question. This involves inserting a message of high call volumes, to expect a delay in connecting to an agent and offering digital self-solve solutions as an alternative."

Even if HP's telephone support center wasn't busy, callers would reportedly hear:

We are ex­pe­ri­enc­ing longer wait­ing times and we apol­o­gize for the in­con­ve­nience. The next avail­able rep­re­sen­ta­tive will be with you in about 15 min­utes.

To quickly re­solve your is­sue, please visit our web­site sup­port.hp.com to check out other sup­port op­tions or find help­ful ar­ti­cles and as­sis­tant to get a guided help by vis­it­ing vir­tu­ala­gent.hp­cloud.hp.com.

Callers were then told to "please stay on the line" if they wanted to speak to a representative. The phone system was also set to remind customers of their other support options and to apologize for the long (HP-induced) wait times upon the fifth, 10th, and 13th minute of the call.

The manda­tory sup­port call times have been lifted, per a com­pany state­ment shared by HP spokesper­son Katie Derkits:

We’re al­ways look­ing for ways to im­prove our cus­tomer ser­vice ex­pe­ri­ence. This sup­port of­fer­ing was in­tended to pro­vide more dig­i­tal op­tions with the goal of re­duc­ing time to re­solve in­quiries. We have found that many of our cus­tomers were not aware of the dig­i­tal sup­port op­tions we pro­vide. Based on ini­tial feed­back, we know the im­por­tance of speak­ing to live cus­tomer ser­vice agents in a timely fash­ion is para­mount. As a re­sult, we will con­tinue to pri­or­i­tize timely ac­cess to live phone sup­port to en­sure we are de­liv­er­ing an ex­cep­tional cus­tomer ex­pe­ri­ence.

HP didn't immediately clarify when it removed the wait times. Some HP workers were reportedly unhappy with the mandatory hold times, with an anonymous "insider" in HP's European operations reportedly telling The Register, per its Thursday report: "Many within HP are pretty unhappy [about] the measures being taken and the fact those making decisions don't have to deal with the customers who their decisions impact."

...

Read the original on arstechnica.com »

7 277 shares, 18 trendiness

A Japanese Glossary of Chopsticks Faux Pas

From bad manners to taboo, there are certain ways of using chopsticks that are considered to go against dining etiquette. These various acts, known as kiraibashi, are listed below.

To raise the chop­sticks above the height of one’s mouth.

To clean the chop­sticks in soup or bev­er­ages.

!!! (Serious) To pass food from one pair of chop­sticks to an­other. This is taboo due to the cus­tom af­ter a cre­ma­tion ser­vice of pick­ing up re­mains and pass­ing them be­tween chop­sticks.

To hold out one’s bowl for more while still hold­ing chop­sticks.

To keep putting the chop­sticks into the same side dishes. It is proper eti­quette to first eat rice, move on to eat from a side dish, eat rice again, and then eat from a dif­fer­ent side dish.

To pick up food with the chop­sticks and then put it back with­out tak­ing it.

To hold the chop­sticks be­tween both hands when ex­press­ing thanks for the food. It is con­sid­ered rude to hold ob­jects in your hands when in prayer and it is taboo to hold the chop­sticks while say­ing Itadakimasu, a phrase said be­fore eat­ing, giv­ing thanks for the life of the food.

To use the chop­sticks to push food deep in­side one’s mouth.

To drop the chop­sticks while eat­ing.

To turn the chop­sticks around when serv­ing food so that the tips of the chop­sticks that have touched one’s mouth do not touch the food.

To place one’s mouth against the side of a dish and push food in with the chop­sticks. This can also mean to use the chop­sticks to scratch one’s head or other parts of the body.

To take the tips of the chop­sticks in one’s mouth.

To use the chop­sticks to pick some­thing out from near the bot­tom of the dish.

To rub warib­ashi (disposable chop­sticks) to­gether to re­move splin­ters.

To use the chop­sticks to stir the food around to find some­thing.

To use the chop­sticks to stab food and skewer it.

To point at peo­ple and things us­ing chop­sticks.

To use one’s own chop­sticks in­stead of serv­ing chop­sticks to take food from a large serv­ing dish.

After eat­ing the top half of a fish, to use the chop­sticks to keep eat­ing by pok­ing be­tween the bones in­stead of re­mov­ing them.

To use the chop­sticks to keep pok­ing food around.

To hold chop­sticks to­gether and tap them on a dish or the top of the table to align the tips.

To make a noise by tap­ping chop­sticks on a dish.

!!! (Serious) To stand chop­sticks up­right in a bowl of rice. This is taboo, as it is the way rice is pre­sented as a Buddhist fu­neral of­fer­ing.

To use chop­sticks that are made of dif­fer­ent ma­te­ri­als (for ex­am­ple, one made from wood and the other made from bam­boo).

To hold one chop­stick in each hand and use them like a knife and fork to tear or cut food into smaller pieces.

To place the chop­sticks on the table with the tips point­ing to the right.

To allow sauce or soup to drip from the tips of the chopsticks when eating. Namida means "tears."

To grip both chop­sticks in a fist.

To place the chop­sticks like a bridge across the top of a dish to show one is fin­ished. Chopsticks should be placed on the hash­ioki (chopstick rest).

To use chop­sticks to push aside food that one does not want to eat.

To raise the tips of the chop­sticks higher than the back of one’s hand.

To shake off soup, sauce, or small bits of food from the tips of the chop­sticks.

To keep one’s chop­sticks hov­er­ing over the dishes, un­able to de­cide which food to eat.

To stir soup with the chop­sticks.

To put chop­sticks side­ways in one’s mouth in­stead of plac­ing them on the table when mov­ing a dish.

To bite off and eat grains of rice that are stuck to the chop­sticks.

To hold both chop­sticks and a dish in one hand at the same time.

To use a chop­stick like a tooth­pick.

To line the chop­sticks up to­gether and use them like a spoon to scoop up food.

To pull a dish to­ward one­self us­ing chop­sticks.

...

Read the original on www.nippon.com »

8 259 shares, 6 trendiness

Oregon School Cell Phone Ban: ‘Engaged students, joyful teachers’

Gov. Tina Kotek vis­ited Estacada High School to hear how her cell phone ban has been go­ing. (Staff photo: Christopher Keizur)

Gov. Tina Kotek said she chose Estacada High for her visit be­cause of the pos­i­tive things hap­pen­ing within the dis­trict. (Staff photo: Christopher Keizur)


There was plenty of un­cer­tainty and de­bate about the ef­fec­tive­ness of a cell phone ban de­creed by ex­ec­u­tive or­der last sum­mer.

But at least in Estacada, the policy has earned two thumbs up, including approval from a "grumpy old teacher."

Jeff Mellema is a lan­guage arts teacher at Estacada High School. He has worked in the build­ing for 24 years, and he said the new pol­icy that pro­hibits stu­dents from us­ing their phones dur­ing the day has been a breath of fresh air.

"There is so much better discourse in my classroom, be it personal or academic," Mellema said. "Students can't avoid those conversations anymore with their phones."

"This ban has brought joy back to this old, grumpy teacher," he added with a smile.

That is the kind of feed­back Gov. Tina Kotek was hop­ing for as she vis­ited Estacada High School on Wednesday af­ter­noon, March 18. Her goal was to visit class­rooms, speak with ad­min­is­tra­tors, and meet with stu­dents one-on-one to hear about the ef­fec­tive­ness of her phone pol­icy.

"I knew when I put out the order, not everyone would love it from day one," Gov. Kotek said. "I appreciate all the feedback today."

"You have an amazing school. Go Rangers," she added with a smile.

For years, ed­u­ca­tors had re­ported that cell phones were dis­rup­tive in class­rooms and hin­dered ef­fec­tive teach­ing. Research sup­ported those anec­do­tal claims, show­ing that phones un­der­mine stu­dents’ abil­ity to fo­cus, even when they are just sit­ting on the desk, un­used.

So the gov­er­nor is­sued her ex­ec­u­tive or­der pro­hibit­ing cell phone use by stu­dents dur­ing the school day in Oregon’s K-12 pub­lic schools. To help im­ple­ment the ban, her of­fice worked with the Oregon Department of Education to share model poli­cies for schools that al­ready have pro­hi­bi­tions in place, as well as guid­ance on im­ple­men­ta­tion flex­i­bil­ity.

"We are grateful to have Governor Kotek here today to see with her own eyes the positive effect this has had," said Estacada Superintendent Ryan Carpenter. "We can now demand this expectation for our students' well-being and success."

Since that order was issued, every single public school district in Oregon is in compliance with the ban.

"The goal is for every student to have the best opportunity to be successful," Kotek said. "They need to know how to talk to people, learn, and go out into the world."

The gov­er­nor vis­ited two class­rooms dur­ing her trip to Estacada High — Mr. Schaenman’s his­tory class and Mrs. Hannet’s al­ge­bra class. Gov. Kotek as­sured the kids that she still uses al­ge­bra in her day-to-day life, no mat­ter how un­likely that may seem to the young­sters.

In the class­rooms, she was able to take a straw poll around the cell phone ban and then get spe­cific, di­rect feed­back from the kids.

Overall, it was positive. The Rangers said they noticed changes in how they interact with teachers and peers. They don't feel that "siren's song" tug of their phones as often, and the changes are bleeding into everyday life as well — think fewer reminders to put phones away during family dinners. Phones had also led to issues around bullying and online toxicity during the school day.

There are some hic­cups. The stu­dents spoke about dif­fi­cul­ties in track­ing busy sched­ules. Many ath­letes re­lied on their phones for prac­tice times and lo­ca­tions. Some ad­vanced place­ment kids said the overzeal­ous pro­grams mon­i­tor­ing school lap­tops blocked ac­cess to needed re­sources for study­ing/​re­search­ing school­work. There is even a strange quirk with school-pro­vided tech that pre­vents them from ac­cess­ing their cal­cu­la­tors.

"Maybe the filters are too strong right now," Gov. Kotek said. "That is why we are working with the districts to best implement the policy."

The kids also weighed in on the debate around the extent of the ban. The two options bandied about in Salem were a "bell-to-bell" policy or a classroom-only ban. The latter would allow kids to use their phones during passing periods and lunch. Several advocated for that change.

That mir­rored the de­bate within the Oregon leg­is­la­ture. It ul­ti­mately led to a stale­mate and the need for Gov. Kotek’s ex­ec­u­tive rul­ing.

"When you make a decision like this, you don't know how it will ultimately work," Kotek told the students. "I appreciate you adapting to the situation and making it work for you."

While things could change in the fu­ture, the gov­er­nor is pleased with the early re­sults. The phone ban is here to stay.

Estacada School District is reveling in its status of public school "Golden Child."

The visit from the gov­er­nor is the lat­est feather in the cap of the rural, small dis­trict. In 2025, Estacada High had a 92.5% grad­u­a­tion rate. That is a stun­ning turn­around from a record-low of 38.5% in 2015. The dis­trict cred­its poli­cies aimed at re­tain­ing tal­ented teach­ers and em­pow­er­ing stu­dents to take a more ac­tive role in their learn­ing.

"We are proud of these results," Carpenter said. "This is a reflection and reward for a ton of hard work. This district literally changed its stars to be seen as a true academic powerhouse in Oregon."

That mindset continues with the cell phone ban. Like many others, Estacada had a version of this in place before the official edict. But the governor's push empowered administrators and teachers to fully embrace the ban.

"Any policy is only as good as the teachers who enforce it," Carpenter said.

In craft­ing its pol­icy, Estacada in­cor­po­rated feed­back from par­ents. That led to some key de­ci­sions around the cell phone ban. Rather than use pouches or lock­ers, stu­dents are al­lowed to keep their phones safely stored in their back­packs. That was for two rea­sons — it al­lows stu­dents to con­tact loved ones dur­ing emer­gen­cies, and many par­ents use phone track­ers to keep tabs on their kids.

The dis­trict has also leaned on di­rect, im­me­di­ate com­mu­ni­ca­tion. The flow of in­for­ma­tion reaches par­ents di­rectly, avoid­ing some of the mis­com­mu­ni­ca­tion that oc­curred in the past.

"Even I'm surprised by the impact this has had," Kotek said. "I'm thankful for the educators who took up the charge when I said we've got to do this."

"We can model what Estacada is doing for other districts across the state," she added.


...

Read the original on portlandtribune.com »

9 218 shares, 18 trendiness

ghostty-org/ghostling: A minimum viable terminal emulator built on top of the libghostty C API. Ex minimo, infinita nascuntur. 👻🐣

Ghostling is a demo project meant to highlight a minimum functional terminal built on the libghostty C API in a single C file.

The ex­am­ple uses Raylib for win­dow­ing and ren­der­ing. It is sin­gle-threaded (although libghostty-vt sup­ports thread­ing) and uses a 2D graph­ics ren­derer in­stead of a di­rect GPU ren­derer like the pri­mary Ghostty GUI. This is to show­case the flex­i­bil­ity of libghostty and how it can be used in a va­ri­ety of con­texts.

Libghostty is an em­bed­d­a­ble li­brary ex­tracted from Ghostty’s core, ex­pos­ing a C and Zig API so any ap­pli­ca­tion can em­bed cor­rect, fast ter­mi­nal em­u­la­tion.

Ghostling uses libghostty-vt, a zero-de­pen­dency li­brary (not even libc) that han­dles VT se­quence pars­ing, ter­mi­nal state man­age­ment (cursor po­si­tion, styles, text re­flow, scroll­back, etc.), and ren­derer state man­age­ment. It con­tains no ren­derer draw­ing or win­dow­ing code; the con­sumer (Ghostling, in this case) pro­vides its own. The core logic is ex­tracted di­rectly from Ghostty and in­her­its all of its real-world ben­e­fits: ex­cel­lent, ac­cu­rate, and com­plete ter­mi­nal em­u­la­tion sup­port, SIMD-optimized pars­ing, lead­ing Unicode sup­port, highly op­ti­mized mem­ory us­age, and a ro­bust fuzzed and tested code­base, all proven by mil­lions of daily ac­tive users of Ghostty GUI.

Despite be­ing a min­i­mal, thin layer above libghostty, look at all the fea­tures you do get:

* Unicode and multi-code­point grapheme han­dling (no shap­ing or lay­out)

* And more. Effectively all the terminal emulation features supported by Ghostty!

These fea­tures aren’t prop­erly ex­posed by libghostty-vt yet but will be:

These are things that could work but haven’t been tested or aren’t im­ple­mented in Ghostling it­self:

This list is in­com­plete and we’ll add things as we find them.

libghostty is fo­cused on core ter­mi­nal em­u­la­tion fea­tures. As such, you don’t get fea­tures that are pro­vided by the GUI above the ter­mi­nal em­u­la­tion layer, such as:

* Search UI (although search in­ter­nals are pro­vided by libghostty-vt)

These are the things that libghostty con­sumers are ex­pected to im­ple­ment on their own, if they want them. This ex­am­ple does­n’t im­ple­ment these to try to stay as min­i­mal as pos­si­ble.

Requires CMake 3.19+, a C com­piler, and Zig 0.15.x on PATH. Raylib is fetched au­to­mat­i­cally via CMake’s FetchContent if not al­ready in­stalled.

cmake -B build -G Ninja

cmake --build build

./build/ghostling

cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release

cmake --build build

After the ini­tial con­fig­ure, you only need to run the build step:

cmake --build build

libghostty-vt has a fully ca­pa­ble and proven Zig API. Ghostty GUI it­self uses this and is a good — al­though com­plex — ex­am­ple of how to use it. However, this demo is meant to show­case the min­i­mal C API since C is so much more broadly used and ac­ces­si­ble to a wide va­ri­ety of de­vel­op­ers and lan­guage ecosys­tems.

libghostty-vt has a C API and can have zero de­pen­den­cies, so it can be used with min­i­mally thin bind­ings in ba­si­cally any lan­guage. I’m not sure yet if the Ghostty pro­ject will main­tain of­fi­cial bind­ings for lan­guages other than C and Zig, but I hope the com­mu­nity will cre­ate and main­tain bind­ings for many lan­guages!

No no no! libghostty has no opin­ion about the ren­derer or GUI frame­work used; it’s even stand­alone WASM-compatible for browsers and other en­vi­ron­ments.

libghostty provides a high-performance render state API which only keeps track of the state required to build a renderer. This is the same API used by Ghostty GUI for Metal and OpenGL rendering and in this repository for the Raylib 2D graphics API. You can layer any renderer on top of this!

I needed to pick some­thing. Really, any build sys­tem and any li­brary could be used. CMake is widely used and sup­ported, and Raylib is a sim­ple and el­e­gant li­brary for win­dow­ing and 2D ren­der­ing that is easy to set up. Don’t get bogged down in these de­tails!

...

Read the original on github.com »

10 213 shares, 15 trendiness

Rewriting our Rust WASM Parser in TypeScript

We rewrote our Rust WASM Parser in TypeScript - and it got 3x Faster

We built the openui-lang parser in Rust and com­piled it to WASM. The logic was sound: Rust is fast, WASM gives you near-na­tive speed in the browser, and our parser is a rea­son­ably com­plex multi-stage pipeline. Why would­n’t you want that in Rust?

Turns out we were op­ti­mis­ing the wrong thing.

The openui-lang parser con­verts a cus­tom DSL emit­ted by an LLM into a React com­po­nent tree. It runs on every stream­ing chunk — so la­tency mat­ters a lot. The pipeline has six stages:

* Mapper: con­verts in­ter­nal AST into the pub­lic OutputNode for­mat con­sumed by the React ren­derer

Every call to the WASM parser pays a manda­tory over­head re­gard­less of how fast the Rust code it­self runs:

The Rust pars­ing it­self was never the slow part. The over­head was en­tirely in the bound­ary: copy string in, se­ri­al­ize re­sult to JSON string, copy JSON string out, then V8 de­se­ri­al­izes it back into a JS ob­ject.

The nat­ural ques­tion was: what if WASM re­turned a JS ob­ject di­rectly, skip­ping the JSON se­ri­al­iza­tion step? We in­te­grated serde-wasm-bind­gen which does ex­actly this — it con­verts the Rust struct into a JsValue and re­turns it di­rectly.

Here's why that didn't help: JS cannot read a Rust struct's bytes from WASM linear memory as a native JS object — the two runtimes use completely different memory layouts. To construct a JS object from Rust data, serde-wasm-bindgen must recursively materialise Rust data into real JS arrays and objects, which involves many fine-grained conversions across the runtime boundary per parse() invocation.

Compare that to the JSON approach: serde_json::to_string() runs in pure Rust with zero boundary crossings, produces one string, one memcpy copies it to the JS heap, then V8's native C++ JSON.parse processes it in a single optimised pass. Fewer, larger, and more optimised operations win over many small ones.

We ported the full parser pipeline to TypeScript. Same six-stage ar­chi­tec­ture, same ParseResult out­put shape — no WASM, no bound­ary, runs en­tirely in the V8 heap.

What is mea­sured: A sin­gle parse(com­pleteString) call on the fin­ished out­put string. This iso­lates per-call parser cost.

How it was run: 30 warm-up it­er­a­tions to sta­bilise JIT, then 1000 timed it­er­a­tions us­ing per­for­mance.now() (µs pre­ci­sion). The me­dian is re­ported. Fixtures are real LLM-generated com­po­nent trees se­ri­alised in each for­mat’s real stream­ing syn­tax.

* sim­ple-table — root + one Table with 3 columns and 5 rows (~180 chars)
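The methodology above can be sketched roughly as follows. This is a hedged illustration, not the project's actual harness; the `benchmark` function and the `JSON.parse` workload are stand-ins:

```typescript
import { performance } from "node:perf_hooks";

// Sketch of the benchmarking method described above: warm-up iterations
// to stabilise the JIT, then timed iterations with performance.now(),
// reporting the median sample.
function benchmark(fn: () => void, warmup = 30, iters = 1000): number {
  for (let i = 0; i < warmup; i++) fn(); // let the JIT settle
  const samples: number[] = [];
  for (let i = 0; i < iters; i++) {
    const t0 = performance.now();
    fn();
    samples.push(performance.now() - t0);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(iters / 2)]; // median, in milliseconds
}

// Example workload: time V8's native JSON.parse on a tiny input.
const medianMs = benchmark(() => JSON.parse('{"a":1}'));
console.log(medianMs >= 0);
```

Reporting the median rather than the mean keeps one-off GC pauses or scheduler hiccups from skewing the result.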

Eliminating WASM fixed the per-call cost, but the stream­ing ar­chi­tec­ture still had a deeper in­ef­fi­ciency.

The parser is called on every LLM chunk. The naïve ap­proach ac­cu­mu­lates chunks and re-parses the en­tire string from scratch each time:

For a 1000-char out­put de­liv­ered in 20-char chunks: 50 parse calls pro­cess­ing a cu­mu­la­tive to­tal of ~25,000 char­ac­ters. O(N²) in the num­ber of chunks.
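As a hedged sketch (names are illustrative, not the project's code), the naïve accumulate-and-reparse loop and its quadratic cost look like this:

```typescript
type Parse = (src: string) => unknown;

// Naïve streaming: accumulate chunks and re-parse the entire buffer on
// every call. Returns the cumulative character count processed, which
// grows quadratically in the number of chunks.
function naiveStream(chunks: string[], parse: Parse): number {
  let buffer = "";
  let charsParsed = 0;
  for (const chunk of chunks) {
    buffer += chunk;
    parse(buffer); // re-parse everything seen so far
    charsParsed += buffer.length;
  }
  return charsParsed;
}

// A 1000-char document delivered in 20-char chunks: 50 parse calls.
const chunks = Array.from({ length: 50 }, () => "x".repeat(20));
const total = naiveStream(chunks, () => null);
console.log(total); // 20 + 40 + ... + 1000 = 25,500 characters parsed
```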

Statements ter­mi­nated by a depth-0 new­line are im­mutable — the LLM will never come back and mod­ify them. We added a stream­ing parser that caches com­pleted state­ment ASTs:

Completed state­ments are never re-parsed. Only the trail­ing in-progress state­ment is re-parsed per chunk. O(total_length) in­stead of O(N²).
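The caching idea can be sketched like this, assuming a toy statement grammar where parentheses drive nesting depth and a newline at depth 0 terminates a statement. The real parser's AST types and push() API will differ:

```typescript
interface StreamState {
  cachedAsts: string[]; // completed-statement "ASTs" (stand-in: raw text)
  tail: string;         // in-progress statement, re-parsed each push
  depth: number;        // current nesting depth
}

// Feed one chunk into the stream. Statements terminated by a depth-0
// newline are immutable: they are cached once and never revisited; only
// the trailing in-progress statement would be re-parsed per chunk.
function push(state: StreamState, chunk: string): void {
  for (const ch of chunk) {
    if (ch === "(" || ch === "[") state.depth++;
    else if (ch === ")" || ch === "]") state.depth--;
    if (ch === "\n" && state.depth === 0) {
      state.cachedAsts.push(state.tail); // statement complete: cache it
      state.tail = "";
    } else {
      state.tail += ch;
    }
  }
  // In a real parser, only state.tail would be re-parsed here.
}

const state: StreamState = { cachedAsts: [], tail: "", depth: 0 };
push(state, "row(a)\nrow(");
push(state, "b)\nrow(c");
console.log(state.cachedAsts.length); // 2 completed statements cached
console.log(state.tail);              // "row(c" still in progress
```

Because each character is examined exactly once and completed statements never re-enter the parse loop, total work is O(total_length) rather than O(N²) in the number of chunks.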

What is mea­sured: The to­tal parse over­head ac­cu­mu­lated across every chunk call for one com­plete doc­u­ment. This is dif­fer­ent from the one-shot bench­mark — it mea­sures the sum of all parse calls dur­ing a real stream, not a sin­gle call. This is the num­ber that af­fects ac­tual user-per­ceived re­spon­sive­ness.

How it was run: Documents are re­played in 20-char chunks. Each chunk trig­gers a parse() (naïve) or push() (incremental) call. Total time across all calls is recorded. 100 full-stream re­plays, me­dian taken.

The sim­ple-table fix­ture is a sin­gle state­ment — there’s noth­ing to cache, so both ap­proaches are equiv­a­lent. The ben­e­fit scales with the num­ber of state­ments be­cause more of the doc­u­ment gets cached and skipped on each chunk.

The one-shot table shows 13.4µs for con­tact-form; the stream­ing table shows 316µs (naïve). These are not con­tra­dic­tory — they mea­sure dif­fer­ent things:

* 13.4µs = cost of one parse() call on the com­plete 400-char string

* 316µs = to­tal cost of ~20 parse() calls dur­ing the stream (chunk 1 parses 20 chars, chunk 2 parses 40 chars, …, chunk 20 parses 400 chars — cu­mu­la­tive sum of all those grow­ing calls)

This ex­pe­ri­ence sharp­ened our think­ing on the right use cases for WASM:

✅ Compute-bound with min­i­mal in­terop: im­age/​video pro­cess­ing, cryp­tog­ra­phy, physics sim­u­la­tions, au­dio codecs. Large in­put → scalar out­put or in-place mu­ta­tion. The bound­ary is crossed rarely.

✅ Portable na­tive li­braries: ship­ping C/C++ li­braries (SQLite, OpenCV, libpng) to the browser with­out a full JS rewrite.

❌ Parsing structured text into JS objects: you pay the serialization cost either way. The parsing computation is fast enough that V8's JIT eliminates any Rust advantage. The boundary overhead dominates.

❌ Frequently-called func­tions on small in­puts: if the func­tion is called 50 times per stream and the com­pu­ta­tion takes 5µs, you can­not amor­tise the bound­ary cost.

Profile where time is ac­tu­ally spent be­fore choos­ing the im­ple­men­ta­tion lan­guage.

For us, the cost was never in the com­pu­ta­tion - it was al­ways in data trans­fer across the WASM-JS bound­ary.

"Direct object passing" through serde-wasm-bindgen is not cheaper.

Constructing a JS ob­ject field-by-field from Rust in­volves more bound­ary cross­ings than a sin­gle JSON string trans­fer, not fewer. The bound­ary cross­ings hap­pen in­side the sin­gle FFI call, in­vis­i­bly.

Algorithmic com­plex­ity im­prove­ments dom­i­nate lan­guage-level op­ti­mi­sa­tions.

Going from O(N²) to O(N) in the stream­ing case had a larger prac­ti­cal im­pact than switch­ing from WASM to TypeScript.

WASM and JS do not share a heap.

WASM has a flat linear memory (WebAssembly.Memory) that JS can read as raw bytes, but those bytes are Rust's internal layout - pointers, enum discriminants, alignment padding - completely opaque to the JS runtime. Conversion is always required and always costs something.

...

Read the original on www.openui.com »
