10 interesting stories served every morning and every evening.
Chuck Norris, the martial arts champion who became an iconic action star and led the hit series “Walker, Texas Ranger,” has died. He was 86.
Norris was hospitalized in Hawaii on Thursday, and his family posted a statement Friday saying that he died that morning. “While we would like to keep the circumstances private, please know that he was surrounded by his family and was at peace,” his family wrote.
Disney and Marc Jacobs Team Up on ‘Mickey & Friends’ Tennis Collection
Debra O'Connell Rises to Chairman of U.S. Entertainment TV for Disney as Creative Chief Dana Walden Unveils Her Executive Team
“To the world, he was a martial artist, actor, and a symbol of strength. To us, he was a devoted husband, a loving father and grandfather, an incredible brother, and the heart of our family,” the statement continued. “He lived his life with faith, purpose, and an unwavering commitment to the people he loved. Through his work, discipline, and kindness, he inspired millions around the world and left a lasting impact on so many lives.”
As an action star, Norris had a degree of credibility that most others could not match. Not only did he appear opposite the legendary Bruce Lee in the 1972 film “The Way of the Dragon” (aka “Return of the Dragon”), but he was a genuine martial arts champion who was a black belt in judo, 3rd degree black belt in Brazilian Jiu-Jitsu, 5th degree black belt in Karate, 8th degree black belt in Taekwondo, 9th degree black belt in Tang Soo Do and 10th degree black belt in Chun Kuk Do.
Norris was extremely prolific in the late 1970s and ’80s, starring in “The Delta Force” and “Missing in Action” films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “Lone Wolf McQuade” (1983), “Code of Silence” (1985) and “Firewalker” (1986).
Norris joined a bevy of other action stars in the Sylvester Stallone-directed “The Expendables 2” in 2012 after an absence from the screen of seven years.
While he scored high on credibility, Norris did not leaven his work with humor the way Arnold Schwarzenegger, Bruce Willis and Jackie Chan did. He was nevertheless the action star of choice for those seeking an all-American icon.
In 1984, Norris starred in “Missing in Action,” the first in a series of films centered around the rescue of American POWs purportedly still held after being captured during the Vietnam War. (Norris’ younger brother Wieland had been killed while serving in Vietnam, and the actor dedicated his “Missing in Action” films to his brother’s memory, but critics of Norris and producer Cannon Films maintained that the films borrowed too heavily from the central conceit of Stallone’s highly successful “Rambo” films.)
As Norris’ movie career began to wane, he made a timely move to television, starring in the CBS series “Walker, Texas Ranger,” inspired by his film “Lone Wolf McQuade.” The program ran from 1993-2001, and the actor reprised the role of Cordell Walker in the TV movies “Walker Texas Ranger 3: Deadly Reunion” (1994) and “Walker, Texas Ranger: Trial by Fire” (2005). (Also in 2005 Norris made the last film in which he starred, the straight-to-DVD “The Cutter.”)
In his later years, Norris was portrayed in memes documenting fictional, frequently absurd feats associated with him, such as “Chuck Norris kills 100% of germs” and “Paper beats rock, rock beats scissors, and scissors beats paper, but Chuck Norris beats all 3 at the same time.” He also appeared in infomercials for workout equipment and became increasingly outspoken as a political conservative.
Carlos Ray Norris was born in Ryan, Okla.; his father served as a soldier in World War II. In 1958 he joined the Air Force as an Air Policeman (AP, analogous to the Army’s MPs). While serving at Osan Air Base in South Korea, Norris first acquired the nickname “Chuck” and began his training in Tang Soo Do (aka tangsudo), leading to his achievements in other martial arts and to his development of the hybrid style Chun Kuk Do (“The Universal Way”). He returned to the U.S. and served as an AP at March Air Force Base in California.
After his 1962 discharge, Norris worked for aerospace company Northrop and opened a chain of karate schools; celebrity clients at the schools included Steve McQueen, Chad McQueen, Bob Barker, Priscilla Presley, Donny Osmond and Marie Osmond.
Norris made his acting debut in an uncredited role in the 1969 cult Matt Helm film “The Wrecking Crew,” starring Dean Martin. Norris met Bruce Lee at a martial arts demonstration in Long Beach, Calif., and played the nemesis of Lee’s character in the 1972 movie “The Way of the Dragon” (retitled “Return of the Dragon” for U.S. distribution). In 1974 Steve McQueen spurred Norris to begin taking acting classes at MGM.
Norris first starred in the 1977 action film “Breaker! Breaker!,” in which he played a trucker searching for his brother, who has disappeared in a town run by a corrupt judge.
The actor proved his box office mettle with his subsequent films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “An Eye for an Eye” (1981) and “Lone Wolf McQuade.”
Norris began starring in movies for Cannon Films in 1984. Over the next four years, he became Cannon’s most prominent star, appearing in eight films, including the three “Missing in Action” films; “Code of Silence” — qualitatively, one of his best films — the two “Delta Force” films and “Firewalker.” Norris’ brother Aaron Norris produced several of these films, and also became a producer on “Walker, Texas Ranger.”
A longtime supporter of conservative politicians, he wrote several books with Christian and patriotic themes.
Norris was twice married, the first time to Dianne Holechek from 1958 until their divorce in 1988.
He is survived by second wife Gena O’Kelley, whom he married in 1998; three sons, Eric, Mike and Dakota; daughters Danilee and Dina; and a number of grandchildren.
...
Read the original on variety.com »
The version that arrived in your mailbox is truncated. Visit the full article here to view the rest.
Reporters who want to get in touch: Drop an email at aicpnay@proton.me. In case my Proton account gets blocked or you don’t get an answer, tweet using the hashtag #AICPNAY with contact details and I’ll do my best to get in touch with you.
At its core, this article argues that Delve fakes compliance, creating the appearance of compliance without the underlying substance.
Delve achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance. Their “US-based auditors” are Indian certification mills operating through empty US shells and mailbox agents. Auditors breach independence rules by signing off anyway, leaving companies unknowingly exposed to criminal liability under HIPAA and hefty fines under GDPR.
Delve hands customers fabricated evidence of board meetings, tests, and processes that never happened. The platform forces companies to choose between adopting fake evidence or performing mostly manual work with little real automation or AI. It produces audit reports that falsely claim independent verification while Delve itself effectively wears the auditor hat, generating identical reports for all clients, including Lovable, Bland, Cluely, and NASDAQ-traded Duos Edge. It hosts trust pages that list security measures that were never implemented.
When clients ask hard questions, Delve dodges. They demand calls where founders charm, promise, and namedrop Lovable, Bland, and “every Fortune 500” as proof. When that fails, donuts arrive.
Preface - How it all started

What the leak and our research reveals
Two months ago, an email went out to a few hundred Delve clients informing them that Delve had leaked their audit reports, alongside other confidential information, through a Google spreadsheet that was publicly accessible. This email also claimed that Delve’s audit reports were fraudulent. The company I work for was one of those clients that received that email.
Instead of providing clarification and being transparent, Delve’s leadership decided to go into deny and deflect mode. When we asked them directly for clarification, they flat-out denied everything.
This was serious, as it potentially involved nearly all of Delve’s clients and raised questions about the validity of the compliance reports Delve’s clients had received.
Multiple colleagues in my network received the same email. Having the shared experience of being underwhelmed with the Delve experience, and having the overall sense that something fishy was going on, we decided to pool resources and investigate together. This article is the result of that collaboration.
The reason we felt the way we did was due to how little actual work any of us had to perform to become ‘compliant’, combined with a product practically devoid of any real AI. It mostly felt like a SOC 2 template pack with a thin SaaS platform wrapper where you simply adopt and sign all templated documents. No custom tailoring, no AI guidance, no real automation. Just pre-populated forms that required you to click “save”.
Some of us have gone through compliance before and felt there was a huge mismatch between our past experience and our experience with Delve.
In this article I will walk you through a typical experience with Delve, the leak that exposed their operation, and how it revealed the fraud we uncovered. Among other things, I will show how:
* Delve breaches AICPA/ISO rules by acting as auditor, generating pre-drafted assessments, tests, and conclusions
* Delve relies on audit firms that rubber stamp reports because genuine independent verification would expose the evidence as fabricated or deficient
* Delve misleads clients by claiming reports are produced by US-based CPA firms, when in reality they are produced by Delve and rubber stamped by Indian certification mills
* Delve leads clients to believe they are compliant when they are not
* Delve helps clients mislead the public by hosting trust pages that contain security measures that were never implemented
* Delve lies to clients when directly questioned, denying documented facts about the leak and report generation
* Delve markets AI-driven automation while the product is practically devoid of AI, relying on pre-populated templates, manual forms, and fabricated evidence
* Delve’s product is unable to get companies truly compliant
* Delve’s platform forces companies to choose between adopting fake evidence or performing mostly manual work with little real automation
* Unable to deliver real compliance through its platform, Delve depends on fraudulent auditors who rubber stamp reports for clients, falling back on off-platform manual work with external vCISOs and reputable auditors only when complaints or a client's high profile threaten its business interests
* Delve’s process results in clients violating GDPR and HIPAA requirements, exposing them to criminal liability under HIPAA and fines up to 4% of global revenue under GDPR
For any prospect considering Delve or any current Delve client:
Delve loves claiming this is all an attempt by their “jealous competitors” to fraudulently discredit them. When clients ask concrete questions, they dodge answering the question and instead coax you into getting on a call with them, where they charm you and tell you everything you want to hear. They’ll even throw in some donuts.
One other tactic they frequently employ is promising that issues won’t arise again because their old process, product or auditor has been, or is about to be, replaced with something better. This is a deflection and delaying technique. Whenever they start being frequently called out for using a particular auditor, they’ll switch to another equally dodgy one that hasn’t been flagged yet. If they’re called out on their deficient product, they point to a superior tier or new feature on the timeline that will solve everything.
Expressing unhappiness and a desire to leave will lead to Delve pairing you with an external virtual CISO that will help you do compliance right. You should know that this means you will have to do all the work manually.
If you are concerned about Delve’s conduct and practices, ask them questions in writing. Do not allow them to deflect. Do not get on a call with them. In the closing words at the end of this article you’ll find more advice.
All information contained in this article can be reproduced by consulting public sources and having access to the Delve platform.
All screenshots and information are current as of mid-January 2026.
Here we set the stage. We’ll quickly list all parties involved, and will provide some background context that is useful to understand the rest of this article.
High-profile companies like those listed in the image above, and hundreds of others, are affected by this. Also affected are companies that partner with Delve’s clients, having been misinformed of the risks involved in those partnerships.
Many of those companies process PHI of millions of US citizens on a daily basis. Some of those even serve national defense interests.
The audit firms listed in the illustration above were identified during our research process, but are not necessarily all audit firms used by Delve.
From what we were able to establish, 99%+ of Delve’s clients went through either Accorp or Gradient over the past 6 months.
In the wake of Delve’s leak in December, Delve is reported to have switched to Glocert as their primary ISO 27001 auditing firm.
* Karun Kaushik and Selin Kocalar - The founders of Delve
The above individuals knowingly participated in Delve’s deliberate misconduct regarding audit practices.
Delve is a compliance company. They help businesses get certified for frameworks like SOC 2, ISO 27001, HIPAA, and GDPR. Companies need these certifications to prove they handle data in a secure way and to unlock deals with larger customers who require them.
Compliance has traditionally been a time-consuming process that involved lots of spreadsheets. It used to be manual, expensive and slow.
To give you an idea of what this is all about, we will primarily focus on SOC 2 in this article. SOC 2 is the most commonly pursued framework in the US. Practically all tech companies that sell to enterprises are expected to be ‘SOC 2 compliant’, which basically means they’ve had a SOC 2 audit performed in the last year.
Getting a clean SOC 2 report means hiring a CPA firm to review your security controls. If they successfully verify the security you claim to have, through a lot of evidence you provide them, they issue a report saying your security measures are sound. This report becomes proof you can show customers and investors.
SOC 2 and ISO 27001, its international counterpart, are voluntary frameworks. HIPAA and GDPR are not.
HIPAA applies to any company handling health records in the US. Penalties are severe, with willful neglect punishable by criminal charges and prison time.
GDPR covers any company processing data of EU residents, regardless of where the company is based. Fines run up to 4% of global annual revenue or 20 million euros, whichever is higher.
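As a rough illustration of the fine ceiling described above, the "whichever is higher" rule can be expressed as a one-line maximum. This is my own simplified sketch: the function name is hypothetical, and actual fines depend on the violation category and many other factors, not on revenue alone.

```python
def gdpr_max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound for a GDPR Article 83(5) administrative fine:
    4% of global annual turnover or EUR 20 million, whichever is higher."""
    return max(0.04 * global_annual_revenue_eur, 20_000_000)

# A company with EUR 1 billion in revenue faces a ceiling of EUR 40 million;
# a EUR 100 million company still faces the EUR 20 million floor.
print(gdpr_max_fine_eur(1_000_000_000))  # 40000000.0
print(gdpr_max_fine_eur(100_000_000))    # 20000000.0
```

Note that 4% of revenue only dominates once global turnover exceeds EUR 500 million; below that, the flat EUR 20 million ceiling applies.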
These frameworks carry the force of law because they protect information people cannot easily protect themselves: medical histories, genetic data, biometric identifiers, location patterns, and the full record of their digital lives.
The companies that help other companies become compliant through automation are called GRC automation platforms.
Delve was founded in 2023 by Karun Kaushik and Selin Kocalar, both Forbes 30 Under 30 members and MIT dropouts who met as freshmen. They started with a medical AI scribe, pivoted to compliance after hitting HIPAA headaches themselves, and went through Y Combinator in 2024.
In July 2025, Delve raised $32 million in Series A funding led by Insight Partners. Before that they had raised a $3.3 million seed round.
Delve’s pitch is speed through AI. They claim to get companies compliant in days rather than months, using what they call “agentic AI” through an “AI-native” platform.
Their marketing promises AI agents that automatically collect evidence, write reports, and monitor compliance gaps without human busywork.
The reality, as this article will show, is different.
SOC 2 audits operate under strict independence requirements designed to preserve trust in the attestation process. These rules exist precisely to prevent the kind of conduct this article exposes. While Delve offers compliance services for HIPAA, GDPR, and ISO 27001, this article focuses primarily on SOC 2. The rules surrounding those other frameworks would require their own detailed treatment; the goal here is to establish a clear pattern in Delve’s behavior, not to catalog every possible regulatory violation.
The fundamental principle is simple: the party implementing controls cannot be the party attesting to their effectiveness. AICPA’s Code of Professional Conduct states that members must “accept the obligation to act in a way that will serve the public interest, honor the public trust, and demonstrate a commitment to professionalism.” When accountants cannot be expected to make truthful representations, “we lose the ability to assess any public or private company’s actual performance.”
For SOC 2 specifically, AT-C Section 205 requires practitioners to maintain independence in both fact and appearance throughout the engagement. The practitioner must not assume management responsibilities or act as an advocate for the subject matter being examined.
Under AT-C Section 315, auditors must “seek to obtain reasonable assurance that the entity complied with the specified requirements, in all material respects, including designing the examination to detect both intentional and unintentional material noncompliance.” This requires independent design of test procedures, independent evaluation of evidence, and independent formation of conclusions.
The auditor’s report must represent their own professional judgment, not pre-written conclusions provided by the entity being audited or its platform vendor.
Delve’s model inverts this structure. By generating auditor conclusions, test procedures, and final reports before any independent review occurs, Delve places itself in the role of both implementer and examiner. This is not a technicality. It is a structural fraud that invalidates the entire attestation.
For readers who wish to verify these requirements:
This story has many threads, and I struggled with where to start. Do I lead with the leaked documents? The auditor shell game? The fake AI?
In the end, I felt that starting with my company’s experience, which illustrates many of the points I make and prove later on, was the most effective way to get the picture across.
I collaborated on this article with users from other Delve clients. We compared notes. The patterns I describe below showed up across all of our accounts. Unless mentioned otherwise, nothing here is cherry-picked. Later sections dissect specific mechanisms.
This section shows what it is like to be a Delve client.
Delve goes all out during the sales process. Through their marketing and sales outreach they continuously emphasize being the fastest to get companies compliant, thanks to their AI. They repeatedly tout the impressive companies they work with, how they partner with the best and most respected US-based audit firms, and that their work is accepted by Fortune 500 companies.
Within the first few minutes of the demo call we did with Delve, they had already mentioned how companies like Lovable, Bland, WisprFlow and hundreds of others are choosing Delve over competitors. The main point they kept driving home was that their revolutionary tech, supposedly way ahead of any other company’s, enabled compliance to be achieved with just 10 hours of work instead of the hundreds it used to take.
Their demo was short but did a good job showing how automated everything supposedly was. They clicked through everything pretty quickly, and stopped at every place that made the process look easy or automated. They showed an integration pulling information out of AWS, the “ready to go” default policies you could just adopt without modification, the reasonably short list of tasks, the AI questionnaire automation, the beautiful trust page you could publish and their ‘AI-copilot’ chatbot.
What they didn’t show, however, was that most integrations were fake and required manual screenshots, that tons of forms need to be manually filled out, that the trust page wasn’t accurate and they didn’t show any of the tasks that had pre-created fake evidence.
Their “best offer” for SOC 2 started at $15,000, which we were able to negotiate down to $13,000 on the call. They kept emphasizing how it was a great deal, that we were getting so much value that other companies paid more for. They remained pretty inflexible on pricing until we made clear we were considering a competitor. We must have been told at least four times that they couldn’t go any lower because they’d make a loss. There was a lot of pressure and posturing, but the price quickly dropped to just $6,000 when they realized we were serious about going elsewhere, and they would throw in ISO 27001 and a 200 hour penetration test as well.
They pushed us to sign within 24 hours or lose the deal, so we decided to just move forward and get it over with. In hindsight, there were many red flags, but we just wanted to get the job done quickly and move on. If Delve was good enough for companies like Lovable, they had to be doing something right. Right?
Once you’ve been invited to the Delve platform and log in for the first time, you’ll be greeted with an interface that reveals there are four categories for activities:
But even before you get started doing any real work, you can go into the trust tab and activate and publish the trust page for your company. You’d think it would probably be a very minimal trust page with everything failing at first, since you haven’t done any work, right?
Nope, you immediately get a fully populated trust page that would have you believe you’re running the most secure company on earth. Delve’s trust page presented our company as fully secure before we had completed any compliance work, enabling us to close deals based on misrepresented security claims.
Notably, the list hadn’t changed in any way after we finished compliance, and it still wasn’t truthful. My expectation was that the list reflected the security you’d get at the end of Delve’s process, but it took getting there to learn that that wasn’t true either.
It says we did vulnerability scanning and a pentest, when we only ever did the scan. It says we did data recovery simulations, which we never did. It says we remediated vulnerabilities, which we never did.
It is literally a made-up list of security measures of which more than half are not implemented, or even supported or addressed by Delve’s process and platform.
In short, this product is built to help companies fake security rather than communicate their real security.
When you do actually decide to get started and do the work, you’ll find that the platform expects you to perform a number of tasks across four categories:
Policies - These are all pre-created, and Delve recommends adopting them as they are.
Sadly, unless you spend a whole week manually revising them to be accurate, you will have inaccurate policies full of false promises. Every single one of Delve’s policies claims to have measures in place that Delve’s process and platform do not address. Even though we knew we’d technically be lying about our security to anyone we sent these policies to for review (clients, auditors, investors), we decided to adopt these policies because we simply didn’t have the bandwidth to rewrite them all manually.

Team - Background checks, security training and manual device security through screenshots for every employee.
This is a lot of manual work: it took each of our employees about 2 hours to get their individual tasks done, including 30 minutes to manually secure their laptop and screenshot its security configuration. It adds up quickly if you have a big company.
Also, we didn’t have disk encryption support on a few laptops, but that didn’t stop Delve from signing off on HIPAA:
30 minutes (or more) to manually write up a performance evaluation. Another 30 minutes to do all training (watching Delve’s YouTube videos) and quizzes.
Tech - You add vendors here. This is what Delve calls integrations, but most of these are just forms where you are asked to submit screenshots.

Company - Security procedures and company-wide tasks live here. This is just a list of forms that are all pre-filled with fake evidence. You can speedrun through this by just clicking accept on every one of them.
One thing that stands out as you go through all of Delve’s tasks is that there isn’t any AI to speed up any of the work. The only thing available is an ‘AI Co-pilot’ that can provide generic advice, and that doesn’t seem to have much context beyond what form you’re in. More than half the time, the AI co-pilot would tell me the evidence in the platform was not sufficient, and it would refer me to links from other GRC platforms.
As opposed to what is promised during the sales process, the tasks are not customized to the company. You are basically dumped into an interface and told by the customer success person that you can ask questions on Slack if you get stuck.
The reality is that there will be very few times you’ll ever need any help from Delve’s team, primarily because everything in their platform is pre-populated. You won’t have to ask how to do a risk assessment, because you get the same pre-generated risks that every other Delve customer gets. You won’t have to complain about having to do board meetings, after you were explicitly promised during the sales call that this is something all other providers but Delve ask for, because you get pre-created fake board meeting notes that you can adopt as is.
Seriously, becoming compliant with Delve is nothing more than clicking through a bunch of pre-populated forms and accepting everything. Unless you want to do compliance the proper way, in which case Dropbox is as good a tool as Delve, since you then need to manually collect and write everything.
Ok, you sit down to get cracking on compliance. What do you do next?
You accept the default policies that are inaccurate out of the box. Like this policy that claims to have an MDM in place when the Delve process consists of making a manual screenshot of your Mac firewall settings:
You accept the pre-created contents for security simulation by accepting the three “security incidents”:
You do the risk assessment by adopting the ten default risks:
One of our obvious concerns was that this approach would never pass an audit, but we were explicitly told Delve never failed a single audit in the past, and that auditors have never flagged a single issue with their process. They tried to put our minds at ease by telling us about all the amazing Delve clients that sold to Fortune 500 companies using the exact same process.
Delve continuously reminding us that they serve clients like Lovable, Bland, WisprFlow and many others ended up wearing us down, so we just took their word for it and moved on.
When the time comes to actually hook up your stack to Delve, so that Delve can do that ‘continuous monitoring’ thing, you’ll find that the vast majority of their integrations don’t integrate with anything at all. They are just containers for screenshots you’ll have to go out and manually collect.
Imagine my surprise when I learned that AI-native compliance would mean I’d have to spend many hours manually collecting screenshots and filling out forms. I truly feel like a mindless agent in what Delve calls “the agentic experience”.
Here you can see how the Linear ‘integration’ consists of Manual Tests and Forms:
On the employee tab, you manually do background checks through Certn, fill out more forms, watch useless YouTube videos, and manually screenshot laptop security. For 100 employees, all 100 of them have to manually secure laptops once and upload screenshots.
You also do manual performance reviews for every employee with no way to pull data from other solutions. Lots of typing if you have 20+ employees.
...
Read the original on substack.com »
Zen gives you access to a handpicked set of AI models that OpenCode has tested and benchmarked specifically for coding agents. No need to worry about inconsistent performance and quality across providers, use validated models that work.
...
Read the original on opencode.ai »
I want to speak to you directly, as an engineer who has spent his career building technology that people depend on every day. Windows touches more people’s lives than almost any technology on Earth. Every day, we hear from the community about how you experience Windows. And over the past several months, the team and I have spent a great deal of time analyzing your feedback. What came through was the voice of people who care deeply about Windows and want it to be better.
Today, I’m sharing what we are doing in response. Here are some of the initial changes we will preview in builds with Windows Insiders this month and throughout April.
More taskbar customization, including vertical and top positions: Repositioning the taskbar is one of the top asks we’ve heard from you. We are introducing the ability to reposition it to the top or sides of your screen, making it easier to personalize your workspace.
Integrating AI where it’s most meaningful, with craft and focus: You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well‑crafted. As part of this, we are reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets and Notepad.
Reducing disruption from Windows Updates: Receiving updates should be predictable and easy to plan around, so we’re giving you more control. This includes the ability to skip updates during device setup to get to the desktop faster, restart or shut down without installing updates and pause updates for longer when needed, all while reducing update noise with fewer automatic restarts and notifications.
Faster and more dependable File Explorer: File Explorer is one of the most used surfaces in Windows. Our first round of improvements will focus on a quicker launch experience, reduced flicker, smoother navigation and more reliable performance for everyday file tasks.
More control over widgets and feed experiences: Widgets should feel helpful and relevant, not distracting or overwhelming. We’re introducing quieter defaults, more control over when and how widgets appear, and improved personalization for the Discover feed.
A simpler, more transparent Windows Insider Program: The Windows Insider Program is how you help shape the future of Windows, and it should be easy to understand what to expect and how to participate. We are implementing changes to make it easier for you to navigate with clearer channel definitions, easier access to new features, higher quality builds, better visibility into how your feedback shapes Windows, and more opportunities to engage directly with us.
Improved Feedback Hub, available starting today: Your feedback is essential to improving Windows, and it should be easy to share and see what others are saying. Today, we’re rolling out the largest update to Feedback Hub yet to our Insiders, with a redesigned experience that makes it faster and easier to submit feedback and engage with the community.
Building on these changes, what follows below is our broader plan and areas of focus for the year to raise the bar on Windows 11 quality. The work is underway. You can expect to see tangible progress that you’ll be able to feel as you preview builds from us throughout the rest of the year.
Last night I had the chance to sit down with a small group of Windows Insiders here in Seattle to listen, to answer questions, and to share more about where we’re headed. The Seattle meetup was the first of several stops our team will be making to engage in person, in more cities around the world, to connect with the Windows community.
Thank you for holding us to a high standard. Windows is as much yours as it is ours. We’re committed to strengthening its foundation and delivering innovation where it matters, for you.
Please keep the feedback coming, to help us shape the future of Windows together.
What follows is our plan to raise the bar on Windows 11 quality this year, with a focus on performance, reliability and well-crafted experiences. These areas have meaningful impact on how you experience Windows: how fast it starts and responds, how stable it is under real workloads, and how consistent and thoughtful the experience feels.
We are focusing on making Windows 11 more responsive and consistent, so performance feels smooth and reliable.
Over the course of the year, we’re improving system performance, app responsiveness, File Explorer, and the Windows Subsystem for Linux, helping Windows stay fast as you move between apps and workloads.
Improving system performance: Reducing resource usage by Windows to free up more performance for what you’re doing.
Faster and more responsive Windows experiences, with early improvements already delivering launch time reductions in apps like File Explorer
Improved memory efficiency, lowering the baseline memory footprint for Windows, freeing up more capacity for the apps you run
More consistent performance, even under load, so apps stay responsive throughout the day
More fluid and responsive app interactions: Reducing interaction latency by moving core Windows experiences to the WinUI 3 framework.
Improving the shared UI infrastructure that Windows experiences rely on, reducing interaction latency and overhead at the platform level
Faster responsiveness in core Windows experiences like the Start menu, by moving more experiences to WinUI 3
Improving File Explorer fundamentals: Reducing latency and improving reliability across search, navigation and file operations.
Copying and moving large files will be faster and more reliable
Elevating the Windows Subsystem for Linux (WSL) experience: Improving performance, reliability and integration for developers using Linux tools and environments on Windows.
Better enterprise management with stronger policy control, security and governance
Reliability is the bedrock of trust. You should trust that your PC is going to be there and function when you need it most.
Across the operating system, we will focus on improving the baseline reliability of areas such as the Windows Insider Program, drivers and apps, updates and Windows Hello.
Strengthening reliability and quality of the Windows Insider Program: Making it clearer what to expect from each Insider channel, raising the quality bar for builds and strengthening feedback signals to improve build quality before broad release.
Clearer visibility into what features are included in each Insider build, so you know what to expect
More control over which new features you try, with easier switching between Insider channels to match your desired level of stability or early access
Higher quality builds entering each channel, with more rigorous validation and feedback signals before release
Stronger feedback loops across Windows so issues are identified, prioritized and addressed faster
Increasing OS, driver and app reliability: Delivering a smoother, more dependable Windows 11 experience by strengthening system stability, driver quality and app reliability across our vibrant ecosystem of silicon, ISV and OEM partners. Our priorities include:
Strengthening the Windows foundation by reducing OS level crashes, improving driver quality and app stability across our ecosystem so PCs run smoothly and reliably every day
Creating easier, faster and more stable connections with Bluetooth accessories, fewer USB-related crashes and connection losses, and improved printer discoverability and connections
More reliable camera and audio connections to increase your productivity at work and play
More consistent device wake (including further wake consistency improvements for docking scenarios) so you can get back to your work faster
Improving the Windows Update experience: Faster, more predictable updates with clearer control over restarts and timing.
Less disruption from Windows Update, moving devices to a single monthly reboot, while organizations and users who wish to get new features and fixes faster remain able to do so
More direct control over updates, including the ability to pause updates for as long as you need and restart or shut down without being forced to install them
Faster, more reliable update experiences, with clearer progress during updates and built‑in recovery to help keep devices stable if something goes wrong
Improving Windows Hello biometric authentication: We’re strengthening Windows Hello sign‑in so it feels reliable, effortless and secure, reducing friction while increasing confidence that your device recognizes you correctly.
More reliable facial recognition, so you can trust sign‑in to work when you need it
Faster and more dependable fingerprint sign‑in, with fewer retries
Easier secure sign‑in on gaming handhelds like the ROG Xbox Ally X, with full gamepad support for creating a PIN during setup and in Settings.
To us, craft is the discipline that turns functional products into products people love, through usability, polish, coherence and refinement.
This year, you will see us invest in raising the bar on the overall usability of the experience, with more opportunities for personalization, less noise, less distraction and more control across the OS. That includes being thoughtful about how and where we bring AI into Windows, leading with transparency, choice and control, so that new capabilities enhance the experience rather than complicate it.
Improving the Start and Taskbar experience: Making these core Windows surfaces more reliable, flexible and personalized so you can navigate your PC in the way that works best for you.
Start and Taskbar deliver even more consistent, dependable access to apps and files, so moving between your content feels fluid throughout the day
Expanded taskbar personalization options, including alternate taskbar positions and a smaller taskbar, giving you greater control over how this core surface fits your workflow
A more relevant Recommended section in Start will surface apps and content you care about most, with clear controls to customize the experience or turn it off
More focused user experience with fewer distractions: Making the Windows experience quieter, to help you stay focused, minimize distractions and stay in your flow.
Device setup on new Windows PCs is quieter and more streamlined, with fewer pages and reboots so getting started is simpler
Widgets surface information more intentionally by default, keeping content glanceable and reducing unnecessary interruptions
Simpler settings make it easier to personalize, opt into or turn off Widgets and feed content based on your preferences
Reduced notifications so you can stay focused throughout the day
Enhancing Search: Delivering faster, more accurate results with a consistent search experience across Windows surfaces.
Find what matters faster, with search that surfaces apps, files and settings clearly so you can get to the right result quickly
Clearer and more trustworthy results, with results from content on your device easy to understand and clearly distinct from web results
A more consistent search experience across the Taskbar, Start, File Explorer and Settings
As part of this effort, we are evolving how Windows is built behind the scenes to raise the quality bar and deliver innovation where it matters most, shaped by the feedback we are hearing from you.
This includes deeper validation and broader testing across real-world hardware and usage scenarios before new experiences reach Windows Insiders, and a more intentional approach to where and how new capabilities are introduced. The result will be higher quality builds, more meaningful innovation and greater flexibility in choosing what you want to try. This is how we will continue to build and ship Windows 11, so we can deliver better experiences with greater confidence, month after month.
In line with Microsoft’s Secure Future Initiative, we will continue to make Windows more secure with every release, building in new capabilities and strengthening security by default to help protect users, devices and data.
As we improve and innovate, we look forward to your continued feedback on where we can keep making Windows better.
...
Read the original on blogs.windows.com »
In an odd approach to trying to improve customer tech support, HP allegedly implemented mandatory, 15-minute wait times for people calling the vendor for help with their computers and printers in certain geographies.
Callers from the United Kingdom, France, Germany, Ireland, and Italy were met with the forced holding periods, The Register reported on Thursday. The publication cited internal communications it saw from February 18 that reportedly said the wait times aimed to “influence customers to increase their adoption of digital self-solve, as a faster way to address their support question. This involves inserting a message of high call volumes, to expect a delay in connecting to an agent and offering digital self-solve solutions as an alternative.”
Even if HP’s telephone support center wasn’t busy, callers would reportedly hear:
We are experiencing longer waiting times and we apologize for the inconvenience. The next available representative will be with you in about 15 minutes.
To quickly resolve your issue, please visit our website support.hp.com to check out other support options or find helpful articles and assistant to get a guided help by visiting virtualagent.hpcloud.hp.com.
Callers were then told to “please stay on the line” if they wanted to speak to a representative. The phone system was also set to remind customers of their other support options and to apologize for the long (HP-induced) wait times upon the fifth, 10th, and 13th minute of the call.
The mandatory support call times have been lifted, per a company statement shared by HP spokesperson Katie Derkits:
We’re always looking for ways to improve our customer service experience. This support offering was intended to provide more digital options with the goal of reducing time to resolve inquiries. We have found that many of our customers were not aware of the digital support options we provide. Based on initial feedback, we know the importance of speaking to live customer service agents in a timely fashion is paramount. As a result, we will continue to prioritize timely access to live phone support to ensure we are delivering an exceptional customer experience.
HP didn’t immediately clarify when it removed the wait times. Some HP workers were reportedly unhappy with the mandatory hold times, with an anonymous “insider” in HP’s European operations reportedly telling The Register, per its Thursday report: “Many within HP are pretty unhappy [about] the measures being taken and the fact those making decisions don’t have to deal with the customers who their decisions impact.”
...
Read the original on arstechnica.com »
[Note that this article is a transcript of the video embedded above.]
On the northern edge of Los Angeles, fresh water spills down two stark concrete chutes perched on the foothills of the San Gabriel Mountains, a place simply called The Cascades. It’s a deceptively simple-looking finish line: the end of a roughly 300-mile (or 500 km) journey from the eastern slopes of the Sierra Nevada into the city.
On November 5, 1913, tens of thousands of people climbed these hills to watch the first water arrive. When the gates finally opened, water trickled through, but that trickle quickly became a torrent. The project’s chief engineer, William Mulholland, leaned over to the mayor and shouted the line that’s been repeated ever since: “There it is, Mr. Mayor. Take it!”
That moment was profound for a lot of reasons, depending on where you live and how you feel about water rights. LA didn’t become LA by living within the limits of its local resources. Its meteoric growth into the metropolis we know was enabled by an early and extraordinary decision to reach far beyond its own watershed and pull a whole new river into town. Today, roughly a third of LA’s water comes from the Eastern Sierra through the Los Angeles Aqueduct system. That share swings with snowpack, drought, and environmental constraints, but this one piece of infrastructure helped turn a water-limited town into a world city. It’s one of the most impressive and controversial engineering projects in American history.
But to really appreciate that water in the cascades, you have to look way upstream and see what it took to get it there. It’s gravity, geology, politics, and human ambition all in a part of the state that most people never see. Let’s take a little tour so you can see what I mean. I’m Grady and this is Practical Engineering.
When most people think about aqueducts, this is what they picture: a bridge carrying water over a valley or river. And, just to be clear, these are aqueducts. But engineers often use the term more broadly to describe any type of conveyance system that carries water over a long distance from a source to a distribution point. Could be a canal, a pipe, a tunnel, or even just a ditch. In the case of the LA aqueduct, it’s all of them, plus a lot of supporting infrastructure as well.
From the center of the city, it’s about a four hour drive to the Owens River Diversion Weir. It’s not accessible to the public, but it is the official start of the LA Aqueduct, at least when it was originally built. Here, all the snowmelt and rain from a huge drainage system between the Sierra Nevada and Inyo Mountains funnel down into the Owens River, where a large concrete diversion weir peels nearly all of it out of its natural course and into a canal. This point is roughly 2,500 feet (or 750 meters) higher in elevation than the bottom of the Cascades at the downstream end, which makes it obvious why LA chose it as a source. The entire aqueduct is a gravity machine. There are no pumps pushing the water toward the city. Half a mile of elevation change feels like a lot until you realize you have to spread it out over 300 miles. It’s all achieved through careful grading and managing elevations along the way to keep the flow moving.
That care is particularly important in this upper section of the aqueduct, where the water flows in an open canal. To do this efficiently, you need a relatively constant slope from start to finish. That’s a tough thing to achieve on the surface of a bumpy earth. Following a river valley makes this easier, but you can see the twists and turns necessary to keep the aqueduct on its gentle slope toward LA.
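The grading arithmetic works out to a remarkably gentle slope. A minimal back-of-the-envelope sketch in Python, using the round figures from above (a roughly 2,500-foot drop spread over roughly 300 miles; real slopes vary reach by reach, so this is only an average):

```python
# Average grade of the LA Aqueduct's gravity run, using the
# approximate figures quoted in the text.
FEET_PER_MILE = 5280

drop_ft = 2500    # elevation difference, diversion weir to the Cascades
length_mi = 300   # approximate route length

grade = drop_ft / (length_mi * FEET_PER_MILE)  # dimensionless slope
drop_per_mile = drop_ft / length_mi            # feet of fall per mile

print(f"average grade: {grade:.4%}")           # roughly 0.16 percent
print(f"fall per mile: {drop_per_mile:.1f} ft")  # roughly 8.3 ft per mile
```

About eight feet of fall per mile is what the surveyors had to parcel out across canals, conduits, and siphons to keep the water moving without pumps.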
If it seems kind of wild that a city would buy up the land and water rights from somewhere so far away, it did to a lot of the people who lived in the Owens Valley, too. A lot of the acquisitions and politics of the original LA Aqueduct were carried out in bad faith, souring relationships with landowners, ranchers, farmers, and communities in the area. The saga is full of broken promises and shady dealings. Then when the diversion started, the area dried up, disrupting the ecology of the region, making agriculture more difficult and residents even more resentful. Many resorted to violence, not against people but against the infrastructure. They vandalized parts of the aqueduct, a conflict that later became known as the California Water Wars. In one case in 1924, ranchers used dynamite to blow up a part of the canal. Later that year, they seized the Alabama Gates.
About 20 miles or 35 kilometers downstream from the diversion weir, a set of gates sits on the eastern bank of the aqueduct canal. Because it runs beside the river valley, the aqueduct captures some of the water that flows down from the surrounding mountains in addition to what’s diverted out of the Owens River, particularly during strong storms. That means it’s actually possible for the canal to overfill. The Alabama Gates serve as a spillway, allowing operators to divert water back down to the river. This also helps drain the canal for maintenance or repairs when needed.
Those Owens Valley ranchers understood exactly what the Alabama Gates controlled. Open them, and the water would run back where it had always run, down the Owens River, instead of south to Los Angeles. The resistance simmered and flared for years, but it didn’t end in the dramatic showdown at the aqueduct. Instead, it ended at a bank counter. The Inyo County Bank was run by two brothers who were also key organizers and financiers of the resistance campaign. In August 1927, an audit revealed major shortfalls and ongoing embezzlement, and the bank quickly collapsed. Residents across the valley saw their savings wiped out or frozen overnight, shattering what was left of the community’s ability to keep fighting.
The Alabama Gates weren’t just a political flashpoint though. They also marked an important dividing line in the aqueduct’s design. LA knew that even if the ranchers didn’t release the water to the river in protest, a lot of it would end up there anyway through seepage. As the canal climbed away from the valley floor and crossed more porous soil, it would naturally lose its water through the ground. So, at the Alabama Gates, the aqueduct transitions from an unlined canal to a concrete-lined channel. It’s still open to the air, so there’s no protection against evaporation or contamination, but the losses to the ground are a lot less.
This design continues for about 35 miles (or 55 kilometers) through the valley. Along the way, the aqueduct passes the remains of Owens Lake. Once a large body of water, it quickly dried up with the diversion of the Owens River. Of course, there were impacts to wildlife from the loss of water, but the bigger problem came later: dust. All the fine sediment that settled on the lakebed over thousands of years was now exposed to the hot desert sun. When the wind picked up, it filled the air with fine particulates that are dangerous to breathe. Over the years, there have been times when Owens Lake is the single largest source of dust pollution in the entire country, and LA has spent more than a billion dollars just trying to fix this problem alone. The aqueduct passing along the hillside past the lake and its challenges is a reminder that the true cost of water is often a lot more than the infrastructure it takes to deliver it.
So far, it might be obvious that this aqueduct system is pretty fragile to be making up a major part of a city’s fresh water supply. Even beyond the vandalism and political resistance, there are a lot of things that could go wrong along the way, from bank collapses and earthquakes to diversion failures and more. That’s why Haiwee Reservoir was originally built in a narrow saddle between two hills as a kind of buffer. With a dam on either side, it stored up water so the aqueduct could keep running even during a disruption upstream. It also slowed the water down, exposing it to the hot desert sun as a natural form of UV disinfection. In the 1960s, the reservoir was reconfigured into two basins to add some flexibility. That’s because, around that time, the LA aqueduct became two. While the open-topped canal section was large enough to meet demands, the underground conduit in the next section wasn’t. So, LA built a second one in 1970 to increase the flow. If you look at this map of the Haiwee Reservoirs, you can see that water has two paths: it can flow into the second aqueduct here from the north basin, or it can pass through the Merritt Cut to the south reservoir, through the intake there, and into the first aqueduct. This setup allows for some redundancy, along with regulation and balancing of the flows between the two aqueducts. Haiwee marks the start of the long desert run, with both systems no longer in open-topped lined canals, but running underground in concrete conduits.
There are a lot of advantages to running an aqueduct in a closed conduit underground, especially one this long through a desert landscape. There’s far less evaporation and less potential for contamination. It doesn’t divide the landscape at the surface level, so there’s no need for bridges, culverts, and wildlife crossings. Going underground also offers more flexibility when it comes to topography. You don’t have to follow the contours of the surface so carefully because if you come to a hill, you can just dig a little deeper to keep the constant slope.
Of course, those benefits come with a cost. An underground conduit is more expensive than a simple channel on the surface, and not all the problems with topography are solved. This is Jawbone Canyon, one of the biggest drops for the first aqueduct. Rather than taking a major detour around it, the aqueduct descends 850 feet (or 250 meters) and then ascends back up. This type of structure is often called an inverted siphon. I’ve done a video on how these work for sewer systems, and I’ve also done a video on flood tunnels that work in a similar way, if you want to learn more after this.
Unlike the concrete conduit, which really just acts like an underground canal with a roof, this is one of the places where the water in the aqueduct is pressurized. A water column of 850 feet works out to about 370 psi, 26 bar, or two-and-a-half megapascals. It’s a lot of pressure. These sections of pipe had to be specially manufactured on the East Coast, where the major steel facilities were, and transported by ship because of their size. They travelled all the way around Cape Horn, since the Panama Canal was still under construction. There are actually quite a few of these siphons crossing canyons in this section of the aqueduct, but Jawbone Canyon is the biggest one.
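That pressure figure is just hydrostatics, and it checks out. A quick sketch, assuming fresh water at 1,000 kg/m³ and the 850-foot column from the text (the figures quoted in the narration are rounded):

```python
# Hydrostatic pressure at the bottom of the Jawbone Canyon siphon:
# p = rho * g * h, assuming fresh water.
RHO = 1000.0          # kg/m^3, fresh water density
G = 9.81              # m/s^2, gravitational acceleration
FT_TO_M = 0.3048
PA_PER_PSI = 6894.76
PA_PER_BAR = 1e5

h_m = 850 * FT_TO_M   # about 259 m of water column
p_pa = RHO * G * h_m  # pressure in pascals

print(f"{p_pa / 1e6:.2f} MPa")         # about 2.5 MPa
print(f"{p_pa / PA_PER_BAR:.1f} bar")  # about 25 bar
print(f"{p_pa / PA_PER_PSI:.0f} psi")  # about 370 psi
```

That is several times the pressure in a typical fire hose, which is why these siphon sections needed specially fabricated steel pipe rather than ordinary concrete conduit.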
A little further downstream, the LA aqueduct crosses the California Aqueduct, part of the State Water Project. That system has a connection to LA as well, but this branch at the crossing actually heads to Silverwood Lake. However, there is a transfer facility, recently completed, that can pump water out of the California Aqueduct directly into the first LA aqueduct. This creates opportunities for LA to buy water that moves through the state system and offers some flexibility in where that water ends up. There’s also a turn-in that can move water from the LA aqueduct into the California aqueduct for situations where trades make sense. The second LA aqueduct passes underneath the state canal here. And this is a good example of the differences between the first project (built in the 1910s) and the second one, built in the 1960s. Over that time, the price of labor went up a lot more than the price of materials. Where the first one carefully followed the existing topography with bends and turns to minimize the need for expensive pressurized pipe, the second one could take a more direct path, reducing labor in return for the more specialized conduit materials.
After wandering more than a hundred miles (or 160 kilometers) apart, the two Los Angeles Aqueducts come back together at Fairmont Reservoir, in the northern foothills of the Sierra Pelona Mountains. This is the last major topographic barrier on the way to Los Angeles. There was no way to go up and over without pumps, so instead they went straight through. The largest project was the Elizabeth tunnel.
Here, the two aqueducts come together again into a single watercourse. About 5 miles (or 8 kilometers) of excavation through everything from hard rock to loose, wet ground became one of the most difficult parts of the entire project. The tunnel required continuous temporary supports along most of its length, followed by a permanent concrete lining. It was a monumental effort for its time, and essential for more than just crossing the range: the Elizabeth Tunnel also delivers that water under pressure to the San Francisquito Power Plant Number 1.
This is the largest of the eight hydroelectric plants that run along the aqueduct, capturing some of the energy from the water as it flows downward toward LA. These plants are a major part of how the project paid for itself, and they continue to serve as an important source of electricity in the region today.
Continuing downstream, Bouquet Canyon reservoir adds another layer of operational flexibility. It helps regulate flow through the power plants and provides additional storage, a sort of insurance policy since this whole reach depends on a single major tunnel crossing the San Andreas Fault. In case of a major earthquake, it’d be best if Angelenos could avoid a simultaneous water shortage.
The aqueduct splits again just upstream of the San Francisquito Plant Number 2, which was famously destroyed by the St. Francis Dam failure. That reservoir project was designed to supplement the storage capacity along the aqueduct, but the dam failed catastrophically in 1928, just 2 years after it was completed, killing more than 400 people and destroying several parts of the aqueduct as well. The tragedy was one of the worst engineering disasters in American history. It put another stain on the aqueduct project, and it effectively ruined the reputation of William Mulholland, who was largely considered a hero in LA for all his work on the aqueduct and the city’s water system. The dam was never rebuilt, but workers restored the aqueduct to functioning service in only 12 days.
At Drinkwater Reservoir, the two aqueducts run roughly parallel through the Santa Clarita area, sometimes aboveground and sometimes below, before finally reaching the terminal structures that carry water into LA. Usually, the water stays in the conduits, which feed the two hydropower plants at the foot of the mountains. If the plants are out of service or there’s more flow than they can handle, you see excess water thundering through the cascade structures instead.
From here, the aqueduct drops out of the mountains and into the north end of the San Fernando Valley, where the water is treated and prepared for distribution. After filtration and disinfection, it’s stored in the Los Angeles Reservoir, the system’s terminal reservoir, so the city can smooth out day-to-day swings in demand even while the aqueduct’s inflow stays relatively steady.
For most of Los Angeles’ history, that “finished water storage” was out in the open air. But in the 2000s, drinking-water rules pushed utilities to add stronger protection for treated water held in uncovered reservoirs. There’s a good chance you’ve seen their solution on the Veritasium channel or elsewhere: 96 million plastic shade balls that act like a floating cover, blocking sunlight to prevent water-chemistry problems and helping keep wildlife out. They’re the final protection for this water that traveled so long to reach the city. While the LA Reservoir is, in a sense, the end of the journey for this water, the original diversion way back at the Owens River isn’t even technically the start anymore!
In 1940, LA extended the aqueduct system upstream northward by connecting the Mono basin and funneling its water through tunnels to the Owens River basin. Like Owens Lake downstream, Mono Lake began drying out as well. And also like Owens Lake, lawsuits, court orders, and environmental regulations have tempered the value of this water source, forcing LA to significantly reduce diversions and implement costly restoration projects.
That’s kind of the story of the LA aqueduct in a nutshell. The project seemed obvious from an engineering perspective. There was lots of snowmelt in the mountains; the city had the technical prowess, the funding, the elevation, and the political power to reach out and take it. The result was one of the most impressive works of infrastructure of the early 20th century. And continued efforts to expand and improve the system have made it even more efficient, flexible, and valuable to the many millions of people who live in one of the most populous cities in America, delivering not only water but also hundreds of megawatts of hydropower.
But in many ways, it was not only unscrupulous, but also short-sighted. Residents of the Owens Valley watched ranchland and farmland dry up as the water that had shaped their home was rerouted south. Native communities saw their homeland transformed with access to gathering areas disrupted, places made unrecognizable, and cultural ties strained by changes they didn’t choose. Wind picked up alkaline dust from dried lakebeds. Habitats were disrupted, and the birds that depended on these waters and wetlands lost part of what made this migration corridor work. It’s easy to see why the aqueduct remains controversial, and why what we sometimes dismiss as “red tape” around major infrastructure is often completely justified due diligence. As engineers, and really, as humans, we have to try and account for costs that don’t show up on a balance sheet, but can come back later as decades of lawsuits, mitigation, and restoration.
And even the aqueduct’s original thesis (that there’s reliable snowmelt up there, and a growing city down here) is starting to falter. In recent decades, the mountains have delivered less predictable runoff: more swings, more years when the timing is wrong, and more uncertainty about what “normal” even means anymore. California’s climate has always moved in long cycles, but the margin for error is thinner now, and no one can say with much confidence when or if the moisture the state depends on will return to its old pattern.
The hopeful part is that this is exactly where engineering makes a difference: at the messy intersection of geology, climate, culture, politics, and human need. The Los Angeles Aqueduct is a case study in what we can build when we’re ambitious, but also what happens when we treat a landscape like a machine with only one output. The next era of water engineers can learn a lot from it.
...
Read the original on practical.engineering »
The Free Software Foundation (FSF), like many others, received a notice regarding settlement in the copyright infringement lawsuit Bartz v. Anthropic. It is a class action lawsuit claiming that Anthropic infringed copyright by downloading works in Library Genesis and Pirate Library Mirror datasets for purposes of training large language models (LLMs). According to the notice, the district court ruled that using the books to train LLMs was fair use but left for trial the question of whether downloading them for this purpose was legal. Apparently, the parties agreed to settle instead of waiting for the trial and they are now reaching out to potential copyright holders to offer money in lieu of potential damages.
The FSF holds copyrights to many programs in the GNU Project, as well as to several books. We publish all works that we hold copyrights to under free (as in freedom) licenses. Among the works we hold copyrights over is Sam Williams and Richard Stallman’s Free as in freedom: Richard Stallman’s crusade for free software, which was found in datasets used by Anthropic as training inputs for their LLMs. It was published by O’Reilly and by the FSF under the GNU Free Documentation License (GNU FDL). This is a free license allowing use of the work for any purpose without payment.
Obviously, the right thing to do is protect computing freedom: share complete training inputs with every user of the LLM, together with the complete model, training configuration settings, and the accompanying software source code. Therefore, we urge Anthropic and other LLM developers that train models using huge datasets downloaded from the Internet to provide these LLMs to their users in freedom. We are a small organization with limited resources and we have to pick our battles, but if the FSF were to participate in a lawsuit such as Bartz v. Anthropic and find our copyright and license violated, we would certainly request user freedom as compensation.
...
Read the original on www.fsf.org »
Gov. Tina Kotek visited Estacada High School to hear how her cell phone ban has been going. (Staff photo: Christopher Keizur)
Gov. Tina Kotek said she chose Estacada High for her visit because of the positive things happening within the district. (Staff photo: Christopher Keizur)
There was plenty of uncertainty and debate about the effectiveness of a cell phone ban decreed by executive order last summer.
But at least in Estacada, the policy has earned two thumbs up, including approval from a “grumpy old teacher.”
Jeff Mellema is a language arts teacher at Estacada High School. He has worked in the building for 24 years, and he said the new policy that prohibits students from using their phones during the day has been a breath of fresh air.
“There is so much better discourse in my classroom, be it personal or academic,” Mellema said. “Students can’t avoid those conversations anymore with their phones.”
“This ban has brought joy back to this old, grumpy teacher,” he added with a smile.
That is the kind of feedback Gov. Tina Kotek was hoping for as she visited Estacada High School on Wednesday afternoon, March 18. Her goal was to visit classrooms, speak with administrators, and meet with students one-on-one to hear about the effectiveness of her phone policy.
“I knew when I put out the order, not everyone would love it from day one,” Gov. Kotek said. “I appreciate all the feedback today.”
“You have an amazing school. Go Rangers,” she added with a smile.
For years, educators had reported that cell phones were disruptive in classrooms and hindered effective teaching. Research supported those anecdotal claims, showing that phones undermine students’ ability to focus, even when they are just sitting on the desk, unused.
So the governor issued her executive order prohibiting cell phone use by students during the school day in Oregon’s K-12 public schools. To help implement the ban, her office worked with the Oregon Department of Education to share model policies for schools that already have prohibitions in place, as well as guidance on implementation flexibility.
“We are grateful to have Governor Kotek here today to see with her own eyes the positive effect this has had,” said Estacada Superintendent Ryan Carpenter. “We can now demand this expectation for our students’ well-being and success.”
Since that order was issued, every public school district in Oregon has come into compliance with the ban.
“The goal is for every student to have the best opportunity to be successful,” Kotek said. “They need to know how to talk to people, learn, and go out into the world.”
The governor visited two classrooms during her trip to Estacada High — Mr. Schaenman’s history class and Mrs. Hannet’s algebra class. Gov. Kotek assured the kids that she still uses algebra in her day-to-day life, no matter how unlikely that may seem to the youngsters.
In the classrooms, she was able to take a straw poll around the cell phone ban and then get specific, direct feedback from the kids.
Overall, it was positive. The Rangers said they noticed changes in how they interact with teachers and peers. They don’t feel that “siren’s song” tug of their phones as often, and the changes are bleeding into everyday life as well — think fewer reminders to put phones away during family dinners. Phones had also fueled issues around bullying and online toxicity during the school day.
There are some hiccups. The students spoke about difficulties in tracking busy schedules. Many athletes relied on their phones for practice times and locations. Some advanced placement kids said the overzealous programs monitoring school laptops blocked access to needed resources for studying/researching schoolwork. There is even a strange quirk with school-provided tech that prevents them from accessing their calculators.
“Maybe the filters are too strong right now,” Gov. Kotek said. “That is why we are working with the districts to best implement the policy.”
The kids also weighed in on the debate around the extent of the ban. The two options bandied about in Salem were a “bell-to-bell” policy or a classroom-only one. The latter would allow kids to use their phones during passing periods and lunch. Several advocated for that change.
That mirrored the debate within the Oregon Legislature. It ultimately ended in a stalemate, prompting Gov. Kotek’s executive order.
“When you make a decision like this, you don’t know how it will ultimately work,” Kotek told the students. “I appreciate you adapting to the situation and making it work for you.”
While things could change in the future, the governor is pleased with the early results. The phone ban is here to stay.
Estacada School District is reveling in its status as the public school “Golden Child.”
The visit from the governor is the latest feather in the cap of the rural, small district. In 2025, Estacada High had a 92.5% graduation rate. That is a stunning turnaround from a record-low of 38.5% in 2015. The district credits policies aimed at retaining talented teachers and empowering students to take a more active role in their learning.
“We are proud of these results,” Carpenter said. “This is a reflection and reward for a ton of hard work. This district literally changed its stars to be seen as a true academic powerhouse in Oregon.”
That mindset continues with the cell phone ban. Like many others, Estacada had a version of the policy in place before the official edict. But the governor’s push empowered administrators and teachers to fully embrace the ban.
“Any policy is only as good as the teachers who enforce it,” Carpenter said.
In crafting its policy, Estacada incorporated feedback from parents. That led to some key decisions around the cell phone ban. Rather than use pouches or lockers, students are allowed to keep their phones safely stored in their backpacks. That was for two reasons — it allows students to contact loved ones during emergencies, and many parents use phone trackers to keep tabs on their kids.
The district has also leaned on direct, immediate communication. The flow of information reaches parents directly, avoiding some of the miscommunication that occurred in the past.
“Even I’m surprised by the impact this has had,” Kotek said. “I’m thankful for the educators who took up the charge when I said we’ve got to do this.”
“We can model what Estacada is doing for other districts across the state,” she added.
...
Read the original on portlandtribune.com »
Part 1 of 3 in the Java Performance Optimization series. Parts 2 and 3 coming soon.
I built a Java order-processing app for a talk I gave at DevNexus a couple of weeks ago. The app worked. Tests passed. I ran a load test and collected a Java Flight Recording (JFR).
Before any changes: 1,198ms elapsed time, 85,000 orders per second, peak heap sitting at just over 1GB, 19 GC pauses.
After: 239ms. 419,000 orders per second. 139MB heap. 4 GC pauses.
Same app. Same tests. Same JDK. No architectural changes. And those numbers get a lot more meaningful when you consider that code like this doesn’t run on a single box in production. It runs across a fleet.
In Part 2 I’ll walk through the profiling data behind those numbers: the flame graph, which methods were actually hot, and what changed when we fixed them. Before we get there, you need to know what kinds of things we were actually fixing.
The problems were patterns that show up in real codebases. They compile fine, they sneak through code review, and they’re the kind of thing you could miss without profiling data telling you where to look. Here are eight of them.
TL;DR: Fixing anti-patterns like these turned a Java app that took 1,198ms into one that took 239ms. Here are the eight to look for and fix:
String concatenation with + inside loops (O(n²) copying)
Streams nested inside loops (redundant passes over the same data)
String.format() on hot paths (the slowest string-building option)
Boxed primitives as loop accumulators (avoidable heap churn)
Exceptions as control flow (fillInStackTrace() on every throw)
Too-broad synchronization (one lock becomes the bottleneck)
Expensive objects like ObjectMapper created per call
Blocking inside synchronized on virtual threads (pinned carrier threads)
After fixing: 5x throughput, 87% less heap, 79% fewer GC pauses. Same app, same tests, same JDK.
String report = "";
for (String line : logLines) {
    report = report + line + "\n";
}
This code looks good, right? The problem is what String immutability means in practice.
Every time you use +, Java creates a brand new String object, a full copy of all previous content with the new bit appended. The old one gets discarded. This happens every single iteration.
The characters being copied scale as O(n²). If you have 10,000 lines, iteration 1 copies roughly nothing, iteration 5,000 copies 5,000 characters worth of accumulated content, iteration 10,000 copies all of it. BellSoft ran JMH benchmarks on exactly this and showed that when n grows by 4x, the loop-concatenation version slows down by more than 7x, much worse than linear growth.
StringBuilder sb = new StringBuilder();
for (String line : logLines) {
    sb.append(line).append("\n");
}
String report = sb.toString();
StringBuilder works off a single mutable character buffer. One allocation. Every append writes into that buffer. One toString() at the end.
Note: Since JDK 9, the compiler is smart enough to optimize "Order: " + id + " total: " + amount on a single line. But that optimization doesn’t carry into loops. Inside a loop, you still get a new StringBuilder created and thrown away on every iteration. You have to declare it before the loop yourself, like the fix above shows.
for (Order order : orders) {
    int hour = order.timestamp().atZone(ZoneId.systemDefault()).getHour();
    long countForHour = orders.stream()
        .filter(o -> o.timestamp().atZone(ZoneId.systemDefault()).getHour() == hour)
        .count();
    ordersByHour.put(hour, countForHour);
}
This looks reasonable. You’re grouping orders by hour. But look at what’s happening: for each order, you’re streaming over the entire list to count how many orders share that hour. If you have 10,000 orders, that’s 10,000 iterations times 10,000 stream elements. That’s 100 million comparisons for what should be a single pass.
In my demo app, this exact pattern was the single largest CPU hotspot. It accounted for nearly 71% of CPU stack samples in the JFR recording.
for (Order order : orders) {
    int hour = order.timestamp().atZone(ZoneId.systemDefault()).getHour();
    ordersByHour.merge(hour, 1L, Long::sum);
}
One pass. O(n). Each order increments its hour’s count directly. You could also use Collectors.groupingBy(… Collectors.counting()) to do it in a single stream pipeline, but the merge approach is clear and avoids the overhead of creating a stream at all.
If you see a .stream() call inside a loop body, that’s a signal to pause and check whether you’re doing redundant work.
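The single-pipeline alternative mentioned above can be sketched like this. Order here is a minimal stand-in record for the article's type, and the zone is pinned to UTC so the result doesn't depend on the machine's settings:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class HourlyCounts {
    // Minimal stand-in for the article's Order type.
    public record Order(Instant timestamp) {}

    // One pass over the data: groupingBy extracts the hour,
    // counting() tallies the orders that share it.
    public static Map<Integer, Long> byHour(List<Order> orders) {
        return orders.stream().collect(Collectors.groupingBy(
                o -> o.timestamp().atZone(ZoneId.of("UTC")).getHour(),
                Collectors.counting()));
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order(Instant.parse("2026-01-01T09:15:00Z")),
                new Order(Instant.parse("2026-01-01T09:45:00Z")),
                new Order(Instant.parse("2026-01-01T17:05:00Z")));
        System.out.println(byHour(orders).get(9)); // 2
    }
}
```

Like the merge version, this is a single O(n) pass; the only extra cost is the stream pipeline itself.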
public String buildOrderSummary(String orderId, String customer, double amount) {
    return String.format("Order %s for %s: $%.2f", orderId, customer, amount);
}
String.format() tends to get recommended as the clean, readable way to build strings. It is readable, but it is also the slowest string-building option in Java when you’re calling it frequently.
Baeldung ran JMH benchmarks across every string concatenation approach in Java. String.format() came in last in every category. It has to parse the format string every call, run regex-based token matching, and dispatch through the full java.util.Formatter machinery. StringBuilder was consistently the fastest.
return "Order " + orderId + " for " + customer + ": $" + String.format("%.2f", amount);
Use String.format() for the numeric formatting where you need it, and let the compiler optimize the rest. Or just use a StringBuilder if you need full control.
String.format() is fine for config loading, startup code, error messages, anywhere that runs infrequently. Move it out of anything your profiler says is hot.
Long sum = 0L;
for (Long value : values) {
    sum += value;
}
What’s actually happening at the JVM level:
Long sum = Long.valueOf(0L);
for (Long value : values) {
    sum = Long.valueOf(sum.longValue() + value.longValue());
}
Each iteration unboxes sum to get a long, adds, then boxes the result back into a new Long object. With a million elements, you’ve created a million Long objects that the GC has to clean up. Each Long on a 64-bit JVM takes roughly 16 bytes on the heap. That’s 16MB of heap churn for what should be a simple addition loop.
long sum = 0L; // primitive, not the wrapper
for (long value : values) {
    sum += value;
}
Where this tends to sneak in: aggregation and processing loops. Summing metrics, accumulating counters, building stats. Boxed types creep in because someone used Long in a collection signature somewhere upstream and nobody thought about what it costs downstream in the loop. That can be legitimately easy to miss.
Watch for Integer, Long, or Double used as local loop variables or accumulators. Also watch for collections of boxed types, like List&lt;Long&gt; or Map&lt;Integer, Long&gt;, in frequently-called code. Every .get() and .put() involves a box/unbox round trip that you’re paying for silently.
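When an upstream signature does force a boxed collection on you, one option is to unbox once at the boundary with the primitive stream types. A sketch (the class and method names are illustrative):

```java
import java.util.List;

public class BoxedSum {
    // mapToLong switches to a LongStream: each element is unboxed
    // exactly once, and the running sum stays a primitive long.
    public static long sum(List<Long> values) {
        return values.stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1L, 2L, 3L))); // 6
    }
}
```

A plain loop over a primitive accumulator, as in the fix above, works just as well; the point is that nothing boxed accumulates per iteration.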
public int parseOrDefault(String value, int defaultValue) {
    try {
        return Integer.parseInt(value);
    } catch (NumberFormatException e) {
        return defaultValue;
    }
}
If this method is called in a tight loop with a meaningful percentage of non-numeric inputs, you have a performance problem that might not look like one.
The expensive part is Throwable.fillInStackTrace(), which runs inside the Throwable constructor every time an exception is created. It walks the entire call stack via a native method and materializes it into StackTraceElement objects. The deeper your call stack, the more expensive this is. Imagine a situation in a framework like Spring where this can get very deep. Norman Maurer from the Netty project benchmarked this and the difference is significant. Baeldung’s JMH results show that throwing an exception makes a method run hundreds of times slower than a normal return path.
This isn’t theoretical. There’s a real production story of a Scala/JVM templating system that cut response time by 3x after discovering that a NumberFormatException was being thrown on every field of every template render. Every time a field name was being tested to see if it was a numeric index, it threw.
public int parseOrDefault(String value, int defaultValue) {
    if (value == null || value.isBlank()) return defaultValue;
    for (int i = 0; i < value.length(); i++) {
        char c = value.charAt(i);
        if (i == 0 && c == '-') continue;
        if (!Character.isDigit(c)) return defaultValue;
    }
    try {
        return Integer.parseInt(value);
    } catch (NumberFormatException e) {
        return defaultValue;
    }
}
Or use NumberUtils.isParsable() from Apache Commons Lang if it’s already on your classpath.
Updated: several HN commenters correctly pointed out that the fix above originally didn’t include the try-catch, which meant overflow values and edge cases like a bare ”-” would throw an unhandled exception. Updated to keep a try-catch around the final parseInt as a safety net. The pre-validation still avoids the expensive exception path for the vast majority of bad inputs, which is the point.
The principle: if invalid input is a routine case in your application, user-provided data, external feeds, anything you don’t fully control, pre-validate explicitly. Exceptions are for genuinely unexpected conditions, not for “this might be in the wrong format.”
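Related: when an exception genuinely is the right signal but it gets thrown often, Throwable's four-argument protected constructor lets you skip the stack-trace capture entirely. A sketch with an illustrative exception class:

```java
public class FastFail {
    // writableStackTrace=false skips fillInStackTrace(), the expensive
    // part of constructing a Throwable; suppression is disabled too.
    public static class ParseFailure extends RuntimeException {
        public ParseFailure(String message) {
            super(message, null, false, false);
        }
    }

    public static void main(String[] args) {
        try {
            throw new ParseFailure("not a number");
        } catch (ParseFailure e) {
            // No frames were captured, so the trace is empty.
            System.out.println(e.getStackTrace().length); // 0
        }
    }
}
```

The trade-off is that you lose the trace, so this only suits exceptions used as signals, never ones you need for diagnostics.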
public class MetricsCollector {
    private final Map<String, Long> counts = new HashMap<>();
    // Both methods synchronize on this, so they contend on a single lock.
    public synchronized void increment(String key) { counts.merge(key, 1L, Long::sum); }
    public synchronized long get(String key) { return counts.getOrDefault(key, 0L); }
}
Shared mutable state needs protection. But synchronized on the whole method means only one thread can call either method at any given time. In a service handling real concurrency, every thread calling increment() queues up waiting for every other thread to finish. The lock itself becomes the bottleneck.
private final ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();
ConcurrentHashMap handles concurrent reads and writes without locking the whole structure. LongAdder is purpose-built for high-concurrency incrementing. It distributes the counter across internal cells and outperforms AtomicLong under contention.
Worth calling out separately: Collections.synchronizedMap() wrappers have the same broad-lock problem, one lock for the entire map. ConcurrentHashMap is almost always the right replacement.
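A sketch of the replacement pattern (class and field names are illustrative): ConcurrentHashMap manages per-key entries without a global lock, and LongAdder absorbs contended increments.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class Counters {
    // One LongAdder per key: map updates are lock-striped, and
    // increments spread across LongAdder's internal cells.
    private final ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();

    public void increment(String key) {
        counts.computeIfAbsent(key, k -> new LongAdder()).increment();
    }

    public long get(String key) {
        LongAdder adder = counts.get(key);
        return adder == null ? 0L : adder.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        Counters c = new Counters();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1_000; j++) c.increment("requests");
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(c.get("requests")); // 4000
    }
}
```

Note that sum() is a snapshot, not a linearizable read, which is exactly the right trade for metrics.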
public String serializeOrder(Order order) throws JsonProcessingException {
    return new ObjectMapper().writeValueAsString(order);
}
ObjectMapper is one of the most common examples of an object that looks cheap to create but isn’t. Constructing one involves module discovery, serializer cache initialization, and configuration loading. It’s real work happening on every call here.
Same pattern with DateTimeFormatter.ofPattern("…"), new Gson(), new XmlMapper(). They’re all designed to be constructed once and reused. Creating them in a hot method means paying that setup cost on every invocation.
private static final ObjectMapper MAPPER = new ObjectMapper();

public String serializeOrder(Order order) throws JsonProcessingException {
    return MAPPER.writeValueAsString(order);
}
ObjectMapper is thread-safe once configured, so sharing a static final instance is fine. The DateTimeFormatter built-ins like DateTimeFormatter.ISO_LOCAL_DATE are already singletons. If you’re calling DateTimeFormatter.ofPattern("…") in a hot method, move it to a constant.
The heuristic: if an object’s constructor does substantial setup work and the object is stateless (or safely shareable) after construction, it should be a field or a constant, not a local variable.
This one is worth including if you’ve started using virtual threads, introduced as a production feature in Java 21.
Virtual threads work by mounting onto a small pool of platform (OS) threads called carrier threads. When a virtual thread blocks, waiting on I/O for example, the scheduler unmounts it from the carrier, freeing that carrier to run something else. That’s the whole scalability story with virtual threads.
But there’s a catch. When a virtual thread enters a synchronized block and hits a blocking operation while inside it, it can’t be unmounted. It pins the carrier thread. That platform thread is now stuck waiting, unable to serve other virtual threads, for as long as the blocking operation takes.
// This pattern can pin a carrier thread on JDK 21
public synchronized String fetchData(String key) throws IOException {
    return Files.readString(Path.of("/data/" + key)); // blocking I/O inside synchronized
}
If this happens frequently enough, all your carrier threads get pinned and your application stalls, even with thousands of virtual threads waiting to do work. Netflix ran into exactly this in production and wrote a post about debugging it.
JFR actually tells you when this is happening. The jdk.VirtualThreadPinned event fires whenever a virtual thread blocks while pinned, and by default it only triggers when the operation takes longer than 20ms, so it’s already filtered to the cases that actually matter.
private final ReentrantLock lock = new ReentrantLock();
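A sketch of fetchData with the lock swapped in for synchronized (the baseDir parameter is added here for illustration; the original hardcodes /data):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.locks.ReentrantLock;

public class DataFetcher {
    private final ReentrantLock lock = new ReentrantLock();
    private final Path baseDir; // the article's example uses Path.of("/data")

    public DataFetcher(Path baseDir) {
        this.baseDir = baseDir;
    }

    // Unlike a synchronized block, a virtual thread parked on lock()
    // or blocked in the read below can unmount from its carrier on JDK 21.
    public String fetchData(String key) throws IOException {
        lock.lock();
        try {
            return Files.readString(baseDir.resolve(key));
        } finally {
            lock.unlock();
        }
    }
}
```

The lock/try/finally shape is the standard translation: same mutual exclusion, but the JVM's virtual-thread scheduler can see through it.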
...
Read the original on jvogel.me »
The final report of the Expert Panel on the 28 April 2025 blackout in continental Spain and Portugal identifies the causes of the blackout and outlines recommendations to strengthen the resilience of Europe’s interconnected electricity system. It was prepared by a technical Expert Panel of 49 members, including representatives from Transmission System Operators (TSOs), Regional Coordination Centres (RCCs), ACER and National Regulatory Authorities (NRAs), and was chaired by experts from two unaffected TSOs. The investigation concludes that the blackout resulted from a combination of many interacting factors, including oscillations, gaps in voltage and reactive power control, differences in voltage regulation practices, rapid output reductions and generator disconnections in Spain, and uneven stabilisation capabilities. These factors led to fast increases of voltage and cascading generation disconnections in Spain, resulting in the blackout in continental Spain and Portugal.

Based on these findings, the Expert Panel sets out recommendations addressing each of the factors identified in the report to help prevent similar events in the future. These include strengthened operational practices, improved monitoring of system behaviour, and closer coordination and data exchange among power system actors. The findings of the investigation also underscore the need for regulatory frameworks to adapt in order to support the evolving nature of the power system.

The 28 April blackout was a first-of-its-kind event, and the recommendations aim to strengthen system resilience with solutions that are already technologically deployable.
This blackout highlights how developments at the local level can have system-wide implications and underlines the importance of maintaining strong links between local and European system behaviour and coordination, while ensuring that market mechanisms, regulatory frameworks and energy policies remain aligned with the physical limits of the system.

Download the final report of the Expert Panel
On 28 April 2025, at 12:33 CEST, the power systems of continental Spain and Portugal experienced a total blackout. A small area in Southwest France close to the Spanish border experienced disruptions for a very short duration, and several industrial consumers and generators were affected. The rest of the European power system did not experience any significant disturbance as a result of the incident.

This was the most severe blackout incident on the European power system in over 20 years, and the first ever of its kind.

Figure 1 — Geographic area affected by the incident of 28 April 2025.
Following the blackout, on 12 May 2025, ENTSO-E set up an Expert Panel in line with Article 15(5) of the Commission Regulation (EU) 2017/1485 of 2 August 2017 establishing a guideline on electricity transmission system operation (SO GL) and the Incident Classification Scale (ICS) Methodology. The ICS Methodology is the framework for classifying and reporting incidents in the power system and for organising the investigation of such incidents, and is especially relevant to the work of the Expert Panel. It is noted that the investigation of the Expert Panel was performed in line with the version of the ICS Methodology applicable at the time of the incident. Under the legal requirements of both SO GL and the ICS Methodology, when an incident is classified according to the ICS Methodology criteria as a scale 3 incident (blackout), the Expert Panel is tasked to investigate the root causes of the incident, produce a comprehensive analysis, and make recommendations in a final report, published on 20 March 2026.

The Expert Panel consists of representatives from TSOs, the Agency for the Cooperation of Energy Regulators (ACER), National Regulatory Authorities (NRAs), and Regional Coordination Centres (RCCs). The Panel is led by experts from TSOs not directly affected by the incident and includes experts from both affected and non-affected TSOs. The Expert Panel is led by Klaus Kaschnitz (APG, Austria) and Richard Balog (MAVIR, Hungary).
Olivier Arrivé — as chair of the System Operation Committees
Robert Koch — as convenor of the Steering Group Resilient Operation
Rafal Kuczynski — as convenor of the Regional Group Continental Europe
Experts from TSOs and RCCs participating in the Expert Panel
Experts from ACER and NRAs participating in the Expert Panel
On 12 May 2025, the Expert Panel initiated its investigation into the causes of the blackout. In accordance with the ICS Methodology, the investigation is conducted in two phases.

In the first phase of the investigation, the Expert Panel collected and analysed all data on the incident available at the time to reconstruct the events of 28 April. At the end of this first phase, the Expert Panel delivered its factual report, released on 3 October 2025. It describes the system conditions that prevailed on 28 April 2025, provides a detailed sequence of events during the incident and describes how the system was restored after the incident. The report highlights the exceptional and unprecedented nature of this incident: the first time a cascading series of disconnections of generation components, along with voltage increases, has been part of the sequence of events leading to a blackout in the Continental Europe Synchronous Area.

Download the factual report of the Expert Panel

In the second phase, which started immediately after the finalisation of the factual report, the Panel focused on the identification and analysis of the root causes of the incident. The Expert Panel specifically evaluated the cascading disconnections of generation in the system, the voltage control and the oscillation mitigation measures. The Panel also assessed the performance of generators in regard to protection settings and contribution to voltage control, as well as the performance of the system defence plans, and analysed the various steps of the restoration phase. Based on the findings, the Panel sets out recommendations in its final report addressing each of the factors identified to help prevent similar events in the future.

Download the final report of the Expert Panel
A dedicated joint workshop of the System Operations European Stakeholder Committee (SO ESC) and of the Grid Connection European Stakeholder Committee (GC ESC), chaired by ACER, was organised on 18 July 2025 to inform stakeholders of the progress of the Expert Panel’s investigation. A second joint workshop of SO ESC and GC ESC took place on 13 October 2025 to discuss the factual report. Detailed information about the role, composition and work of these two Committees is available on the ENTSO-E website.
...
Read the original on www.entsoe.eu »