10 interesting stories served every morning and every evening.
Zen gives you access to a handpicked set of AI models that OpenCode has tested and benchmarked specifically for coding agents. No need to worry about inconsistent performance and quality across providers: use validated models that work.
...
Read the original on opencode.ai »
Chuck Norris, the martial arts champion who became an iconic action star and led the hit series “Walker, Texas Ranger,” has died. He was 86.
Norris was hospitalized in Hawaii on Thursday, and his family posted a statement Friday saying that he died that morning. “While we would like to keep the circumstances private, please know that he was surrounded by his family and was at peace,” his family wrote.
Ryan Coogler Makes Actor Awards History With ‘Sinners’ as First Person to Direct Two Cast Ensemble Winners
SAG’s Actor Awards Winners: ‘Sinners’ Wins Top Prize, ‘The Studio’ and ‘The Pitt’ Lead for TV
“To the world, he was a martial artist, actor, and a symbol of strength. To us, he was a devoted husband, a loving father and grandfather, an incredible brother, and the heart of our family,” the statement continued. “He lived his life with faith, purpose, and an unwavering commitment to the people he loved. Through his work, discipline, and kindness, he inspired millions around the world and left a lasting impact on so many lives.”
As an action star, Norris had a degree of credibility that most others could not match. Not only did he appear opposite the legendary Bruce Lee in the 1972 film “The Way of the Dragon” (aka “Return of the Dragon”), but he was a genuine martial arts champion who held a black belt in judo, a 3rd degree black belt in Brazilian Jiu-Jitsu, a 5th degree black belt in Karate, an 8th degree black belt in Taekwondo, a 9th degree black belt in Tang Soo Do and a 10th degree black belt in Chun Kuk Do.
Norris was extremely prolific in the late 1970s and ’80s, starring in “The Delta Force” and “Missing in Action” films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “Lone Wolf McQuade” (1983), “Code of Silence” (1985) and “Firewalker” (1986).
Norris joined a bevy of other action stars in the Sylvester Stallone-directed “The Expendables 2” in 2012 after an absence from the screen of seven years.
While he scored high on credibility, Norris did not leaven his work with humor the way Arnold Schwarzenegger, Bruce Willis and Jackie Chan did. He was nevertheless the action star of choice for those seeking an all-American icon.
In 1984, Norris starred in “Missing in Action,” the first in a series of films centered around the rescue of American POWs purportedly still held after being captured during the Vietnam War. (Norris’ younger brother Wieland had been killed while serving in Vietnam, and the actor dedicated his “Missing in Action” films to his brother’s memory, but critics of Norris and producer Cannon Films maintained that the films borrowed too heavily from the central conceit of Stallone’s highly successful “Rambo” films.)
As Norris’ movie career began to wane, he made a timely move to television, starring in the CBS series “Walker, Texas Ranger,” inspired by his film “Lone Wolf McQuade.” The program ran from 1993 to 2001, and the actor reprised the role of Cordell Walker in the TV movies “Walker Texas Ranger 3: Deadly Reunion” (1994) and “Walker, Texas Ranger: Trial by Fire” (2005). (Also in 2005 Norris made the last film in which he starred, the straight-to-DVD “The Cutter.”)
In his later years, Norris was portrayed in memes documenting fictional, frequently absurd feats associated with him, such as “Chuck Norris kills 100% of germs” and “Paper beats rock, rock beats scissors, and scissors beats paper, but Chuck Norris beats all 3 at the same time.” He also appeared in infomercials for workout equipment and became increasingly outspoken as a political conservative.
Carlos Ray Norris was born in Ryan, Okla.; his father served as a soldier in World War II. In 1958 he joined the Air Force as an Air Policeman (AP, analogous to the Army’s MPs). While serving at Osan Air Base in South Korea, Norris first acquired the nickname “Chuck” and began his training in Tang Soo Do (aka tangsudo), leading to his achievements in other martial arts and to his development of the hybrid style Chun Kuk Do (“The Universal Way”). He returned to the U.S. and served as an AP at March Air Force Base in California.
After his 1962 discharge, Norris worked for aerospace company Northrop and opened a chain of karate schools; celebrity clients at the schools included Steve McQueen, Chad McQueen, Bob Barker, Priscilla Presley, Donny Osmond and Marie Osmond.
Norris made his acting debut in an uncredited role in the 1969 cult Matt Helm film “The Wrecking Crew,” starring Dean Martin. Norris met Bruce Lee at a martial arts demonstration in Long Beach, Calif., and played the nemesis of Lee’s character in the 1972 movie “The Way of the Dragon” (retitled “Return of the Dragon” for U.S. distribution). In 1974 McQueen spurred Norris to begin taking acting classes at MGM.
Norris first starred in the 1977 action film “Breaker! Breaker!,” in which he played a trucker searching for his brother, who has disappeared in a town run by a corrupt judge.
The actor proved his box office mettle with his subsequent films, “Good Guys Wear Black” (1978), “The Octagon” (1980), “An Eye for an Eye” (1981) and “Lone Wolf McQuade.”
Norris began starring in movies for Cannon Films in 1984. Over the next four years, he became Cannon’s most prominent star, appearing in eight films, including the three “Missing in Action” films, “Code of Silence” (qualitatively, one of his best films), the two “Delta Force” films and “Firewalker.” Norris’ brother Aaron Norris produced several of these films, and also became a producer on “Walker, Texas Ranger.”
A longtime supporter of conservative politicians, he wrote several books with Christian and patriotic themes.
Norris was twice married, the first time to Dianne Holechek from 1958 until their divorce in 1988.
He is survived by his second wife Gena O’Kelley, whom he married in 1998; three sons, Eric, Mike and Dakota; daughters Danilee and Dina; and a number of grandchildren.
...
Read the original on variety.com »
Reporters who want to get in touch: Drop an email at aicpnay@proton.me. In case my Proton account gets blocked or you don’t get an answer, tweet using the hashtag #AICPNAY with contact details and I’ll do my best to get in touch with you.
At its core, this article argues that Delve fakes compliance: it creates the appearance of compliance without the underlying substance.
Delve achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance. Their “US-based auditors” are Indian certification mills operating through empty US shells and mailbox agents. Auditors breach independence rules by signing off anyway, leaving companies unknowingly exposed to criminal liability under HIPAA and hefty fines under GDPR.
Delve hands customers fabricated evidence of board meetings, tests, and processes that never happened. The platform forces companies to choose between adopting fake evidence or performing mostly manual work with little real automation or AI. It produces audit reports that falsely claim independent verification while Delve itself effectively wears the auditor hat, generating identical reports for all clients, including Lovable, Bland, Cluely, and NASDAQ-traded Duos Edge. It hosts trust pages that list security measures that were never implemented.
When clients ask hard questions, Delve dodges. They demand calls where founders charm, promise, and namedrop Lovable, Bland, and “every Fortune 500” as proof. When that fails, donuts arrive.
Preface - How it all started
What the leak and our research reveals
Two months ago, an email went out to a few hundred Delve clients informing them that Delve had leaked their audit reports, alongside other confidential information, through a Google spreadsheet that was publicly accessible. This email also claimed that Delve’s audit reports were fraudulent. The company I work for was one of those clients that received that email.
Instead of providing clarification and being transparent, Delve’s leadership decided to go into deny-and-deflect mode. When we asked them directly for clarification, they flat-out denied everything.
This was serious, as it potentially involved nearly all of Delve’s clients and raised questions about the validity of the compliance reports Delve’s clients had received.
Multiple colleagues in my network received the same email. Having the shared experience of being underwhelmed with the Delve experience, and having the overall sense that something fishy was going on, we decided to pool resources and investigate together. This article is the result of that collaboration.
The reason we felt the way we did was how little actual work any of us had to perform to become ‘compliant’, combined with a product practically devoid of any real AI. It mostly felt like a SOC 2 template pack with a thin SaaS platform wrapper where you simply adopt and sign all templated documents. No custom tailoring, no AI guidance, no real automation. Just pre-populated forms that required you to click “save”.
Some of us have gone through compliance before and felt there was a huge mismatch between our past experience and our experience with Delve.
In this article I will walk you through a typical experience with Delve, the leak that exposed their operation, and how it revealed the fraud we uncovered. I will show, among other things, how each of the following holds:
* Delve breaches AICPA/ISO rules by acting as auditor, generating pre-drafted assessments, tests, and conclusions
* Delve relies on audit firms that rubber stamp reports because genuine independent verification would expose the evidence as fabricated or deficient
* Delve misleads clients by claiming reports are produced by US-based CPA firms, when in reality they are produced by Delve and rubber stamped by Indian certification mills
* Delve leads clients to believe they are compliant when they are not
* Delve helps clients mislead the public by hosting trust pages that contain security measures that were never implemented
* Delve lies to clients when directly questioned, denying documented facts about the leak and report generation
* Delve markets AI-driven automation while the product is practically devoid of AI, relying on pre-populated templates, manual forms, and fabricated evidence
* Delve’s product is unable to get companies truly compliant
* Delve’s platform forces companies to choose between adopting fake evidence or performing mostly manual work with little real automation
* Unable to deliver real compliance through its platform, Delve depends on fraudulent auditors who rubber stamp reports for clients, falling back on off-platform manual work with external vCISOs and good auditors only when complaints or a client’s high profile threaten its business interests
* Delve’s process results in clients violating GDPR and HIPAA requirements, exposing them to criminal liability under HIPAA and fines up to 4% of global revenue under GDPR
For any prospect considering Delve or any current Delve client:
Delve loves claiming this is all an attempt by their “jealous competitors” to fraudulently discredit them. When clients ask concrete questions, they dodge answering the question and instead coax you into getting on a call with them, where they charm you and tell you everything you want to hear. They’ll even throw in some donuts.
One other tactic they frequently employ is to promise that issues won’t arise again because their old process, product or auditor has been, or is about to be, replaced with something better whose results are just around the corner. This is a deflection and delaying technique. Whenever they start being frequently called out on using a particular auditor, they’ll switch to another equally dodgy one that hasn’t been flagged yet. If they’re called out on their deficient product, they point to a superior tier or new feature on the timeline that will solve everything.
Expressing unhappiness and a desire to leave will lead to Delve pairing you with an external virtual CISO that will help you do compliance right. You should know that this means you will have to do all the work manually.
If you are concerned about Delve’s conduct and practices, ask them questions in writing. Do not allow them to deflect. Do not get on a call with them. In the closing words at the end of this article you’ll find more advice.
All information contained in this article can be reproduced by consulting public sources and having access to the Delve platform.
All screenshots and information are current as of mid-January 2026.
Here we set the stage. We’ll quickly list all parties involved, and will provide some background context that is useful to understand the rest of this article.
High-profile companies like those listed in the image above, and hundreds of others, are affected by this. Also affected are companies that partner with Delve’s clients, having been misinformed about the risks involved in partnering with those clients.
Many of those companies process PHI of millions of US citizens on a daily basis. Some of those even serve national defense interests.
The audit firms listed in the illustration above were identified during our research process, but are not necessarily all audit firms used by Delve.
From what we were able to establish, 99%+ of Delve’s clients went through either Accorp or Gradient over the past 6 months.
In the wake of Delve’s leak in December, Delve is reported to have switched to Glocert as their primary ISO 27001 auditing firm.
* Karun Kaushik and Selin Kocalar - The founders of Delve
The above individuals knowingly participated in Delve’s deliberate misconduct regarding audit practices.
Delve is a compliance company. They help businesses get certified for frameworks like SOC 2, ISO 27001, HIPAA, and GDPR. Companies need these certifications to prove they handle data in a secure way and to unlock deals with larger customers who require them.
Compliance has traditionally been a time-consuming process that involved lots of spreadsheets. It used to be manual, expensive and slow.
To give you an idea of what this is all about, we will primarily focus on SOC 2 in this article. SOC 2 is the most commonly pursued framework in the US. Practically all tech companies that sell to enterprises are expected to be ‘SOC 2 compliant’, which basically means they’ve had to have a SOC 2 audit performed in the last year.
Getting a clean SOC 2 report means hiring a CPA firm to review your security controls. If they successfully verify the security you claim to have, through a lot of evidence you provide them, they issue a report saying your security measures are sound. This report becomes proof you can show customers and investors.
SOC 2 and ISO 27001, the international standard that serves as SOC 2’s counterpart in Europe, are voluntary frameworks. HIPAA and GDPR are not.
HIPAA applies to any company handling health records in the US. Penalties are severe, with willful neglect punishable by criminal charges and prison time.
GDPR covers any company processing data of EU residents, regardless of where the company is based. Fines run up to 4% of global annual revenue or 20 million euros, whichever is higher.
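To make that cap concrete, here is a minimal sketch of the arithmetic (the revenue figure is hypothetical, and this is an illustration, not legal advice):

# Hypothetical illustration of the GDPR upper-tier fine ceiling.
def gdpr_max_fine_eur(global_annual_revenue_eur: float) -> float:
    # The cap is the greater of 4% of global annual revenue or 20 million euros.
    return max(0.04 * global_annual_revenue_eur, 20_000_000.0)

print(gdpr_max_fine_eur(1_000_000_000))  # 40000000.0 -> 40 million euros for 1 billion in revenue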
These frameworks carry the force of law because they protect information people cannot easily protect themselves: medical histories, genetic data, biometric identifiers, location patterns, and the full record of their digital lives.
The companies that help other companies become compliant through automation are called GRC automation platforms.
Delve was founded in 2023 by Karun Kaushik and Selin Kocalar, both Forbes 30 Under 30 members and MIT dropouts who met as freshmen. They started with a medical AI scribe, pivoted to compliance after hitting HIPAA headaches themselves, and went through Y Combinator in 2024.
In July 2025, Delve raised $32 million in Series A funding led by Insight Partners. Before that they had raised a $3.3 million seed round and went through Y Combinator.
Delve’s pitch is speed through AI. They claim to get companies compliant in days rather than months, using what they call “agentic AI” through an “AI-native” platform.
Their marketing promises AI agents that automatically collect evidence, write reports, and monitor compliance gaps without human busywork.
The reality, as this article will show, is different.
SOC 2 audits operate under strict independence requirements designed to preserve trust in the attestation process. These rules exist precisely to prevent the kind of conduct this article exposes. While Delve offers compliance services for HIPAA, GDPR, and ISO 27001, this article focuses primarily on SOC 2. The rules surrounding those other frameworks would require their own detailed treatment; the goal here is to establish a clear pattern in Delve’s behavior, not to catalog every possible regulatory violation.
The fundamental principle is simple: the party implementing controls cannot be the party attesting to their effectiveness. AICPA’s Code of Professional Conduct states that members must “accept the obligation to act in a way that will serve the public interest, honor the public trust, and demonstrate a commitment to professionalism.” When accountants cannot be expected to make truthful representations, “we lose the ability to assess any public or private company’s actual performance.”
For SOC 2 specifically, AT-C Section 205 requires practitioners to maintain independence in both fact and appearance throughout the engagement. The practitioner must not assume management responsibilities or act as an advocate for the subject matter being examined.
Under AT-C Section 315, auditors must “seek to obtain reasonable assurance that the entity complied with the specified requirements, in all material respects, including designing the examination to detect both intentional and unintentional material noncompliance.” This requires independent design of test procedures, independent evaluation of evidence, and independent formation of conclusions.
The auditor’s report must represent their own professional judgment, not pre-written conclusions provided by the entity being audited or its platform vendor.
Delve’s model inverts this structure. By generating auditor conclusions, test procedures, and final reports before any independent review occurs, Delve places itself in the role of both implementer and examiner. This is not a technicality. It is a structural fraud that invalidates the entire attestation.
For readers who wish to verify these requirements:
This story has many threads, and I struggled with where to start. Do I lead with the leaked documents? The auditor shell game? The fake AI?
In the end, I felt that starting with my company’s experience, which illustrates many of the points I make and prove later on, was the most effective way to get the picture across.
I collaborated on this article with users from other Delve clients. We compared notes. The patterns I describe below showed up across all of our accounts. Unless mentioned otherwise, nothing here is cherry-picked. Later sections dissect specific mechanisms.
This section shows what it is like to be a Delve client.
Delve goes all out during the sales process. Through their marketing and sales pitch they continuously emphasize being the fastest to get companies compliant, thanks to their AI. They repeatedly highlight the impressive companies they work with, how they partner with the best and most respected US-based audit firms, and how their work is accepted by Fortune 500 companies.
Within the first few minutes of the demo call we did with Delve, they had already mentioned how companies like Lovable, Bland, WisprFlow and hundreds of others are choosing Delve over competitors. The main point they kept driving home was that their revolutionary tech, supposedly way ahead of any other company’s, enabled compliance to be achieved with just 10 hours of work instead of the hundreds it used to take.
Their demo was short but did a good job showing how automated everything supposedly was. They clicked through everything pretty quickly, and stopped at every place that made the process look easy or automated. They showed an integration pulling information out of AWS, the “ready to go” default policies you could just adopt without modification, the reasonably short list of tasks, the AI questionnaire automation, the beautiful trust page you could publish and their ‘AI-copilot’ chatbot.
What they didn’t show, however, was that most integrations were fake and required manual screenshots, that tons of forms need to be manually filled out, that the trust page wasn’t accurate and they didn’t show any of the tasks that had pre-created fake evidence.
Their “best offer” for SOC 2 started at $15,000, which we were able to negotiate down to $13,000 on the call. They kept emphasizing what a great deal it was, and how much value we were getting that other companies paid more for. They remained pretty inflexible on pricing until we made clear we were considering a competitor. We must have been told at least four times that they couldn’t go any lower because they’d make a loss. There was a lot of pressure and posturing, but the price quickly dropped to just $6,000 when they realized we were serious about going elsewhere, and they threw in ISO 27001 and a 200-hour penetration test as well.
They pushed us to sign within 24 hours or lose that good a deal, so we decided to just move forward and get it over with. In hindsight, there were many red flags, but we just wanted to get the job done quickly and move on. If Delve was good enough for companies like Lovable, they had to be doing something right. Right?
Once you’ve been invited to the Delve platform and log in for the first time, you’ll be greeted with an interface that reveals there are four categories for activities:
But even before you get started doing any real work, you can go into the trust tab and activate and publish the trust page for your company. You’d think it would probably be a very minimal trust page with everything failing at first, since you haven’t done any work, right?
Nope, you immediately get a fully populated trust page that would have you believe you’re running the most secure company on earth. Delve’s trust page presented our company as fully secure before we had completed any compliance work, enabling us to close deals based on misrepresented security claims.
Noteworthy is that the list hadn’t changed in any way after we finished compliance, but it still wasn’t truthful. My expectation was that the list reflected the security you’d get at the end of Delve’s process, but it took getting there to learn that wasn’t true either.
It says we did vulnerability scanning and a pentest, when we only ever did the scan. It says we did data recovery simulations, which we never did. It says we remediated vulnerabilities, which we never did.
It is literally a made-up list of security measures of which more than half are not implemented, or even supported or addressed by Delve’s process and platform.
In short, this product is built to help companies fake security rather than communicate their real security.
When you do actually decide to get started and do the work, you’ll find that the platform expects you to perform a number of tasks across four categories:
Policies - These are all pre-created, and Delve recommends adopting them as they are.
Sadly, unless you spend a whole week manually revising them to be accurate, you will have inaccurate policies full of false promises. Every single one of Delve’s policies claims to have measures in place that Delve’s process and platform do not address. Even though we knew we’d technically be lying about our security to anyone we sent these policies to for review (clients, auditors, investors), we decided to adopt these policies because we simply didn’t have the bandwidth to rewrite them all manually.
Team - Background checks, security training and manual device security through screenshots for every employee.
This is a lot of manual work. It took each of our employees about 2 hours to get their individual tasks done: 30 minutes to manually secure and screenshot their laptop security configuration. It adds up quickly if you have a big company.
Also, we didn’t have disk encryption support on a few laptops, but that didn’t stop Delve from signing off on HIPAA:
30 minutes (or more) to manually write up a performance evaluation. 30 minutes to do all the training (watching Delve’s YouTube videos) and quizzes.
Tech - You add vendors here. This is what Delve calls integrations, but most of these are just forms where you are asked to submit screenshots.
Company - Security procedures and company-wide tasks live here. This is just a list of forms that are all pre-filled with fake evidence. You can speedrun through this by just clicking accept on every one of them.
One thing that stands out as you go through all of Delve’s tasks is that there isn’t any AI to speed up any of the work. The only thing available is an ‘AI Co-pilot’ that can provide generic advice, and that doesn’t seem to have much context beyond what form you’re in. More than half of the time the AI co-pilot would tell me the evidence in the platform was not sufficient, and it would refer me to links from other GRC platforms.
Contrary to what is promised during the sales process, the tasks are not customized to the company. You are basically dumped into an interface and told by the customer success person that you can ask questions on Slack if you get stuck.
The reality is that there will be very few times you’ll ever need any help from Delve’s team, primarily because everything in their platform is pre-populated. You won’t have to ask how to do a risk assessment, because you get the same pre-generated risks that every other Delve customer gets. You won’t have to complain about having to do board meetings, after you were explicitly promised during the sales call that this is something all other providers but Delve ask for, because you get pre-created fake board meeting notes that you can adopt as is.
Seriously, becoming compliant with Delve is nothing more than clicking through a bunch of pre-populated forms and accepting everything. Unless you want to do compliance the proper way, in which case Dropbox is as good a tool as Delve since you need to then manually collect and write everything.
Ok, you sit down to get cracking on compliance. What do you do next?
You accept the default policies that are inaccurate out of the box. Like this policy that claims an MDM is in place, when the Delve process consists of taking a manual screenshot of your Mac firewall settings:
You accept the pre-created contents for security simulation by accepting the three “security incidents”:
You do the risk assessment by adopting the ten default risks:
One of our obvious concerns was that this approach would never pass an audit, but we were explicitly told Delve never failed a single audit in the past, and that auditors have never flagged a single issue with their process. They tried to put our minds at ease by telling us about all the amazing Delve clients that sold to Fortune 500 companies using the exact same process.
Delve continuously reminding us that they serve clients like Lovable, Bland, WisprFlow and many others ended up wearing us down, so we just took their word for it and moved on.
When the time comes to actually hook up your stack to Delve, so that Delve can do that ‘continuous monitoring’ thing, you’ll find that the vast majority of their integrations don’t integrate with anything at all. They are just containers for screenshots you’ll have to go out and manually collect.
Imagine my surprise when I learned that AI-native compliance would mean I’d have to spend many hours manually collecting screenshots and filling out forms. I truly feel like a mindless agent in what Delve calls “the agentic experience”.
Here you can see how the Linear ‘integration’ consists of Manual Tests and Forms:
On the employee tab, you manually do background checks through Certn, fill out more forms, watch useless YouTube videos, and manually screenshot laptop security. For 100 employees, all 100 of them have to manually secure laptops once and upload screenshots.
You also do manual performance reviews for every employee with no way to pull data from other solutions. Lots of typing if you have 20+ employees.
...
Read the original on substack.com »
I want to speak to you directly, as an engineer who has spent his career building technology that people depend on every day. Windows touches more people’s lives than almost any technology on Earth. Every day, we hear from the community about how you experience Windows. And over the past several months, the team and I have spent a great deal of time analyzing your feedback. What came through was the voice of people who care deeply about Windows and want it to be better.
Today, I’m sharing what we are doing in response. Here are some of the initial changes we will preview in builds with Windows Insiders this month and throughout April.
More taskbar customization, including vertical and top positions: Repositioning the taskbar is one of the top asks we’ve heard from you. We are introducing the ability to reposition it to the top or sides of your screen, making it easier to personalize your workspace.
Integrating AI where it’s most meaningful, with craft and focus: You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well‑crafted. As part of this, we are reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets and Notepad.
Reducing disruption from Windows Updates: Receiving updates should be predictable and easy to plan around, so we’re giving you more control. This includes the ability to skip updates during device setup to get to the desktop faster, restart or shut down without installing updates and pause updates for longer when needed, all while reducing update noise with fewer automatic restarts and notifications.
Faster and more dependable File Explorer: File Explorer is one of the most used surfaces in Windows. Our first round of improvements will focus on a quicker launch experience, reduced flicker, smoother navigation and more reliable performance for everyday file tasks.
More control over widgets and feed experiences: Widgets should feel helpful and relevant, not distracting or overwhelming. We’re introducing quieter defaults, more control over when and how widgets appear, and improved personalization for the Discover feed.
A simpler, more transparent Windows Insider Program: The Windows Insider Program is how you help shape the future of Windows, and it should be easy to understand what to expect and how to participate. We are implementing changes to make it easier for you to navigate with clearer channel definitions, easier access to new features, higher quality builds, better visibility into how your feedback shapes Windows, and more opportunities to engage directly with us.
Improved Feedback Hub, available starting today: Your feedback is essential to improving Windows, and it should be easy to share and see what others are saying. Today, we’re rolling out the largest update to Feedback Hub yet to our Insiders, with a redesigned experience that makes it faster and easier to submit feedback and engage with the community.
Building on these changes, what follows below is our broader plan and areas of focus for the year to raise the bar on Windows 11 quality. The work is underway. You can expect to see tangible progress that you’ll be able to feel as you preview builds from us throughout the rest of the year.
Last night I had the chance to sit down with a small group of Windows Insiders here in Seattle to listen, to answer questions, and to share more about where we’re headed. The Seattle meetup was the first of several stops our team will be making to engage in person, in more cities around the world, to connect with the Windows community.
Thank you for holding us to a high standard. Windows is as much yours as it is ours. We’re committed to strengthening its foundation and delivering innovation where it matters, for you.
Please keep the feedback coming, to help us shape the future of Windows together.
What follows is our plan to raise the bar on Windows 11 quality this year, with a focus on performance, reliability and well-crafted experiences. These areas have meaningful impact on how you experience Windows: how fast it starts and responds, how stable it is under real workloads, and how consistent and thoughtful the experience feels.
We are focusing on making Windows 11 more responsive and consistent, so performance feels smooth and reliable.
Over the course of the year, we’re improving system performance, app responsiveness, File Explorer, and the Windows Subsystem for Linux, helping Windows stay fast as you move between apps and workloads.
Improving system performance: Reducing resource usage by Windows to free up more performance for what you’re doing.
Faster and more responsive Windows experiences, with early improvements already delivering launch time reductions in apps like File Explorer
Improved memory efficiency, lowering the baseline memory footprint for Windows, freeing up more capacity for the apps you run
More consistent performance, even under load, so apps stay responsive throughout the day
More fluid and responsive app interactions: Reducing interaction latency by moving core Windows experiences to the WinUI3 framework.
Improving the shared UI infrastructure that Windows experiences rely on, reducing interaction latency and overhead at the platform level
Faster responsiveness in core Windows experiences like the Start menu, by moving more experiences to WinUI3
Improving File Explorer fundamentals: Reducing latency and improving reliability across search, navigation and file operations.
Copying and moving large files will be faster and more reliable
Elevating the Windows Subsystem for Linux (WSL) experience: Improving performance, reliability and integration for developers using Linux tools and environments on Windows.
Better enterprise management with stronger policy control, security and governance
Reliability is the bedrock of trust. You should trust that your PC is going to be there and function when you need it most.
Across the operating system, we will focus on improving the baseline reliability of areas such as the Windows Insider Program, drivers and apps, updates and Windows Hello.
Strengthening reliability and quality of the Windows Insider Program: Making it clearer what to expect from each Insider channel, raising the quality bar for builds and strengthening feedback signals to improve build quality before broad release.
Clearer visibility into what features are included in each Insider build, so you know what to expect
More control over which new features you try, with easier switching between Insider channels to match your desired level of stability or early access
Higher quality builds entering each channel, with more rigorous validation and feedback signals before release
Stronger feedback loops across Windows so issues are identified, prioritized and addressed faster
Increasing OS, driver and app reliability: Delivering a smoother, more dependable Windows 11 experience by strengthening system stability, driver quality and app reliability across our vibrant ecosystem of silicon, ISV and OEM partners. Our priorities include:
Strengthening the Windows foundation by reducing OS level crashes, improving driver quality and app stability across our ecosystem so PCs run smoothly and reliably every day
Creating easier, faster and more stable connections with Bluetooth accessories, fewer USB-related crashes and connection losses, and improved printer discoverability and connections
More reliable camera and audio connections to increase your productivity at work and play
More consistent device wake (including further wake consistency improvements for docking scenarios) so you can get back to your work faster
Improving the Windows Update experience: Faster, more predictable updates with clearer control over restarts and timing.
Less disruption from Windows Update, moving devices to a single monthly reboot, while organizations and users who wish to get new features and fixes faster remain able to do so
More direct control over updates, including the ability to pause updates for as long as you need and restart or shut down without being forced to install them
Faster, more reliable update experiences, with clearer progress during updates and built‑in recovery to help keep devices stable if something goes wrong
Improving Windows Hello biometric authentication: We’re strengthening Windows Hello sign‑in so it feels reliable, effortless and secure, reducing friction while increasing confidence that your device recognizes you correctly.
More reliable facial recognition, so you can trust sign‑in to work when you need it
Faster and more dependable fingerprint sign‑in, with fewer retries
Easier secure sign‑in on gaming handhelds like the ROG Xbox Ally X, with full gamepad support for creating a PIN during setup and in Settings.
To us, craft is the discipline that turns functional products into loved ones through usability, polish, coherence and refinement.
This year, you will see us invest in raising the bar on the overall usability of the experience, with more opportunities for personalization, less noise, less distraction and more control across the OS. That includes being thoughtful about how and where we bring AI into Windows, leading with transparency, choice and control, so that new capabilities enhance the experience rather than complicate it.
Improving the Start and Taskbar experience: Making these core Windows surfaces more reliable, flexible and personalized so you can navigate your PC in the way that works best for you.
Start and Taskbar deliver even more consistent, dependable access to apps and files, so moving between your content feels fluid throughout the day
Expanded taskbar personalization options, including alternate taskbar positions and a smaller taskbar, giving you greater control over how this core surface fits your workflow
A more relevant Recommended section in Start will surface apps and content you care about most, with clear controls to customize the experience or turn it off
More focused user experience with fewer distractions: Making the Windows experience quieter, to help you stay focused, minimize distractions and stay in your flow.
Device setup on new Windows PCs is quieter and more streamlined, with fewer pages and reboots so getting started is simpler
Widgets surface information more intentionally by default, keeping content glanceable and reducing unnecessary interruptions
Simpler settings make it easier to personalize, opt into or turn off Widgets and feed content based on your preferences
Reduced notifications so you can stay focused throughout the day
Enhancing Search: Delivering faster, more accurate results with consistent search experience across Windows surfaces.
Find what matters faster, with search that surfaces apps, files and settings clearly so you can get to the right result quickly
Clearer and more trustworthy results, with results from content on your device easy to understand and clearly distinct from web results
A more consistent search experience across the Taskbar, Start, File Explorer and Settings
As part of this effort, we are evolving how Windows is built behind the scenes to raise the quality bar and deliver innovation where it matters most, shaped by the feedback we are hearing from you.
This includes deeper validation and broader testing across real-world hardware and usage scenarios before new experiences reach Windows Insiders, and a more intentional approach to where and how new capabilities are introduced. The result will be higher quality builds, more meaningful innovation and greater flexibility in choosing what you want to try. This is how we will continue to build and ship Windows 11, so we can deliver better experiences with greater confidence, month after month.
In line with Microsoft’s Secure Future Initiative, we will continue to make Windows more secure with every release, building in new capabilities and strengthening security by default to help protect users, devices and data.
As we improve and innovate, we look forward to your continued feedback on where we can keep making Windows better.
...
Read the original on blogs.windows.com »
[Note that this article is a transcript of the video embedded above.]
On the northern edge of Los Angeles, fresh water spills down two stark concrete chutes perched on the foothills of the San Gabriel Mountains, a place simply called The Cascades. It’s a deceptively simple-looking finish line: the end of a roughly 300-mile (or 500 km) journey from the eastern slopes of the Sierra Nevada into the city.
On November 5, 1913, tens of thousands of people climbed these hills to watch the first water arrive. When the gates finally opened, water trickled through, but that trickle quickly became a torrent. The project’s chief engineer, William Mulholland, leaned over to the mayor and shouted the line that’s been repeated ever since: “There it is, Mr. Mayor. Take it!”
That moment was profound for a lot of reasons, depending on where you live and how you feel about water rights. LA didn’t become LA by living within the limits of its local resources. Its meteoric growth into the metropolis we know was enabled by an early and extraordinary decision to reach far beyond its own watershed and pull a whole new river into town. Today, roughly a third of LA’s water comes from the Eastern Sierra through the Los Angeles Aqueduct system. That share swings with snowpack, drought, and environmental constraints, but this one piece of infrastructure helped turn a water-limited town into a world city. It’s one of the most impressive and controversial engineering projects in American history.
But to really appreciate that water in the cascades, you have to look way upstream and see what it took to get it there. It’s gravity, geology, politics, and human ambition all in a part of the state that most people never see. Let’s take a little tour so you can see what I mean. I’m Grady and this is Practical Engineering.
When most people think about aqueducts, this is what they picture: a bridge carrying water over a valley or river. And, just to be clear, these are aqueducts. But engineers often use the term more broadly to describe any type of conveyance system that carries water over a long distance from a source to a distribution point. Could be a canal, a pipe, a tunnel, or even just a ditch. In the case of the LA aqueduct, it’s all of them, plus a lot of supporting infrastructure as well.
From the center of the city, it’s about a four hour drive to the Owens River Diversion Weir. It’s not accessible to the public, but it is the official start of the LA Aqueduct, at least when it was originally built. Here, all the snowmelt and rain from a huge drainage system between the Sierra Nevada and Inyo Mountains funnel down into the Owens River, where a large concrete diversion weir peels nearly all of it out of its natural course and into a canal. This point is roughly 2,500 feet (or 750 meters) higher in elevation than the bottom of the Cascades at the downstream end, which makes it obvious why LA chose it as a source. The entire aqueduct is a gravity machine. There are no pumps pushing the water toward the city. Half a mile of elevation change feels like a lot until you realize you have to spread it out over 300 miles. It’s all achieved through careful grading and managing elevations along the way to keep the flow moving.
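To get a sense of how gentle that grading has to be, here is a rough back-of-the-envelope calculation in Python, using the rounded figures from the narration (a sketch, not survey data):

# Roughly 750 m of elevation drop spread over roughly 500 km of aqueduct.
drop_m = 750.0
length_km = 500.0

average_slope_percent = drop_m / (length_km * 1000.0) * 100.0
drop_per_km_m = drop_m / length_km

print(f"Average slope: {average_slope_percent:.2f}%")        # about 0.15%
print(f"Average drop per kilometer: {drop_per_km_m:.1f} m")  # about 1.5 m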
That care is particularly important in this upper section of the aqueduct, where the water flows in an open canal. To do this efficiently, you need a relatively constant slope from start to finish. That’s a tough thing to achieve on the surface of a bumpy earth. Following a river valley makes this easier, but you can see the twists and turns necessary to keep the aqueduct on its gentle slope toward LA.
If it seems kind of wild that a city would buy up the land and water rights from somewhere so far away, it did to a lot of the people who lived in the Owens Valley, too. A lot of the acquisitions and politics of the original LA Aqueduct were carried out in bad faith, souring relationships with landowners, ranchers, farmers, and communities in the area. The saga is full of broken promises and shady dealings. Then when the diversion started, the area dried up, disrupting the ecology of the region, making agriculture more difficult and residents even more resentful. Many resorted to violence, not against people but against the infrastructure. They vandalized parts of the aqueduct, a conflict that later became known as the California Water Wars. In one case in 1924, ranchers used dynamite to blow up a part of the canal. Later that year, they seized the Alabama Gates.
About 20 miles or 35 kilometers downstream from the diversion weir, a set of gates sits on the eastern bank of the aqueduct canal. Because it runs beside the river valley, the aqueduct captures some of the water that flows down from the surrounding mountains in addition to what’s diverted out of the Owens River, particularly during strong storms. That means it’s actually possible for the canal to overfill. The Alabama Gates serve as a spillway, allowing operators to divert water back down to the river. This also helps drain the canal for maintenance or repairs when needed.
Those Owens Valley ranchers understood exactly what the Alabama Gates controlled. Open them, and the water would run back where it had always run, down the Owens River, instead of south to Los Angeles. The resistance simmered and flared for years, but it didn’t end in the dramatic showdown at the aqueduct. Instead, it ended at a bank counter. The Inyo County Bank was run by two brothers who were also key organizers and financiers of the resistance campaign. In August 1927, an audit revealed major shortfalls and ongoing embezzlement, and the bank quickly collapsed. Residents across the valley saw their savings wiped out or frozen overnight, shattering what was left of the community’s ability to keep fighting.
The Alabama Gates weren’t just a political flashpoint though. They also marked an important dividing line in the aqueduct’s design. LA knew that even if the ranchers didn’t release the water to the river in protests, a lot of it would end up there anyway through seepage. As the canal climbed away from the valley floor and crossed more porous soil, it would naturally lose its water through the ground. So, at the Alabama Gates, the aqueduct transitions from an unlined canal to a concrete-lined channel. It’s still open to the air, so there’s no protection against evaporation or contamination, but the losses to the ground are a lot less.
This design continues for about 35 miles (or 55 kilometers) through the valley. Along the way, the aqueduct passes the remains of Owens Lake. Once a large body of water, it quickly dried up with the diversion of the Owens River. Of course, there were impacts to wildlife from the loss of water, but the bigger problem came later: dust. All the fine sediment that settled on the lakebed over thousands of years was now exposed to the hot desert sun. When the wind picked up, it filled the air with fine particulates that are dangerous to breathe. Over the years, there have been times when Owens Lake is the single largest source of dust pollution in the entire country, and LA has spent more than a billion dollars just trying to fix this problem alone. The aqueduct passing along the hillside past the lake and its challenges is a reminder that the true cost of water is often a lot more than the infrastructure it takes to deliver it.
So far, it might be obvious that this aqueduct system is pretty fragile to be making up a major part of a city’s fresh water supply. Even beyond the vandalism and political resistance, there are a lot of things that could go wrong along the way, from bank collapses, earthquakes, diversion failures, and more. That’s why Haiwee Reservoir was originally built in a narrow saddle between two hills as a kind of buffer. With a dam on either side, it stored water up so the aqueduct could keep running even during a disruption upstream. It also slowed the water down, exposing it to the hot desert sun as a natural form of UV disinfection. In the 1960s, the reservoir was reconfigured into two basins to add some flexibility. That’s because, around that time, the LA aqueduct became two. While the open-topped canal section was large enough to meet demands, the underground conduit in the next section wasn’t. So, LA built a second one in 1970 to increase the flow. If you look at this map of the Haiwee Reservoirs, you can see that water has two paths: it can flow into the second aqueduct here from the north basin, or it can pass through the Merritt Cut to the south reservoir, through the intake there, and into the first aqueduct. This setup allows for some redundancy, along with regulation and balancing of the flows between the two aqueducts. Haiwee marks the start of the long desert run, with both systems no longer in open-topped lined canals, but running underground in concrete conduits.
There are a lot of advantages to running an aqueduct in a closed conduit underground, especially one this long through a desert landscape. There’s far less evaporation and less potential for contamination. It doesn’t divide the landscape at the surface level, so there’s no need for bridges, culverts, and wildlife crossings. Going underground also offers more flexibility when it comes to topography. You don’t have to follow the contours of the surface so carefully because if you come to a hill, you can just dig a little deeper to keep the constant slope.
Of course, those benefits come with a cost. An underground conduit is more expensive than a simple channel on the surface, and not all the problems with topography are solved. This is Jawbone Canyon, one of the biggest drops for the first aqueduct. Rather than taking a major detour around it, the aqueduct descends 850 feet (or 250 meters) and then ascends back up. This type of structure is often called an inverted siphon. I’ve done a video on how these work for sewer systems, and I’ve also done a video on flood tunnels that work in a similar way, if you want to learn more after this.
Unlike the concrete conduit, which really just acts like an underground canal with a roof, this is one of the places where the water in the aqueduct is pressurized. 850 feet of water column is about 370 psi, 26 bar, or two-and-a-half Megapascals. It’s a lot of pressure. These sections of pipe had to be specially manufactured on the East Coast, where the major steel facilities were, and transported by ship because of their size. They travelled all the way around Cape Horn, since the Panama Canal was still under construction. There are actually quite a few of these siphons crossing canyons in this section of the aqueduct, but Jawbone Canyon is the biggest one.
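Those pressure figures can be sanity-checked with the basic hydrostatic relation P = rho * g * h; here is a quick sketch using the 850-foot drop mentioned above:

# Static pressure at the bottom of an 850 ft column of water.
rho = 1000.0        # density of water, kg/m^3
g = 9.81            # gravitational acceleration, m/s^2
h_m = 850 * 0.3048  # 850 feet in meters (about 259 m)

pressure_pa = rho * g * h_m
print(f"{pressure_pa / 1e6:.2f} MPa")      # about 2.54 MPa
print(f"{pressure_pa / 1e5:.1f} bar")      # about 25.4 bar
print(f"{pressure_pa / 6894.76:.0f} psi")  # about 369 psi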
A little further downstream, the LA aqueduct crosses the California Aqueduct, part of the State Water Project. That system has a connection to LA as well, but this branch at the crossing actually heads to Silverwood Lake. However, there is a transfer facility, recently completed, that can pump water out of the California Aqueduct directly into the first LA aqueduct. This creates opportunities for LA to buy water that moves through the state system and offers some flexibility in where that water ends up. There’s also a turn-in that can move water from the LA aqueduct into the California aqueduct for situations where trades make sense. The second LA aqueduct passes underneath the state canal here. And this is a good example of the differences between the first project (built in the 1910s) and the second one, built in the 1960s. Over that time, the price of labor went up a lot more than the price of materials. Where the first one carefully followed the existing topography with bends and turns to minimize the need for expensive pressurized pipe, the second one could take a more direct path, reducing labor in return for the more specialized conduit materials.
After wandering more than a hundred miles (or 160 kilometers) apart, the two Los Angeles Aqueducts come back together at Fairmont Reservoir, in the northern foothills of the Sierra Pelona Mountains. This is the last major topographic barrier on the way to Los Angeles. There was no way to go up and over without pumps, so instead they went straight through. The largest project was the Elizabeth tunnel.
Here, the two aqueducts come together again into a single watercourse. About 5 miles or 8 kilometers of excavation through everything from hard rock to loose, wet ground became one of the most difficult parts of the entire project. The tunnel required continuous temporary supports along most of its length, followed by a permanent concrete lining. It was a monumental effort for its time, and essential for more than just crossing the range: the Elizabeth Tunnel also delivers that water under pressure to San Francisquito Power Plant Number 1.
This is the largest of the eight hydroelectric plants that run along the aqueduct, capturing some of the energy from the water as it flows downward toward LA. These plants are a major part of how the project paid for itself, and they continue to serve as an important source of electricity in the region today.
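The recoverable energy follows the standard hydropower relation P = efficiency * rho * g * Q * h. The flow and head below are purely hypothetical placeholders, not published figures for San Francisquito Power Plant 1, just to show how the quantities combine:

# Hypothetical hydropower estimate for water dropping through a penstock.
efficiency = 0.90   # assumed turbine/generator efficiency
rho = 1000.0        # water density, kg/m^3
g = 9.81            # gravitational acceleration, m/s^2
flow_m3_s = 10.0    # flow rate, m^3/s (hypothetical)
head_m = 250.0      # net head, m (hypothetical)

power_mw = efficiency * rho * g * flow_m3_s * head_m / 1e6
print(f"{power_mw:.1f} MW")  # about 22.1 MW for these assumed values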
Continuing downstream, Bouquet Canyon reservoir adds another layer of operational flexibility. It helps regulate flow through the power plants and provides additional storage, a sort of insurance policy since this whole reach depends on a single major tunnel crossing the San Andreas Fault. In case of a major earthquake, it’d be best if Angelinos could avoid a simultaneous water shortage.
The aqueduct splits again just upstream of the San Francisquito Plant Number 2, which was famously destroyed by the St. Francis Dam failure. That reservoir project was designed to supplement the storage capacity along the aqueduct, but the dam failed catastrophically in 1928, just 2 years after it was completed, killing more than 400 people and destroying several parts of the aqueduct as well. The tragedy was one of the worst engineering disasters in American history. It put another stain on the aqueduct project, and it effectively ruined the reputation of William Mulholland, who was largely considered a hero in LA for all his work on the aqueduct and the city’s water system. The dam was never rebuilt, but workers restored the aqueduct to functioning service in only 12 days.
At Drinkwater Reservoir, the two aqueducts run roughly parallel through the Santa Clarita area, sometimes aboveground and sometimes below, before finally reaching the terminal structures that carry water into LA. Usually, the water stays in the conduits, which feed the two hydropower plants at the foot of the mountains. If the plants are out of service or there’s more flow than they can handle, you see excess water thundering through the cascade structures instead.
From here, the aqueduct drops out of the mountains and into the north end of the San Fernando Valley, where the water is treated and prepared for distribution. After filtration and disinfection, it’s stored in the Los Angeles Reservoir, the system’s terminal reservoir, so the city can smooth out day-to-day swings in demand even while the aqueduct’s inflow stays relatively steady.
For most of Los Angeles’ history, that “finished water storage” was out in the open air. But in the 2000s, drinking-water rules pushed utilities to add stronger protection for treated water held in uncovered reservoirs. There’s a good chance you’ve seen their solution on the Veritasium channel or elsewhere: 96 million plastic shade balls that act like a floating cover, blocking sunlight to prevent water-chemistry problems and helping keep wildlife out. They’re the final protection for this water that traveled so long to reach the city. While the LA Reservoir is, in a sense, the end of the journey for this water, the original diversion way back at the Owens River isn’t even technically the start anymore!
In 1940, LA extended the aqueduct system farther north by connecting the Mono Basin and funneling its water through tunnels into the Owens River basin. Like Owens Lake downstream, Mono Lake began drying out as well. And also like Owens Lake, lawsuits, court orders, and environmental regulations have tempered the value of this water source, forcing LA to significantly reduce diversions and implement costly restoration projects.
That’s kind of the story of the LA aqueduct in a nutshell. The project seemed obvious from an engineering perspective. There was lots of snowmelt in the mountains; the city had the technical prowess, the funding, the elevation, and the political power to reach out and take it. The result was one of the most impressive works of infrastructure of the early 20th century. And continued efforts to expand and improve the system have made it even more efficient, flexible, and valuable to the many millions of people who live in one of the most populous cities in America, delivering not only water but also hundreds of megawatts of hydropower.
But in many ways, it was not only unscrupulous but also short-sighted. Residents of the Owens Valley watched ranchland and farmland dry up as the water that had shaped their home was rerouted south. Native communities saw their homeland transformed, with access to gathering areas disrupted, places made unrecognizable, and cultural ties strained by changes they didn’t choose. Wind picked up alkaline dust from dried lakebeds. Habitats were disrupted, and the birds that depended on these waters and wetlands lost part of what made this migration corridor work. It’s easy to see why the aqueduct remains controversial, and why what we sometimes dismiss as “red tape” around major infrastructure is often completely justified due diligence. As engineers, and really, as humans, we have to try to account for costs that don’t show up on a balance sheet but can come back later as decades of lawsuits, mitigation, and restoration.
And even the aqueduct’s original thesis (that there’s reliable snowmelt up there, and a growing city down here) is starting to falter. In recent decades, the mountains have delivered less predictable runoff: more swings, more years when the timing is wrong, and more uncertainty about what “normal” even means anymore. California’s climate has always moved in long cycles, but the margin for error is thinner now, and no one can say with much confidence when or if the moisture the state depends on will return to its old pattern.
The hopeful part is that this is exactly where engineering makes a difference: at the messy intersection of geology, climate, culture, politics, and human need. The Los Angeles Aqueduct is a case study in what we can build when we’re ambitious, but also what happens when we treat a landscape like a machine with only one output. The next era of water engineers can learn a lot from it.
...
Read the original on practical.engineering »
In an odd approach to improving customer tech support, HP allegedly implemented mandatory 15-minute wait times for people calling the vendor for help with their computers and printers in certain geographies.
Callers from the United Kingdom, France, Germany, Ireland, and Italy were met with the forced holding periods, The Register reported on Thursday. The publication cited internal communications it saw from February 18 that reportedly said the wait times aimed to “influence customers to increase their adoption of digital self-solve, as a faster way to address their support question. This involves inserting a message of high call volumes, to expect a delay in connecting to an agent and offering digital self-solve solutions as an alternative.”
Even if HP’s telephone support center wasn’t busy, callers would reportedly hear:
We are experiencing longer waiting times and we apologize for the inconvenience. The next available representative will be with you in about 15 minutes.
To quickly resolve your issue, please visit our website support.hp.com to check out other support options or find helpful articles and assistant to get a guided help by visiting virtualagent.hpcloud.hp.com.
Callers were then told to “please stay on the line” if they wanted to speak to a representative. The phone system was also set to remind customers of their other support options and to apologize for the long (HP-induced) wait times at the fifth, 10th, and 13th minutes of the call.
The mandatory support call times have been lifted, per a company statement shared by HP spokesperson Katie Derkits:
We’re always looking for ways to improve our customer service experience. This support offering was intended to provide more digital options with the goal of reducing time to resolve inquiries. We have found that many of our customers were not aware of the digital support options we provide. Based on initial feedback, we know the importance of speaking to live customer service agents in a timely fashion is paramount. As a result, we will continue to prioritize timely access to live phone support to ensure we are delivering an exceptional customer experience.
HP didn’t immediately clarify when it removed the wait times. Some HP workers were reportedly unhappy with the mandatory hold times, with an anonymous “insider” in HP’s European operations telling The Register, per its Thursday report: “Many within HP are pretty unhappy [about] the measures being taken and the fact those making decisions don’t have to deal with the customers who their decisions impact.”
...
Read the original on arstechnica.com »
From bad manners to outright taboo, there are certain ways of using chopsticks that are considered to go against dining etiquette. These various acts, known as kiraibashi, are listed below.
To raise the chopsticks above the height of one’s mouth.
To clean the chopsticks in soup or beverages.
!!! (Serious) To pass food from one pair of chopsticks to another. This is taboo due to the custom after a cremation service of picking up remains and passing them between chopsticks.
To hold out one’s bowl for more while still holding chopsticks.
To keep putting the chopsticks into the same side dishes. It is proper etiquette to first eat rice, move on to eat from a side dish, eat rice again, and then eat from a different side dish.
To pick up food with the chopsticks and then put it back without taking it.
To hold the chopsticks between both hands when expressing thanks for the food. It is considered rude to hold objects in your hands when in prayer and it is taboo to hold the chopsticks while saying Itadakimasu, a phrase said before eating, giving thanks for the life of the food.
To use the chopsticks to push food deep inside one’s mouth.
To drop the chopsticks while eating.
To turn the chopsticks around when serving food so that the tips of the chopsticks that have touched one’s mouth do not touch the food.
To place one’s mouth against the side of a dish and push food in with the chopsticks. This can also mean to use the chopsticks to scratch one’s head or other parts of the body.
To take the tips of the chopsticks in one’s mouth.
To use the chopsticks to pick something out from near the bottom of the dish.
To rub waribashi (disposable chopsticks) together to remove splinters.
To use the chopsticks to stir the food around to find something.
To use the chopsticks to stab food and skewer it.
To point at people and things using chopsticks.
To use one’s own chopsticks instead of serving chopsticks to take food from a large serving dish.
After eating the top half of a fish, to use the chopsticks to keep eating by poking between the bones instead of removing them.
To use the chopsticks to keep poking food around.
To hold chopsticks together and tap them on a dish or the top of the table to align the tips.
To make a noise by tapping chopsticks on a dish.
!!! (Serious) To stand chopsticks upright in a bowl of rice. This is taboo, as it is the way rice is presented as a Buddhist funeral offering.
To use chopsticks that are made of different materials (for example, one made from wood and the other made from bamboo).
To hold one chopstick in each hand and use them like a knife and fork to tear or cut food into smaller pieces.
To place the chopsticks on the table with the tips pointing to the right.
To allow sauce or soup to drip from the tips of the chopsticks when eating (namidabashi; namida means “tears”).
To grip both chopsticks in a fist.
To place the chopsticks like a bridge across the top of a dish to show one is finished. Chopsticks should be placed on the hashioki (chopstick rest).
To use chopsticks to push aside food that one does not want to eat.
To raise the tips of the chopsticks higher than the back of one’s hand.
To shake off soup, sauce, or small bits of food from the tips of the chopsticks.
To keep one’s chopsticks hovering over the dishes, unable to decide which food to eat.
To stir soup with the chopsticks.
To put chopsticks sideways in one’s mouth instead of placing them on the table when moving a dish.
To bite off and eat grains of rice that are stuck to the chopsticks.
To hold both chopsticks and a dish in one hand at the same time.
To use a chopstick like a toothpick.
To line the chopsticks up together and use them like a spoon to scoop up food.
To pull a dish toward oneself using chopsticks.
...
Read the original on www.nippon.com »
Gov. Tina Kotek visited Estacada High School to hear how her cell phone ban has been going. (Staff photo: Christopher Keizur)
Gov. Tina Kotek said she chose Estacada High for her visit because of the positive things happening within the district. (Staff photo: Christopher Keizur)
There was plenty of uncertainty and debate about the effectiveness of a cell phone ban decreed by executive order last summer.
But at least in Estacada, the policy has earned two thumbs up, including approval from a “grumpy old teacher.”
Jeff Mellema is a language arts teacher at Estacada High School. He has worked in the building for 24 years, and he said the new policy that prohibits students from using their phones during the day has been a breath of fresh air.
“There is so much better discourse in my classroom, be it personal or academic,” Mellema said. “Students can’t avoid those conversations anymore with their phones.”
“This ban has brought joy back to this old, grumpy teacher,” he added with a smile.
That is the kind of feedback Gov. Tina Kotek was hoping for as she visited Estacada High School on Wednesday afternoon, March 18. Her goal was to visit classrooms, speak with administrators, and meet with students one-on-one to hear about the effectiveness of her phone policy.
“I knew when I put out the order, not everyone would love it from day one,” Gov. Kotek said. “I appreciate all the feedback today.”
“You have an amazing school. Go Rangers,” she added with a smile.
For years, educators had reported that cell phones were disruptive in classrooms and hindered effective teaching. Research supported those anecdotal claims, showing that phones undermine students’ ability to focus, even when they are just sitting on the desk, unused.
So the governor issued her executive order prohibiting cell phone use by students during the school day in Oregon’s K-12 public schools. To help implement the ban, her office worked with the Oregon Department of Education to share model policies for schools that already have prohibitions in place, as well as guidance on implementation flexibility.
“We are grateful to have Governor Kotek here today to see with her own eyes the positive effect this has had,” said Estacada Superintendent Ryan Carpenter. “We can now demand this expectation for our students’ well-being and success.”
Since that order was issued, every public school district in Oregon has come into compliance with the ban.
“The goal is for every student to have the best opportunity to be successful,” Kotek said. “They need to know how to talk to people, learn, and go out into the world.”
The governor visited two classrooms during her trip to Estacada High — Mr. Schaenman’s history class and Mrs. Hannet’s algebra class. Gov. Kotek assured the kids that she still uses algebra in her day-to-day life, no matter how unlikely that may seem to the youngsters.
In the classrooms, she was able to take a straw poll around the cell phone ban and then get specific, direct feedback from the kids.
Overall, it was positive. The Rangers said they noticed changes in how they interact with teachers and peers. They don’t feel that “siren’s song” tug of their phones as often, and the changes are bleeding into everyday life as well — think fewer reminders to put phones away during family dinners. Phones had also been a source of bullying and online toxicity during the school day.
There are some hiccups. The students spoke about difficulties in tracking busy schedules. Many athletes relied on their phones for practice times and locations. Some advanced placement kids said the overzealous programs monitoring school laptops blocked access to needed resources for studying/researching schoolwork. There is even a strange quirk with school-provided tech that prevents them from accessing their calculators.
“Maybe the filters are too strong right now,” Gov. Kotek said. “That is why we are working with the districts to best implement the policy.”
The kids also weighed in on the debate around the extent of the ban. The two options bandied about in Salem were a “bell-to-bell” policy or a classroom-only ban; the latter would allow kids to use their phones during passing periods and lunch. Several advocated for that change.
That mirrored the debate within the Oregon Legislature, which ultimately ended in a stalemate and led to Gov. Kotek’s executive order.
“When you make a decision like this, you don’t know how it will ultimately work,” Kotek told the students. “I appreciate you adapting to the situation and making it work for you.”
While things could change in the future, the governor is pleased with the early results. The phone ban is here to stay.
Estacada School District is reveling in its status as the public school “Golden Child.”
The visit from the governor is the latest feather in the cap of the rural, small district. In 2025, Estacada High had a 92.5% graduation rate. That is a stunning turnaround from a record-low of 38.5% in 2015. The district credits policies aimed at retaining talented teachers and empowering students to take a more active role in their learning.
“We are proud of these results,” Carpenter said. “This is a reflection and reward for a ton of hard work. This district literally changed its stars to be seen as a true academic powerhouse in Oregon.”
That mindset continues with the cell phone ban. Like many others, Estacada had a version of this in place before the official edict. But the governor’s push empowered administrators and teachers to fully embrace the ban.
“Any policy is only as good as the teachers who enforce it,” Carpenter said.
In crafting its policy, Estacada incorporated feedback from parents. That led to some key decisions around the cell phone ban. Rather than use pouches or lockers, students are allowed to keep their phones safely stored in their backpacks. That was for two reasons — it allows students to contact loved ones during emergencies, and many parents use phone trackers to keep tabs on their kids.
The district has also leaned on direct, immediate communication. The flow of information reaches parents directly, avoiding some of the miscommunication that occurred in the past.
“Even I’m surprised by the impact this has had,” Kotek said. “I’m thankful for the educators who took up the charge when I said we’ve got to do this.”
“We can model what Estacada is doing for other districts across the state,” she added.
...
Read the original on portlandtribune.com »
Ghostling is a demo project meant to highlight a minimal, functional terminal built on the libghostty C API in a single C file.
The example uses Raylib for windowing and rendering. It is single-threaded (although libghostty-vt supports threading) and uses a 2D graphics renderer instead of a direct GPU renderer like the primary Ghostty GUI. This is to showcase the flexibility of libghostty and how it can be used in a variety of contexts.
Libghostty is an embeddable library extracted from Ghostty’s core, exposing a C and Zig API so any application can embed correct, fast terminal emulation.
Ghostling uses libghostty-vt, a zero-dependency library (not even libc) that handles VT sequence parsing, terminal state management (cursor position, styles, text reflow, scrollback, etc.), and renderer state management. It contains no renderer drawing or windowing code; the consumer (Ghostling, in this case) provides its own. The core logic is extracted directly from Ghostty and inherits all of its real-world benefits: excellent, accurate, and complete terminal emulation support, SIMD-optimized parsing, leading Unicode support, highly optimized memory usage, and a robust fuzzed and tested codebase, all proven by millions of daily active users of Ghostty GUI.
Despite Ghostling being a minimal, thin layer above libghostty, look at all the features you do get:
* Unicode and multi-codepoint grapheme handling (no shaping or layout)
* And more. Effectively all the terminal emulation features supported by Ghostty!
These features aren’t properly exposed by libghostty-vt yet but will be:
These are things that could work but haven’t been tested or aren’t implemented in Ghostling itself:
This list is incomplete and we’ll add things as we find them.
libghostty is focused on core terminal emulation features. As such, you don’t get features that are provided by the GUI above the terminal emulation layer, such as:
* Search UI (although search internals are provided by libghostty-vt)
These are the things that libghostty consumers are expected to implement on their own, if they want them. This example doesn’t implement them, in order to stay as minimal as possible.
Requires CMake 3.19+, a C compiler, and Zig 0.15.x on PATH. Raylib is fetched automatically via CMake’s FetchContent if not already installed.
cmake -B build -G Ninja
cmake --build build
./build/ghostling
cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build
After the initial configure, you only need to run the build step:
cmake --build build
libghostty-vt has a fully capable and proven Zig API. Ghostty GUI itself uses this and is a good — although complex — example of how to use it. However, this demo is meant to showcase the minimal C API since C is so much more broadly used and accessible to a wide variety of developers and language ecosystems.
libghostty-vt has a C API and can have zero dependencies, so it can be used with minimally thin bindings in basically any language. I’m not sure yet if the Ghostty project will maintain official bindings for languages other than C and Zig, but I hope the community will create and maintain bindings for many languages!
No no no! libghostty has no opinion about the renderer or GUI framework used; it’s even standalone WASM-compatible for browsers and other environments.
libghostty provides a high-performance render state API which only keeps track of the state required to build a renderer. This is the same API used by Ghostty GUI for Metal and OpenGL rendering and in this repository for the Raylib 2D graphics API. You can layer any renderer on top of this!
I needed to pick something. Really, any build system and any library could be used. CMake is widely used and supported, and Raylib is a simple and elegant library for windowing and 2D rendering that is easy to set up. Don’t get bogged down in these details!
...
Read the original on github.com »
We rewrote our Rust WASM Parser in TypeScript - and it got 3x Faster
We built the openui-lang parser in Rust and compiled it to WASM. The logic was sound: Rust is fast, WASM gives you near-native speed in the browser, and our parser is a reasonably complex multi-stage pipeline. Why wouldn’t you want that in Rust?
Turns out we were optimising the wrong thing.
The openui-lang parser converts a custom DSL emitted by an LLM into a React component tree. It runs on every streaming chunk — so latency matters a lot. The pipeline has six stages:
* Mapper: converts internal AST into the public OutputNode format consumed by the React renderer
Every call to the WASM parser pays a mandatory overhead regardless of how fast the Rust code itself runs:
The Rust parsing itself was never the slow part. The overhead was entirely in the boundary: copy string in, serialize result to JSON string, copy JSON string out, then V8 deserializes it back into a JS object.
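As a rough illustration of that call path, here is a minimal TypeScript sketch. It assumes a wasm-bindgen-style export named parse_to_json and a simplified ParseResult shape; both are illustrative stand-ins, not the actual openui-lang bindings.

```typescript
// Hypothetical sketch of the original WASM call path: every call pays for a
// string copy in, a JSON string copy out, and a JSON.parse on the JS side.
import { parse_to_json } from "./pkg/openui_parser_wasm"; // assumed wasm-bindgen glue

// Simplified stand-in for the real OutputNode tree consumed by the renderer.
interface ParseResult {
  nodes: unknown[];
}

function parseViaWasm(source: string): ParseResult {
  // 1. The generated glue copies `source` into WASM linear memory.
  // 2. Rust parses it and serialises the result with serde_json::to_string().
  // 3. The JSON string is copied back out to the JS heap.
  const json: string = parse_to_json(source);
  // 4. V8's native JSON.parse materialises the object tree in one pass.
  return JSON.parse(json) as ParseResult;
}
```

None of that per-call overhead shrinks when the Rust inside the module gets faster.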
The natural question was: what if WASM returned a JS object directly, skipping the JSON serialization step? We integrated serde-wasm-bindgen which does exactly this — it converts the Rust struct into a JsValue and returns it directly.
It wasn’t cheaper. Here’s why: JS cannot read a Rust struct’s bytes from WASM linear memory as a native JS object — the two runtimes use completely different memory layouts. To construct a JS object from Rust data, serde-wasm-bindgen must recursively materialise the Rust data into real JS arrays and objects, which involves many fine-grained conversions across the runtime boundary per parse() invocation.
Compare that to the JSON approach: serde_json::to_string() runs in pure Rust with zero boundary crossings, produces one string, one memcpy copies it to the JS heap, then V8’s native C++ JSON.parse processes it in a single optimised pass. Fewer, larger, and more optimised operations win over many small ones.
We ported the full parser pipeline to TypeScript. Same six-stage architecture, same ParseResult output shape — no WASM, no boundary, runs entirely in the V8 heap.
What is measured: A single parse(completeString) call on the finished output string. This isolates per-call parser cost.
How it was run: 30 warm-up iterations to stabilise JIT, then 1000 timed iterations using performance.now() (µs precision). The median is reported. Fixtures are real LLM-generated component trees serialised in each format’s real streaming syntax.
* simple-table — root + one Table with 3 columns and 5 rows (~180 chars)
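A minimal sketch of that measurement loop, assuming a generic parse entry point that stands in for either implementation (the real harness and fixture loading are not shown):

```typescript
declare function parse(input: string): unknown; // either parser implementation

// Warm up the JIT, then time many iterations and report the median in µs.
function benchmark(fixture: string, warmup = 30, iterations = 1000): number {
  for (let i = 0; i < warmup; i++) parse(fixture); // let the JIT stabilise

  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    parse(fixture);
    samples.push((performance.now() - start) * 1000); // ms -> µs
  }

  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)]; // median
}
```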
Eliminating WASM fixed the per-call cost, but the streaming architecture still had a deeper inefficiency.
The parser is called on every LLM chunk. The naïve approach accumulates chunks and re-parses the entire string from scratch each time:
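In sketch form (a simplified illustration, not the actual openui-lang code; parse stands in for the full one-shot parser):

```typescript
declare function parse(input: string): unknown; // full one-shot parser

// Naïve streaming: append each chunk, then re-parse everything seen so far.
class NaiveStream {
  private buffer = "";

  push(chunk: string): unknown {
    this.buffer += chunk;
    // Chunk k re-parses roughly k * chunkSize characters, so a stream of N
    // chunks parses O(N²) characters in total.
    return parse(this.buffer);
  }
}
```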
For a 1000-char output delivered in 20-char chunks: 50 parse calls processing a cumulative total of ~25,000 characters. O(N²) in the number of chunks.
Statements terminated by a depth-0 newline are immutable — the LLM will never come back and modify them. We added a streaming parser that caches completed statement ASTs:
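A simplified sketch of the idea, assuming statements end at a newline seen while the nesting depth is zero, and assuming a hypothetical parseStatement helper that parses a single statement (the real parser tracks depth as part of its own lexing):

```typescript
declare function parseStatement(statement: string): unknown; // single-statement parser (assumed)

class IncrementalStream {
  private completed: unknown[] = []; // cached ASTs of finished statements
  private tail = "";                 // trailing, still-growing statement text
  private depth = 0;                 // current bracket nesting depth

  push(chunk: string): unknown[] {
    for (const ch of chunk) {
      if ("([{".includes(ch)) this.depth++;
      else if (")]}".includes(ch)) this.depth--;

      if (ch === "\n" && this.depth === 0) {
        // A depth-0 newline ends a statement: parse it once and cache it.
        if (this.tail.trim()) this.completed.push(parseStatement(this.tail));
        this.tail = "";
      } else {
        this.tail += ch;
      }
    }
    // Only the in-progress tail is re-parsed on each chunk.
    const partial = this.tail.trim() ? [parseStatement(this.tail)] : [];
    return [...this.completed, ...partial];
  }
}
```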
Completed statements are never re-parsed. Only the trailing in-progress statement is re-parsed per chunk. O(total_length) instead of O(N²).
What is measured: The total parse overhead accumulated across every chunk call for one complete document. This is different from the one-shot benchmark — it measures the sum of all parse calls during a real stream, not a single call. This is the number that affects actual user-perceived responsiveness.
How it was run: Documents are replayed in 20-char chunks. Each chunk triggers a parse() (naïve) or push() (incremental) call. Total time across all calls is recorded. 100 full-stream replays, median taken.
The simple-table fixture is a single statement — there’s nothing to cache, so both approaches are equivalent. The benefit scales with the number of statements because more of the document gets cached and skipped on each chunk.
The one-shot table shows 13.4µs for contact-form; the streaming table shows 316µs (naïve). These are not contradictory — they measure different things:
* 13.4µs = cost of one parse() call on the complete 400-char string
* 316µs = total cost of ~20 parse() calls during the stream (chunk 1 parses 20 chars, chunk 2 parses 40 chars, …, chunk 20 parses 400 chars — cumulative sum of all those growing calls)
This experience sharpened our thinking on the right use cases for WASM:
✅ Compute-bound with minimal interop: image/video processing, cryptography, physics simulations, audio codecs. Large input → scalar output or in-place mutation. The boundary is crossed rarely.
✅ Portable native libraries: shipping C/C++ libraries (SQLite, OpenCV, libpng) to the browser without a full JS rewrite.
❌ Parsing structured text into JS objects: you pay the serialization cost either way. The parsing computation is fast enough that V8’s JIT eliminates any Rust advantage. The boundary overhead dominates.
❌ Frequently-called functions on small inputs: if the function is called 50 times per stream and the computation takes 5µs, you cannot amortise the boundary cost.
Profile where time is actually spent before choosing the implementation language.
For us, the cost was never in the computation - it was always in data transfer across the WASM-JS boundary.
“Direct object passing” through serde-wasm-bindgen is not cheaper.
Constructing a JS object field-by-field from Rust involves more boundary crossings than a single JSON string transfer, not fewer. The boundary crossings happen inside the single FFI call, invisibly.
Algorithmic complexity improvements dominate language-level optimisations.
Going from O(N²) to O(N) in the streaming case had a larger practical impact than switching from WASM to TypeScript.
WASM and JS do not share a heap.
WASM has a flat linear memory (WebAssembly.Memory) that JS can read as raw bytes, but those bytes are Rust’s internal layout - pointers, enum discriminants, alignment padding - completely opaque to the JS runtime. Conversion is always required and always costs something.
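A small illustration of that boundary, assuming the module exports its memory (the export name is the conventional one, but it is an assumption here):

```typescript
// JS can view WASM linear memory as raw bytes, but a Rust struct stored there
// follows Rust's layout, which V8 cannot interpret as an object.
async function inspectLinearMemory(wasmBytes: BufferSource): Promise<number> {
  const { instance } = await WebAssembly.instantiate(wasmBytes, {});
  const memory = instance.exports.memory as WebAssembly.Memory; // assumed export

  const bytes = new Uint8Array(memory.buffer);
  // Somewhere in these bytes sit pointers, enum discriminants, and alignment
  // padding. Turning them into a JS object always needs an explicit conversion
  // step (a JSON string, a serde-wasm-bindgen traversal, or hand-written accessors).
  return bytes.length;
}
```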
...
Read the original on www.openui.com »