10 interesting stories served every morning and every evening.
You’re happily working away, fingers flying, deep in flow, and suddenly, boink, your session has expired. You sigh, re-enter your password (again), complete an MFA challenge (again), maybe approve an email notification (again), and finally — access restored. Until next time.
This wasn’t so bad when it was just passwords; we all got pretty fast at retyping our passwords. But all those MFA challenges really slow us down. And MFA fatigue attacks, a growing vector for phishing, get worse as more legitimate MFA requests arrive.
We all used to believe that changing passwords often was a good idea; turns out the opposite is true. Similarly, we all used to believe that making people log in frequently was good security. If authentication is good, then surely more authentication is better, right? Like taking vitamins — one a day is good, so twenty must be great! Except, well, that’s not how anything works.
Security isn’t about how often you log in. It’s about how well your access is managed in the first place, how fast we can react to policy changes on your account, and how confident we are that your key hasn’t been leaked since the last auth. The good news? You can get strong security guarantees without making users miserable.
Authentication usually boils down to one of two things:
* Are you still in physical possession of the device? (For example, Windows Hello PINs, YubiKeys, or smart cards; tests which anyone physically present can likely pass.)
* Are you the right person? (Passwords, Face ID, Touch ID — things that supposedly nobody but you can replicate, but which don’t prove you’re physically near a given device.)
Identity providers (IdPs) focus mostly on who you are, since their whole job is identity verification. If they require a YubiKey, they might also check device possession, but that’s not really their main gig.
Integrated authentication systems like Apple’s Face ID and Touch ID, and tools like Windows Hello, are interesting because they do both at once. They’re amazing as long as they are securely enrolled and their keys are held in a highly trusted, malware-resistant TPM.
So why do frequent re-logins exist? Usually, it’s because admins aren’t confident that changes will take effect immediately. Sometimes, especially with SAML, an IdP is configured to send policy attributes to apps only during the user-interactive login process, which means they can’t update without a new login. How long are we vulnerable if someone leaves the company or loses their laptop?! But it doesn’t have to be that way.
Most attackers aren’t lurking in your office, waiting for you to step away. They’re remote, so their attack vector is phishing — it’s pretty easy for them to steal your password. As an administrator, the best policy is to assume remote attackers already have your password, and build your systems accordingly. That means the second factor (SMS, email, or preferably a YubiKey or equivalent) is the most important defense against remote attacks.
But there are also physical attacks. If someone steals your laptop, usually your screen is already locked. That means open browser sessions won’t do them much good. Random cafe thieves probably don’t have your password. If they do, more logins aren’t much of a defense.
In fact, frequent logins give attackers, both local and remote, more chances to steal your credentials. That’s deadly for security, in addition to creating annoyance for users.
Modern operating systems already handle this problem with screen locks. If your screen locks when you step away, your OS is doing exactly what a frequent login prompt would do, except without annoying you every few hours. Consider enforcing automatic screen lock when you walk away. If your screen is locked, all those other open sessions are safe too.
Some web apps log you out quickly under the assumption that you might be on a shared computer. That makes sense if you’re using an Internet café in 2006. But for most people, web session expiry is just a relic of a bygone era.
A 15-minute session duration makes sense for something highly sensitive and disproportionately valuable, like your bank, where you want that little bit extra. But the “aggressively mid-range” expiry times on most websites, like 7 days or 30 days, don’t help anyone with anything. They’re too long to stop real session hijacking before the damage is done, but so short they’re constantly annoying. It’s the worst of both worlds. Security theatre.
If you really need to confirm someone is at their keyboard, you don’t want a login prompt every few hours — you want a check right before a sensitive action. That’s why Tailscale SSH’s check mode and the Tailscale Slack Accessbot exist: they verify that the user is there only when it actually matters, not just on an arbitrary timer.
And yes, set that OS screen lock aggressively! Now that most OSes can unlock with just a fingerprint or face, there’s no reason to leave your screen unlocked when you walk away.
Security should be continuous, not tied to arbitrary interactive cycles. Instead of nagging users, tools like device posture checks and SCIM-based access control can update security attributes and policies in real time, in the background, without users doing anything. That means you can have updated policies within seconds or minutes; you don’t have to compromise between short reauth times (super annoying) and longer ones (less protection).
* If your device goes offline, is marked lost, or fails a security check, access gets revoked instantly.
* If your role or employment status changes, your access updates automatically.
This approach is smarter and more secure than making users re-enter their credentials over and over.
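To make that concrete, here is a minimal sketch (not Tailscale's implementation) of the kind of SCIM 2.0 deprovisioning call an identity provider can send to a downstream app the moment someone leaves: the app flips the account to inactive and revokes access without waiting for any re-login. The base URL, user ID, and bearer token are placeholders, and Python's requests library is assumed.

```python
import requests

# Hypothetical SCIM 2.0 "deactivate user" request, as an IdP might send it.
# Base URL, user ID, and bearer token are placeholders for illustration.
SCIM_BASE = "https://app.example.com/scim/v2"
USER_ID = "2819c223-7f76-453a-919d-413861904646"
TOKEN = "placeholder-token"

payload = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{"op": "replace", "path": "active", "value": False}],
}

resp = requests.patch(
    f"{SCIM_BASE}/Users/{USER_ID}",
    json=payload,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
    timeout=10,
)
resp.raise_for_status()
# From here the app can tear down sessions and tokens immediately,
# instead of waiting for the next forced login to notice the change.
```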
Frequent logins aren’t making you safer. They’re just annoying you into worse security habits (like password reuse, clicking phishing links, and MFA fatigue). The best security happens quietly in the background, ensuring safety without getting in the way.
At Tailscale, we believe in security that’s adaptive, intelligent, and actually useful — not just security theater. Instead of forcing pointless logins, we make sure authentication happens at the right moments, with as little friction as possible. If you use Tailscale to access your other apps, through tsidp or App Connector, our real-time security checks can flow through to all your other login sessions as well. Even in legacy apps that don’t understand SCIM or device posture.
...
Read the original on tailscale.com »
I started my business when I was 21 (I’m now 39). I built custom apps and did consulting for accounting software, invoicing systems, and point-of-sale tools.
Procrastination has always been my biggest struggle. The only way I could get things done was by relying on stress, coming from clients or financial pressure. That worked for a while, but it cost me my health (I burned out) and my business (I went bankrupt).
I noticed that I have no problem spending hours fully focused on a video game. If I can focus on a game, then my brain must also be capable of focusing on other tasks. So I naturally started asking myself why it’s so easy to get hooked by a game, and how I could apply that same effect to tasks I struggle to complete.
It turns out that many of the struggles I’ve had throughout my life are linked to ADHD (attention deficit hyperactivity disorder). The goal of this article isn’t to focus on ADHD, but it’s important to mention. ADHD affects many people, often without them even knowing it, in various ways and at different levels.
To understand what makes a video game addictive, let’s take first-person shooters (FPS) as an example. FPS games are among the most popular and addictive games.
An FPS is built around a simple loop: Aim → Shoot → Hit or Miss. This is the game loop. The outcome, hit or miss, is immediately shown through sounds or visuals. This immediate reaction is called feedback.
For a game to be addictive, the game loop must repeat frequently and give strong feedback. Imagine an FPS where you only meet an enemy every 30 minutes. That wouldn’t be engaging. The loop must repeat quickly to keep you interested.
Feedback in FPS games has improved a lot over time. Early FPS games only showed a simple blood splash when you hit an enemy. Modern games provide much stronger feedback. Now, when you hit an enemy, you might see:
* the crosshair briefly changes to confirm the hit
* damage numbers pop up above the enemy
Note that a game can have several game loops. For example, there can be another loop for finding random equipment for your character. Feedback can also come from outside the main loop, such as loot boxes or side missions.
For a game loop to be addictive, other factors matter too. You need to personally enjoy the type of game, and the challenge must match your skill level. If it’s too easy or too hard, you won’t be hooked. There are dozens of other factors, unique to each player’s personality and preferences.
When all these elements come together, each game loop gives you a small dose of dopamine and creates a state of flow. In this state, we’re fully focused, we lose track of time, and it’s easier to handle complex tasks.
One final point: Video games are easy to start. They require very little motivation or discipline to start playing.
* The game loop should repeat often.
* Feedback from the loop should be strong.
* Games are easy to start, even with low motivation.
Based on what we’ve just seen, our real-life game loop is made up of completing tasks and habits throughout the day.
Our first goal is to repeat this game loop as many times as possible each day. A simple solution is breaking tasks into smaller parts. Let’s take an example everyone can relate to: cleaning the house. At first glance, you might break down “Clean the house” like this:
* Clean the house
If these three tasks take about an hour, that’s only three game loops. Another problem is starting a task. Cleaning your bedroom might take 20 minutes, which feels too long when you’re not motivated. So you procrastinate.
My solution is breaking things down even further:
* Clean the house
The rule is simple: the more you procrastinate on a task, the more you should break it down into micro-tasks, even ones that take just 2 to 5 minutes in extreme cases.
Now that we have our game loop, we are going to boost the feedback. To do this, I recommend using sticky notes. Write each task on a sticky note. When you finish the task, crumple the note into a ball and throw it into a clear jar.
This gives us extra feedback: crumpling the paper, the satisfying sound, and seeing our progress in the clear jar.
One important point: writing your task on a sticky note makes it real. It’s no longer just words on a screen. It’s harder to procrastinate on something physically in front of you.
* Use sticky notes for your tasks. (The task becomes hard to ignore.)
* Once the task is done, crumple the note into a ball. (Feedback)
To make this system easy and get your day started, begin with simple, routine tasks. Think about role-playing games. Early levels are easy and help build momentum for the harder levels later. That’s why it’s important to put your daily habits on sticky notes. Start your day with easy wins.
For example, the first sticky note waiting for me each morning is to make coffee. When I sit at my computer, my first task is a quick two-minute warmup. I spend one minute typing text in software to measure my typing speed. Then I spend one minute practicing keyboard shortcuts. I do these two tasks every morning without fail.
Between waking up and starting work, I complete around ten sticky notes with easy, routine habits. This builds momentum and makes it easier to keep working throughout the day.
I strongly recommend including these morning habits in your sticky note system. It allows you to start the day with quick and easy tasks. If your first task is the hardest of the day, you’re more likely to procrastinate. But when you start with your simple morning routine, you gain momentum that makes the rest of your day easier.
I also make sure to prepare these tasks the night before. This way, I can immediately start working without having to think about creating sticky notes in the morning.
Use sticky notes with your morning routine to start your day strong. Prepare your sticky notes the night before, so you can immediately start working in the morning without extra planning.
My main advice is to stay flexible with this method. I strongly recommend always starting your day with your first tasks and habits on sticky notes. However, this initial momentum might be enough for you to remain productive for the entire day.
If later in the day you notice you’re starting to procrastinate, immediately return to the system. Take a few minutes to refocus, clearly define the next 3 to 5 tasks, write them down on sticky notes, and start working on them right away. Repeat this as many times as necessary.
Some tasks can’t easily be broken into smaller steps. In that case, break them down by time. For example: “Clean for 10 minutes.”
For complex tasks, you might realize your original plan isn’t ideal once you begin working. That’s fine. You’re already in motion, so continue working flexibly, and use your existing sticky notes whenever you feel you’ve earned feedback.
You might also have tasks you’ve procrastinated on for months or even years, causing them to accumulate significantly. For example, maybe your inbox contains thousands of emails. You’ve probably imagined one day you’ll suddenly feel motivated and clear them all at once. That day will never come. Instead, create a daily sticky note that requires you to process all new emails plus a specific number of older emails. It might take months to clear your inbox, but at least you’ve stopped procrastinating.
Another scenario that regularly happens to me is planning about ten sticky notes, then completely forgetting about them. I end up doing the tasks but without using the sticky notes for feedback. That’s completely fine. In fact, it means I was in a productive flow state, and my objective was accomplished!
I’m sure you’ve already read dozens of articles and watched countless YouTube videos about productivity, procrastination, and efficiency. Now you absolutely need to take action! No excuses.
Try this method! No sticky notes? Grab some paper and scissors, and prepare your tasks for tomorrow. Don’t have a clear jar? Use a regular glass. For the next few days, use this system and see if it helps. Do you find it easier to complete your tasks? Do you forget fewer important things?
Keep testing for a few weeks. Think of it like going to the gym. You won’t become a bodybuilder with huge muscles after your first workout. Turn this system into a habit!
Here’s why this system works so well:
* Breaking tasks into small steps allows us to repeat our game loop frequently.
* Starting the day with easy wins builds momentum and makes it easier to continue.
* Crumpling completed tasks and tossing them into a clear jar provides satisfying feedback.
Let’s be clear. This system works extremely well. It’s better than anything else I’ve tried in my life. And trust me, I’ve tested many methods that promised to boost productivity. But one big problem remained. It took too long and was painful to write all those sticky notes. Writing only three tasks a day wasn’t enough. I needed twenty, thirty, or sometimes even more. On lazy days, I skipped writing notes, and my productivity collapsed.
One day, I saw a Reddit post about someone using a receipt printer to print daily tasks. I suddenly realized I had the same idea years ago while setting up checkout systems for clients. But, as usual, I procrastinated.
A receipt printer is perfect for replacing my sticky-note system. I can easily print dozens of habits. These printers are fast, and they automatically cut each ticket. They’re also cheap to run. They don’t need ink, only paper. A good printer (for example, the Epson TM-T20III) costs about $150, and each 80-meter roll costs around $3. One roll can print thousands of tasks.
I created a simple script to print my daily habits. I have a separate list for each weekday because my habits and tasks vary.
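The script itself isn't shown in the article, so here is a minimal sketch of the idea, assuming the python-escpos library and an Epson-style USB printer; the USB vendor/product IDs and the habit lists are placeholders.

```python
from datetime import date

from escpos.printer import Usb  # python-escpos; assumed dependency

# One habit list per weekday (Monday = 0). Contents are placeholders.
HABITS = {
    0: ["Make coffee", "1-minute typing warmup", "1-minute shortcut practice"],
    1: ["Make coffee", "Review yesterday's notes", "Plan top 3 tasks"],
    # ... remaining weekdays ...
}

def print_daily_tickets() -> None:
    # USB vendor/product IDs differ per printer model; these are placeholders.
    printer = Usb(0x04B8, 0x0E28)
    for habit in HABITS.get(date.today().weekday(), []):
        printer.text(habit + "\n")
        printer.cut()  # one ticket per habit, like one sticky note each

if __name__ == "__main__":
    print_daily_tickets()
```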
Since the printer can handle a large number of tasks, I added habits and tasks I didn’t usually write on sticky notes, making my system even more robust.
I still use sticky notes for my main work, but much of my daily routine is now managed by my receipt printer. After a few weeks, I noticed something important: I haven’t missed a single habit, not even one.
My productivity had already improved significantly with the sticky-note system. But adding the printer made me truly consistent, boosting my productivity even more.
Ironically, I previously built a habit-tracking app but always forgot to use it. With this new system, I haven’t missed tracking my habits even once.
Summary of the benefits of the receipt printer:
* Printing tasks using a receipt printer removes the friction of daily preparation. With this system, it will be easy to print more tasks, increasing the system’s efficiency.
* Massive reduction in the chances of skipping the system for a day due to procrastination.
But there’s still one problem. I still waste time writing tasks on sticky notes during the day. My printing script isn’t practical. If I want to add habits or new tasks, I have to edit the script manually. That’s inconvenient for everyday use, especially when tasks change daily. I needed a simpler solution.
My first idea was to connect existing software to my printer. But I’m not happy with most existing task apps. They don’t easily break down tasks into smaller pieces. Most task apps use a simple, single-level list. Visually splitting tasks into subtasks is impossible. Some task managers offer hierarchical task lists, allowing subtasks. But this creates another problem: the lists become very long and overwhelming.
A second issue is losing the big picture. Some tasks need breaking down, others don’t. This makes task lists messy and hard to manage.
My third problem is that I need software that’s extremely fast to use (ideally keyboard-only for even more speed), so I can create a lot of tasks without feeling frustrated by the UX.
I spent a lot of time searching for a solution. I tested multiple ideas, wasting countless hours. But I finally found a simple yet brilliant solution. Instead of displaying hierarchical tasks vertically, why not do it horizontally, dividing each level into columns? Each time I click a task, its subtasks appear clearly in the next column. This helps me keep the big picture, filtering tasks I don’t need right now.
You are currently reading the print version of this article. On the web version, you’ll find an interactive demo of the concept in this spot.
I built custom software based on this idea and connected it directly to my printer. When I procrastinate, I can quickly break a task into subtasks without messing with my project structure, because subtasks appear in another column. And I can easily print just that column.
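The software isn't public yet, but the core data model behind the column idea is easy to sketch: tasks form a tree, and selecting a task reveals its children in the next column rather than nesting them below. A small illustration (task names and structure are made up):

```python
# Tasks as a tree: each task name maps to a dict of its subtasks.
TASKS = {
    "Clean the house": {
        "Clean the bedroom": {"Make the bed": {}, "Pick up clothes": {}},
        "Clean the kitchen": {},
    },
    "Inbox": {"Process new emails": {}, "Process 20 old emails": {}},
}

def columns_for(selection: list[str]) -> list[list[str]]:
    """Return one column per level along the selected path, so subtasks
    show up in the next column instead of nesting below their parent."""
    columns, node = [list(TASKS)], TASKS
    for name in selection:
        node = node[name]
        columns.append(list(node))
    return columns

# Selecting "Clean the house", then "Clean the bedroom", yields three columns.
for column in columns_for(["Clean the house", "Clean the bedroom"]):
    print(column)
```

Printing "just that column" then simply means sending one of these lists to the receipt printer.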
This method, combined with the printer and my app, has changed my life over the past few months. I now have strong, steady productivity, which is a huge daily victory for someone with ADHD.
Without exaggerating, I believe I’ve doubled or tripled my productivity. Of course, before this system, I sometimes had very productive days. But I also had many days when I did almost no meaningful work tasks. Those very low productivity days have completely vanished from my life.
You can watch the video on the web version of this article.
I truly hope this article inspired you, and that you’ll try some of these ideas.
I plan to release my software publicly in the coming weeks. You can subscribe to my newsletter to get notified when it’s available.
...
Read the original on www.laurieherault.com »
Announcing Magistral — the first reasoning model by Mistral AI — excelling in domain-specific, transparent, and multilingual reasoning.
The best human thinking isn’t linear — it weaves through logic, insight, uncertainty, and discovery. Reasoning language models have enabled us to augment and delegate complex thinking and deep understanding to AI, improving our ability to work through problems requiring precise, step-by-step deliberation and analysis.
But this space is still nascent. A lack of specialized depth needed for domain-specific problems, limited transparency, and inconsistent reasoning in the desired language are just some of the known limitations of early thinking models.
Today, we’re excited to announce our latest contribution to AI research with Magistral — our first reasoning model. Released in both open and enterprise versions, Magistral is designed to think things through — in ways familiar to us — while bringing expertise across professional domains, transparent reasoning that you can follow and verify, along with deep multilingual flexibility.
A one-shot physics simulation showcasing gravity, friction and collisions with Magistral Medium in Preview.
Magistral is a dual-release model focused on real-world reasoning and feedback-driven improvement.
We’re releasing the model in two variants: Magistral Small, a 24B-parameter open-source version, and Magistral Medium, a more powerful, enterprise version.
* Magistral Medium scored 73.6% on AIME2024, and 90% with majority voting @64. Magistral Small scored 70.7% and 83.3% respectively.
* Suited for a wide range of enterprise use cases — from structured calculations and programmatic logic to decision trees and rule-based systems.
* With the new Think mode and Flash Answers in Le Chat, you can get responses at 10x the speed compared to most competitors.
The release is supported by our latest paper covering comprehensive evaluations of Magistral, our training infrastructure, reinforcement learning algorithm, and novel observations for training reasoning models.
As we’ve open-sourced Magistral Small, we welcome the community to examine, modify and build upon its architecture and reasoning processes to further accelerate the emergence of thinking language models. Our earlier open models have already been leveraged by the community for exciting projects like ether0 and DeepHermes 3.
Magistral is fine-tuned for multi-step logic, improving interpretability and providing a traceable thought process in the user’s language, unlike general-purpose models.
We aim to iterate the model quickly starting with this release. Expect the models to constantly improve.
The model excels in maintaining high-fidelity reasoning across numerous languages. Magistral is especially well-suited to reason in languages including English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese.
Prompt and response in Arabic with Magistral Medium in Preview in Le Chat.
With Flash Answers in Le Chat, Magistral Medium achieves up to 10x faster token throughput than most competitors. This enables real-time reasoning and user feedback, at scale.
Speed comparison of Magistral Medium in Preview in Le Chat against ChatGPT.
Magistral is ideal for general purpose use requiring longer thought processing and better accuracy than with non-reasoning LLMs. From legal research and financial forecasting to software development and creative storytelling — this model solves multi-step challenges where transparency and precision are critical.
Building on our flagship models, Magistral is designed for research, strategic planning, operational optimization, and data-driven decision making — whether executing risk assessment and modelling with multiple factors, or calculating optimal delivery windows under constraints.
Legal, finance, healthcare, and government professionals get traceable reasoning that meets compliance requirements. Every conclusion can be traced back through its logical steps, providing auditability for high-stakes environments with domain-specialized AI.
Magistral enhances coding and development use cases: compared to non-reasoning models, it significantly improves project planning, backend architecture, frontend design, and data engineering through sequenced, multi-step actions involving external tools or APIs.
Our early tests indicated that Magistral is an excellent creative companion. We highly recommend it for creative writing and storytelling, with the model capable of producing coherent or — if needed — delightfully eccentric copy.
Magistral Small is an open-weight model, and is available for self-deployment under the Apache 2.0 license. You can download it from:
You can try out a preview version of Magistral Medium in Le Chat or via API on La Plateforme.
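For the API route, a request through Mistral's official Python client might look roughly like the sketch below; the model identifier is a guess at the preview name, so check the La Plateforme documentation for the exact string.

```python
import os

from mistralai import Mistral  # official Mistral Python SDK, assumed installed

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# "magistral-medium-latest" is a placeholder identifier; the preview model
# name may differ -- see the La Plateforme docs.
response = client.chat.complete(
    model="magistral-medium-latest",
    messages=[{"role": "user", "content": "Walk me through your reasoning: what is 17 * 24?"}],
)

print(response.choices[0].message.content)
```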
Magistral Medium is also available on Amazon SageMaker, and soon on IBM WatsonX, Azure AI and Google Cloud Marketplace.
For enterprise and custom solutions, including on-premises deployments, contact our sales team.
...
Read the original on mistral.ai »
The jemalloc memory allocator was first conceived in early 2004, and has been in public use for about 20 years now. Thanks to the nature of open source software licensing, jemalloc will remain publicly available indefinitely. But active upstream development has come to an end. This post briefly describes jemalloc’s development phases, each with some success/failure highlights, followed by some retrospective commentary.
In 2004 I began work on the Lyken programming language in the context of scientific computing. Lyken was an eventual dead end, but its manual memory allocator was functionally complete by May 2005. (The garbage collector which was to leverage its features was never completed.) In September 2005 I started integrating the allocator into FreeBSD, and in March 2006 I removed the allocator from Lyken, in favor of thin wrappers around system allocator functionality.
Why remove the memory allocator from Lyken after so much effort went into it? Well, once the allocator was integrated into FreeBSD, it became apparent that the only feature missing from the system allocator was a mechanism for tracking allocation volume in order to trigger per thread garbage collection. And that could be implemented via thin wrappers using thread-specific data and dlsym(3). Interestingly, many years later jemalloc even added the statistics gathering that Lyken would have needed.
Back in 2005 the transition to multi-processor computers was ongoing. FreeBSD had Poul-Henning Kamp’s excellent phkmalloc memory allocator, but that allocator had no provisions for parallel thread execution. Lyken’s allocator seemed like an obvious scalability improvement, and with encouragement from friends and colleagues I integrated what quickly became known as jemalloc. Ah, but not so fast! Shortly after integration it became apparent that jemalloc had terrible fragmentation issues under some loads, notably those induced by KDE applications. Just when I thought I was mostly done, this real-world failure called jemalloc’s viability into question.
In brief, the fragmentation issue arose from using a unified extent allocation approach (i.e. no size class segregation). I had taken basic inspiration from Doug Lea’s dlmalloc, but without the intertwined battle-tested heuristics that avoided many of the worst fragmentation issues. Much frantic research and experimentation ensued. By the time jemalloc was part of a FreeBSD release, its layout algorithms had completely changed to use size-segregated regions, as described in the 2006 BSDCan jemalloc paper.
In November 2007, Mozilla Firefox 3 was nearing release, and high fragmentation was an unresolved issue, especially on Microsoft Windows. Thus began a year of working with Mozilla on memory allocation. Porting jemalloc to Linux was trivial, but Windows was another matter. The canonical jemalloc sources were in the FreeBSD libc library, so we essentially forked jemalloc and added portability code, upstreaming anything that was relevant to FreeBSD. The entire implementation was still in one file, which reduced the friction of fork maintenance, but the implementation complexity definitely surpassed what is reasonable in a single file sometime during this phase of development.
Years later, Mozilla developers made significant contributions to the upstream jemalloc in an effort to move away from their fork. Unfortunately, Mozilla benchmarks consistently showed that the forked version outperformed the upstream version. I don’t know if this was due to overfitting to a local optimum or an actual indication of performance regression, but it remains one of my biggest jemalloc disappointments.
When I started work at Facebook in 2009, I was surprised to discover that the biggest impediment to ubiquitous jemalloc use in Facebook infrastructure was instrumentation. Critical internal services were in the awkward situation of depending on jemalloc to keep memory fragmentation under control, but engineers needed to debug memory leaks with tcmalloc and the pprof heap profiling tool that is part of gperftools. pprof-compatible heap profiling functionality headlined the jemalloc 1.0.0 release.
jemalloc development migrated to GitHub and continued sporadically for the next few years as issues and opportunities arose. Other developers started contributing significant functionality. Version 3.0.0 introduced extensive testing infrastructure, as well as Valgrind support. The 4.x release series introduced decay-based purging and JSON-formatted telemetry. The 5.x series transitioned from “chunks” to “extents” to pave the way for better interaction with 2 MiB huge pages.
Somewhat more controversially, I removed Valgrind support in 5.0.0 because it was a significant maintenance complication (numerous tendrils in subtle places), and it was unused inside Facebook; other tools like pprof and MemorySanitizer dominated. I had received very little feedback about Valgrind support, and extrapolated that it was not being used. In retrospect, that seems not to have been the case. In particular, the Rust language directly incorporated jemalloc into compiled programs, and I think there was some overlap between Rust developers and Valgrind developers. People were angry. jemalloc was probably booted from Rust binaries sooner than the natural course of development might have otherwise dictated.
Facebook’s internal telemetry is a wonder to behold, and it is a massive boon to have performance data from myriad services informing memory allocator development. I don’t think it’s an accident that two of the fastest memory allocators of the past decade (tcmalloc and jemalloc) benefit from such data. Even “simple” things like fast-path optimizations are much easier to get right when there are aggregated Linux perf data on hand. Harder things like fragmentation avoidance are still hard, but if thousands of distinct workflows behave well with no outlier regressions, then a change is probably safe. jemalloc has benefited immensely from being integral to the Facebook infrastructure in terms of performance, resilience, and consistent behavior. Additionally, jemalloc’s own integrated statistics reporting capabilities arose directly in response to this ubiquitous telemetry environment, and this turned out to generally benefit both jemalloc development and non-Facebook application tuning/debugging far in excess of the implementation effort required.
During my last year at Facebook I was encouraged to build a small jemalloc team so that we could tackle some big tasks that would have been otherwise daunting. On top of major performance improvements, we got things like continuous integration testing and comprehensive telemetry. When I left Facebook in 2017, the jemalloc team carried on doing excellent development and maintenance work for several years, almost entirely without my involvement, under the leadership of my esteemed colleague, Qi Wang, and as evidenced by the commit history, with the excellent contributions of many others.
The nature of jemalloc development noticeably shifted around the time that Facebook rebranded itself as Meta. Facebook infrastructure engineering reduced investment in core technology, instead emphasizing return on investment. This is apparent in the jemalloc commit history. In particular, the seeds for principled huge page allocation (HPA) were sown way back in 2016! HPA work continued apace for several years, slowed, then stagnated as tweaks piled on top of each other without the requisite refactoring that keeps a codebase healthy. This feature trajectory recently cratered. The heartbreak for me is somewhat blunted since I have not been closely involved for years, but as a result of recent changes within Meta we no longer have anyone shepherding long-term jemalloc development with an eye toward general utility.
I don’t want to dwell on drama, but it is perhaps worth mentioning that we reached a sad end for jemalloc in the hands of Facebook/Meta even though most of the people involved were acting in good faith. Corporate cultures shift in compliance with both external and internal pressures. And people find themselves in impossible situations where the main choices are 1) make poor decisions under extreme pressure, 2) comply under extreme pressure, or 3) get routed around. As individuals we sometimes have enough influence to slow organizational degradation, maybe even contribute to isolated renaissances, but none of us can prevent the inevitable.
I remain very grateful to my former colleagues for all their excellent work on jemalloc, and Facebook/Meta in general for investing so much, for so long.
What now? As far as I am concerned, “upstream” jemalloc development has concluded. Meta’s needs stopped aligning well with those of external uses some time ago, and they are better off doing their own thing. Were I to reengage, the first step would be at least hundreds of hours of refactoring to pay off accrued technical debt. And I’m not sufficiently excited by what would come after to pay such a high upfront cost. Perhaps others will create viable forks, whether from the dev branch or from the 5.3.0 release (already three years old!).
In the sections above I mentioned several phase-specific failures, but there were some generic failures that surprised me despite a career focused on open source development.
* As mentioned, removing Valgrind caused some bad sentiment. But the root of the problem is lack of awareness about external uses and needs. I probably would have worked with others to preserve Valgrind support if I’d known that it mattered to anyone. As another example, I was completely unaware of jemalloc’s use as the Android memory allocator for perhaps two years. And years later, unaware of its replacement until after the fact.
* Even though jemalloc development remained completely out in the open (not siloed inside Facebook), the project never grew to retain primary contributors from other organizations. The Mozilla effort by Mike Hommey to move Firefox to the upstream jemalloc was a near miss. Efforts by others to transition to a CMake-based build system stalled multiple times, and never crossed the finish line. I knew from hard experience with Darwin that internally siloed open source projects cannot thrive (HHVM was a repeat lesson), but jemalloc needed more than open development to thrive as an independent project.
jemalloc was an odd diversion for me, since I have been a strong proponent of garbage collection over manual memory management for over 25 years. Personally I’m happy to be working again on garbage-collected systems, but jemalloc was a tremendously fulfilling project. Thank you to everyone who made this project so worthwhile, collaborators, supporters, and users alike.
...
Read the original on jasone.github.io »
I’m very excited to announce that I have recently joined Turso as a software engineer. For many in the field, including myself, getting to work on databases and solve unique challenges with such a talented team would be a dream job, but it is that much more special to me because of my unusual and unlikely circumstances. As difficult as it might be to believe, I am currently incarcerated and I landed this job from my cell in state prison. If you don’t know me, let me tell you more about how I got here.
Nearly two years have passed since I published How I got here to my blog. That post was my first real contact with the outside world in years, as I’d been off all social media and the internet since 2017. The response and support I would receive from the tech community caught me completely off guard.
A brief summary is that I’m currently serving prison time for poor decisions and lifestyle choices I made in my twenties, all related to drugs. Three years ago, I enrolled in a prison college program that came with the unique opportunity to access a computer with limited internet access. This immediately reignited a teenage love for programming, and a lightbulb lit up: this would be my way out of the mess I had gotten myself into over the past 15 years. I quickly outgrew the curriculum, preferring instead to spend ~15+ hours a day on projects and open source contributions.
Through fortunate timing and lots of hard work, I was selected to be one of the first participants in the Maine Dept of Corrections’ remote work program, where residents who meet certain requirements are allowed to seek out remote employment opportunities. I landed a software engineering job at a startup called Unlocked Labs building education solutions for incarcerated learners, while contributing to open source on the side. After just a year, I was leading their development team.
Last December I was between side-projects and browsing Hacker News when I discovered Project Limbo, an effort by Turso to rewrite SQLite from scratch. I’d never worked on relational databases, but some experience with a cache had recently sparked an interest in storage engines. Luckily for me I saw that the project was fairly young with plenty of low hanging fruit to cut my teeth on.
To put this entirely into perspective for some of you may be difficult, but in prison there isn’t exactly a whole lot to do and programming absolutely consumes my life. I either write code or manage Kubernetes clusters or other infrastructure for about 90 hours a week, and my only entertainment is a daily hour of tech/programming YouTube; mostly consisting of The Primeagen, whose story was a huge inspiration to me early on.
Through Prime, I had known about Turso since the beginning and had watched several interviews with Glauber and Pekka discussing their Linux kernel backgrounds and talking about the concept of distributed, multi-tenant SQLite. These were folks I’d looked up to for years and definitely could not have imagined that I would eventually be in any position to be contributing meaningfully to such an ambitious project of theirs. So needless to say, for those first PR’s, just the thought of a kernel maintainer reviewing my code had made me quite nervous.
Helping build Limbo quickly became my new obsession. I split my time between my job and diving deep into SQLite source code, academic papers on database internals, and Andy Pavlo’s CMU lectures. I was active on the Turso Discord but I don’t think I considered whether anyone was aware that one of the top contributors was doing so from a prison cell. My story and information are linked on my GitHub, but it’s subtle enough where you could miss it if you didn’t read the whole profile. A couple months later, I got a Discord message from Glauber introducing himself and asking if we could meet.
In January, Glauber’s tweet about our interaction caught the attention of The Primeagen, and he ended up reading my blog post on his stream, bringing a whole lot of new attention to it.
To this day I receive semi-regular emails either from developers, college kids or others who maybe have either gone through addiction or similar circumstances, or just want to reach out for advice on how to best start contributing to open source or optimize their learning path.
I’m incredibly proud to be an example to others of how far hard work, determination and discipline will get you, and will be forever grateful for the opportunities given to me by the Maine Dept of Corrections to even be able to work hard in the first place, and to Unlocked Labs for giving me a chance and hiring me at a time when most assuredly no-one else would.
I’m also incredibly proud to announce that I am now working for Turso full time, something I would never have dreamed would be possible just a few years ago. I’m very excited to be a part of the team and to get to help build the modern evolution of SQLite.
Although some recent bad news from the court means that I won’t be coming home as early as my family and I had hoped, my only choice is to view this as a blessing: for the next 10 months, I will instead be able to continue dedicating time and focus to advancing my career at a level that just wouldn’t be possible otherwise.
Thank you to everyone who has taken the time to reach out over the past couple years, to my team at Unlocked Labs, and especially my parents. Thanks to Turso for the opportunity and to all the other companies with fair chance hiring policies who believe that people deserve a second chance. This journey has been totally surreal and every day I am still in awe of how far my life has come from the life I lived even just a few years ago.
...
Read the original on turso.tech »
ROME (AP) — Spyware from a U.S.-backed Israeli company was used to target the phones of at least three prominent journalists in Europe, two of whom are editors at an investigative news site in Italy, according to digital researchers at Citizen Lab, citing new forensic evidence of the attacks.
The findings come amid growing questions about what role the government of Italian Prime Minister Giorgia Meloni may have played in spying on journalists and civil society activists critical of her leadership, and raise new concerns about the potential for abuse of commercial spyware, even in democratic countries.
“Any attempts to illegally access data of citizens, including journalists and political opponents, is unacceptable, if confirmed,” the European Union’s executive branch said in a statement Wednesday in response to questions from members of parliament. The European Commission “will use all the tools at its disposal to ensure the effective application of EU law.”
Meloni’s office declined to comment Thursday, but a prominent member of her Cabinet has said that Italy “rigorously respected” the law and that the government hadn’t illegally spied on journalists.
The company behind the hacks, Paragon Solutions, has sought to position itself as a virtuous player in the mercenary spyware industry and won U.S. government contracts, The Associated Press found.
Backed by former Israeli Prime Minister Ehud Barak, Paragon was reportedly acquired by AE Industrial Partners, a private investment firm based in Florida, in a December deal worth at least $500 million, pending regulatory approvals. AE Industrial Partners didn’t directly respond to requests for comment on the deal.
Paragon’s spyware, Graphite, was used to target around 90 WhatsApp users from more than two dozen countries, primarily in Europe, Meta said in January. Since then, there’s been a scramble to figure out who was hacked and who was responsible.
“We’ve seen first-hand how commercial spyware can be weaponized to target journalists and civil society, and these companies must be held accountable,” a spokesperson for WhatsApp told AP in an email. “WhatsApp will continue to protect peoples’ ability to communicate privately.” Meta said the vulnerability has been patched and they have not detected subsequent attacks. Meta also sent a cease-and-desist letter to Paragon. Last month, a California court awarded Meta $168 million in damages from Israel’s NSO Group, whose spyware was used to hack 1,400 WhatsApp accounts, including of journalists, activists and government officials.
“It is unacceptable in a democratic country that journalists are spied on without knowing the reason. We do not know how many there are and if there are others,” Vittorio di Trapani, president of the Italian journalists’ union FNSI, told the AP. “The EU should intervene. The democracy of a founding country of the union and therefore of the whole of Europe is at stake.”
The Citizen Lab’s findings, released today, show that the use of spyware against journalists has continued, despite the backlash against NSO Group, and establish for the first time that Paragon was able to successfully infect Apple devices.
Ciro Pellegrino, who heads the Naples newsroom of an investigative news outlet called Fanpage.it, received a notice on April 29 that his iPhone had been targeted.
Last year, Fanpage secretly infiltrated the youth wing of Meloni’s Brothers of Italy party and filmed some of them making fascist and racist remarks. Pellegrino’s colleague, Fanpage editor-in-chief Francesco Cancellato, also received a notice from Meta that his Android device had been targeted by Paragon spyware, though forensic evidence that his phone was actually infected with Graphite hasn’t yet surfaced, according to Citizen Lab.
The Citizen Lab’s report today also revealed a third case, of a “prominent European journalist,” who asked to remain anonymous, but is connected to the Italian cluster by forensic evidence unearthed by researchers at the laboratory, which is run out of the Munk School at the University of Toronto. The Citizen Lab, which has analyzed all the devices, said the attack came via iMessage, and that Apple has patched the vulnerability. Apple did not respond immediately to requests for comment.
“Paragon is now mired in exactly the kind of abuse scandal that NSO Group is notorious for,” said John Scott-Railton, a senior researcher at the Citizen Lab. “This shows the industry and its way of doing business is the problem. It’s not just a few bad apples.”
Paragon’s spyware is especially stealthy because it can compromise a device without any action from the user. Similar to the NSO Group’s notorious Pegasus spyware, which has been blacklisted by the U.S. government, Graphite allows the operator to covertly access applications, including encrypted messengers like Signal and WhatsApp.
“There’s no link to click, attachment to download, file to open or mistake to make,” Scott-Railton said. “One moment the phone is yours, and the next minute its data is streaming to an attacker.”
COPASIR, the parliamentary committee overseeing the Italian secret services, took the rare step last week of making public the results of its investigation into the government’s use of Paragon. The COPASIR report said that Italian intelligence services hadn’t spied on Cancellato, the editor of Fanpage.
The report did confirm the surveillance, with tools including Graphite, of civil society activists, but said they had been targeted legally and with government authorization — not as activists but over their work related to irregular immigration and national security.
Giovanni Donzelli, vice president of COPASIR and a prominent member of Meloni’s Brothers of Italy party, declined further comment Thursday, saying the parliamentary report was “more relevant than an analysis done by a privately funded Canadian laboratory.”
Citizen Lab says it’s “rigorously independent,” and doesn’t accept research funding from governments or companies.
Italy and Paragon both say they’ve terminated their relationship, but offer starkly different versions of the breakup.
Paragon referred questions to a statement it gave to Israeli newspaper Haaretz, in which the company said that it stopped providing spyware to Italy after the government declined its offer to help investigate Cancellato’s case. Italian authorities, however, said they had rejected Paragon’s offer over national security concerns and ended the relationship following media outcry.
Paragon has been keen to deflect reputational damage that could, in theory, impact its contracts with the U. S. government.
A 2023 executive order, which so far hasn’t been overturned by U.S. President Donald Trump, prohibits federal government departments and agencies from acquiring commercial spyware that has been misused by foreign governments, including to limit freedom of expression and political dissent.
The U.S. Department of Homeland Security awarded Paragon a one-year, $2 million contract last September for operations and support of U.S. Immigration and Customs Enforcement, public records show.
The U.S. Drug Enforcement Administration has also reportedly used the spyware. In December 2022, Adam Schiff, the California Democrat who at the time chaired the House Intelligence Committee, wrote to the administrator of the U.S. Drug Enforcement Administration questioning whether the DEA’s use of Graphite spyware undermined efforts to deter the “broad proliferation of powerful surveillance capabilities to autocratic regimes and others who may misuse them.”
Byron Tau in Washington, and Lorne Cook in Brussels, contributed to this report.
...
Read the original on apnews.com »
The Big Bang is often described as the explosive birth of the universe — a singular moment when space, time and matter sprang into existence. But what if this was not the beginning at all? What if our universe emerged from something else — something more familiar and radical at the same time?
In a new paper, published in Physical Review D, my colleagues and I propose a striking alternative. Our calculations suggest the Big Bang was not the start of everything, but rather the outcome of a gravitational crunch or collapse that formed a very massive black hole — followed by a bounce inside it.
This idea, which we call the black hole universe, offers a radically different view of cosmic origins, yet it is grounded entirely in known physics and observations.
Today’s standard cosmological model, based on the Big Bang and cosmic inflation (the idea that the early universe rapidly blew up in size), has been remarkably successful in explaining the structure and evolution of the universe. But it comes at a price: it leaves some of the most fundamental questions unanswered.
For one, the Big Bang model begins with a singularity — a point of infinite density where the laws of physics break down. This is not just a technical glitch; it’s a deep theoretical problem that suggests we don’t really understand the beginning at all.
To explain the universe’s large-scale structure, physicists introduced a brief phase of rapid expansion into the early universe called cosmic inflation, powered by an unknown field with strange properties. Later, to explain the accelerating expansion observed today, they added another “mysterious” component: dark energy.
In short, the standard model of cosmology works well — but only by introducing new ingredients we have never observed directly. Meanwhile, the most basic questions remain open: where did everything come from? Why did it begin this way? And why is the universe so flat, smooth, and large?
Our new model tackles these questions from a different angle — by looking inward instead of outward. Instead of starting with an expanding universe and trying to trace back how it began, we consider what happens when an overly dense collection of matter collapses under gravity.
This is a familiar process: stars collapse into black holes, which are among the most well-understood objects in physics. But what happens inside a black hole, beyond the event horizon from which nothing can escape, remains a mystery.
In 1965, the British physicist Roger Penrose proved that under very general conditions, gravitational collapse must lead to a singularity. This result, extended by the late British physicist Stephen Hawking and others, underpins the idea that singularities — like the one at the Big Bang — are unavoidable.
The idea helped win Penrose a share of the 2020 Nobel prize in physics and inspired Hawking’s global bestseller A Brief History of Time: From the Big Bang to Black Holes. But there’s a caveat. These “singularity theorems” rely on “classical physics,” which describes ordinary macroscopic objects. If we include the effects of quantum mechanics, which rules the tiny microcosmos of atoms and particles, as we must at extreme densities, the story may change.
In our new paper, we show that gravitational collapse does not have to end in a singularity. We find an exact analytical solution — a mathematical result with no approximations. Our maths shows that as we approach the potential singularity, the size of the universe changes as a (hyperbolic) function of cosmic time.
This simple mathematical solution describes how a collapsing cloud of matter can reach a high-density state and then bounce, rebounding outward into a new expanding phase.
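To give a flavour of what such a solution looks like (a purely illustrative toy form, not the exact expression from our paper), a bouncing scale factor can be sketched as

a(\tau) = a_{\min}\,\cosh(H_b\,\tau)

where a(\tau) is the size of the universe at cosmic time \tau, a_{\min} is its smallest size, and H_b sets the pace of the bounce. For \tau < 0 the universe contracts, at \tau = 0 it reaches the finite size a_{\min} rather than a singular point of zero size, and for \tau > 0 it re-expands, approaching exponential, inflation-like growth at late times.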
But don’t Penrose’s theorems rule out such outcomes? It’s all down to a rule called the quantum exclusion principle, which states that no two identical particles of a type known as fermions can occupy the same quantum state (such as angular momentum, or “spin”).
And we show that this rule prevents the particles in the collapsing matter from being squeezed indefinitely. As a result, the collapse halts and reverses. The bounce is not only possible — it’s inevitable under the right conditions.
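As a rough, textbook-level illustration of why this matters (standard Fermi-gas physics, included only as a sketch and not the derivation in our paper), the pressure of a cold, non-relativistic gas of fermions of mass m and number density n is

P_{\mathrm{deg}} = \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m}\,n^{5/3}

Because this degeneracy pressure rises steeply as the matter is squeezed, growing as n^{5/3} rather than in simple proportion to n, the fermions increasingly resist being packed more tightly, and it is this quantum resistance that can halt the squeeze.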
Crucially, this bounce occurs entirely within the framework of general relativity, which applies on large scales such as stars and galaxies, combined with the basic principles of quantum mechanics — no exotic fields, extra dimensions or speculative physics required.
What emerges on the other side of the bounce is a universe remarkably like our own. Even more surprisingly, the rebound naturally produces the two separate phases of accelerated expansion — inflation and dark energy — driven not by hypothetical fields but by the physics of the bounce itself.
One of the strengths of this model is that it makes testable predictions. It predicts a small but non-zero amount of positive spatial curvature — meaning the universe is not exactly flat, but slightly curved, like the surface of the Earth.
This is simply a relic of the initial small over-density that triggered the collapse. If future observations, such as the ongoing Euclid mission, confirm a small positive curvature, it would be a strong hint that our universe did indeed emerge from such a bounce. It also makes predictions about the current universe’s rate of expansion, something that has already been verified.
This model does more than fix technical problems with standard cosmology. It could also shed new light on other deep mysteries in our understanding of the early universe — such as the origin of supermassive black holes, the nature of dark matter, or the hierarchical formation and evolution of galaxies.
These questions will be explored by future space missions such as Arrakhis, which will study diffuse features such as stellar halos (spherical structures of stars and globular clusters surrounding galaxies) and satellite galaxies (smaller galaxies that orbit larger ones), features that are difficult to detect with traditional telescopes from Earth. These observations will help us understand dark matter and galaxy evolution.
These phenomena might also be linked to relic compact objects — such as black holes — that formed during the collapsing phase and survived the bounce.
The black hole universe also offers a new perspective on our place in the cosmos. In this framework, our entire observable universe lies inside the interior of a black hole formed in some larger “parent” universe.
We are not special, no more than Earth turned out to be under the geocentric worldview, which Galileo (the astronomer who suggested in the 16th and 17th centuries that the Earth revolves around the Sun) was placed under house arrest for challenging.
We are not witnessing the birth of everything from nothing, but rather the continuation of a cosmic cycle — one shaped by gravity, quantum mechanics, and the deep interconnections between them.
...
Read the original on www.port.ac.uk »
I’ve never shared this story publicly before—how I convinced HP’s board to acquire Palm for $1.2 billion, then watched as they destroyed it while I was confined to bed recovering from surgery.
This isn’t just another tech failure analysis. I was the HP Chief Technology Officer who led the technical due diligence on Palm. I presented to Mark Hurd and the HP board, making the case for moving forward with the acquisition. I believed we were buying the future of mobile computing.
Then I watched it all fall apart from the worst possible vantage point—lying in bed during an eight-week recovery, helpless to intervene as everything I’d worked to build got dismantled in real time.
This is the story of how smart people destroyed $1.2 billion in innovation value in just 49 days. It’s about the brutal personal cost of being blamed for a disaster that happened while you’re recovering from surgery. And it’s about why I still believe in HP despite everything that went wrong.
In early 2010, HP was desperately seeking mobile platform capabilities. We knew the computing world was shifting toward mobile, and our traditional PC business faced real threats from tablets and smartphones. We needed to be there.
Palm was struggling financially, but they possessed something genuinely special in WebOS—true multitasking when iOS and Android couldn’t handle it, elegant user interface design, and breakthrough technology architecture buried inside a failing business.
As CTO, I led the technical due diligence process. I spent weeks embedded with the Palm engineering team in Sunnyvale, crawling through their code base, understanding their platform architecture, and assessing the quality of their technical talent. The deeper I dug, the more convinced I became that this wasn’t just another mobile operating system.
My conclusion was unambiguous: WebOS represented a breakthrough platform technology that could differentiate HP in the emerging mobile computing market. The technology was solid. The team was exceptional. The platform vision was compelling.
I presented this assessment to Mark Hurd and the board with complete conviction. This wasn’t about buying a struggling phone company—it was our strategic entry into the future of computing platforms. I believed every word of my presentation because I had seen the technology’s potential firsthand.
The board agreed with my recommendation. In April 2010, we announced the $1.2 billion acquisition. I felt proud of the technical work we’d done and excited about what we could build together.
After the acquisition closed in July 2010, my role shifted to helping the Palm team leverage HP’s massive capabilities. We had global manufacturing scale, established supply chain relationships, and a consumer and enterprise customer base that Palm had never been able to access as an independent company.
I spent countless hours working with the Palm leadership team, mapping out integration plans and identifying strategic synergies. We discussed how WebOS could expand beyond smartphones into tablets, potentially integrate with HP’s PC platforms, and even find applications in our printer ecosystem.
Everything seemed aligned for success.
Then life intervened in the worst possible way.
Everything seemed aligned for success until the first disaster struck. In August 2010—just one month after we closed the Palm acquisition—Mark Hurd was forced to resign as CEO. The board replaced him with Leo Apotheker, former CEO of SAP, who brought a completely different strategic vision to HP.
Apotheker’s plan was radical: transform HP from a hardware company into a software and services company, similar to IBM’s transformation years earlier. He wanted to exit or minimize HP’s hardware businesses—PCs, printers, and by extension, mobile devices like the TouchPad. In his mind, WebOS represented exactly the kind of hardware distraction he wanted to eliminate.
I assumed the strategic rationale for the acquisition remained sound despite the leadership change. The technology hadn’t changed. The market opportunity was still there. But I was wrong about the continuity of strategic vision.
Then, in late June 2011, life intervened in the worst possible way. I faced a medical emergency requiring immediate surgery and an eight-week recovery period confined to bed. You don’t schedule medical emergencies—and I had to step away from my integration work with Palm just as the most critical decisions were being made about the platform’s future.
While I was recovering at home, unable to participate in meetings or provide strategic input, the entire mobile computing landscape at HP began to unravel.
On July 1, 2011, HP launched the TouchPad tablet running WebOS 3.0. I watched the launch from my bed, hoping to see the culmination of all our technical work and strategic planning. Instead, I witnessed the beginning of one of the fastest product failures in tech history.
The launch was botched from the start. HP priced the TouchPad at $499 to compete directly with the iPad, but without the app ecosystem or marketing muscle to justify that premium. The device felt rushed to market, lacking the polish that could have helped it compete. Consumer reviews were mixed at best.
Initial sales numbers were devastating: HP sold only 25,000 TouchPads out of 270,000 units shipped to retailers. While Apple was selling 9 million iPads that same quarter, TouchPads were gathering dust on store shelves.
Then came the announcement that changed everything.
On August 18, 2011—just 49 days after the TouchPad launch—HP announced it would discontinue all WebOS devices immediately. I was still confined to bed, watching my technical due diligence work and strategic recommendations get dismantled in real time through news reports and industry analysis.
Forty-nine days. That’s not enough time to fix launch problems, build developer momentum, or establish retail partnerships. Platform businesses typically need 18-24 months to demonstrate real market traction. But Leo Apotheker wasn’t thinking about platform timelines—he was thinking about his software transformation strategy.
The most painful part wasn’t just the speed of the decision, but learning that Apotheker had made the discontinuation choice without even informing the Palm team beforehand. According to multiple reports, he seemed eager to kill off a product that didn’t fit his vision of HP as a software company.
I felt helpless. Betrayed. And somehow, I was responsible for not being there to fight for what I knew was breakthrough technology.
My first day back at HP will be burned into my memory forever. I was simply trying to grab lunch in the cafeteria at HP Labs when I found myself surrounded by what felt like the entire technical staff. They weren’t there to welcome me back—they were there to hold me accountable.
The scene was intense and unambiguous. Engineers and researchers who had watched the WebOS disaster unfold were pointing fingers and raising voices. Their message was crystal clear and brutal: “You can never take leave again—EVER!”
Their exact words still echo in my mind: “The CEO and board need adult supervision.”
Think about the implications of that statement. HP’s own technical staff, the people closest to our innovation work, believed that senior leadership couldn’t be trusted to make sound technology decisions without someone there to provide oversight and guidance.
They weren’t wrong. The numbers proved it in the most painful way possible.
But hearing it directed at me personally—being blamed for not providing “adult supervision” while I was recovering from surgery—was devastating. I had recommended the acquisition based on solid technical analysis. I had worked to integrate the teams and technology. But because I wasn’t there during the critical 49 days when the decision was made to kill WebOS, somehow the failure became my responsibility.
Standing in that cafeteria, listening to my colleagues blame me for not being there to prevent the disaster, I realized the fundamental problem wasn’t my absence. It was a systematic mismatch between Leo Apotheker’s experience and the role he was asked to fill.
SAP’s annual revenue while Leo served as its CEO was approximately $15 billion. The HP board hired a CEO whose largest organizational experience was running a company smaller than HP’s smallest division. Based purely on revenue management experience, Apotheker wouldn’t have qualified to be an Executive Vice President at HP, yet the board put him in charge of a $125 billion technology company.
This wasn’t just a cultural mismatch—it was a fundamental scale and complexity mismatch that should have been immediately obvious to any functioning board. But nobody asked the right questions about whether Leo’s enterprise software background prepared him to evaluate consumer platform technologies such as WebOS, and I wasn’t there to provide what my colleagues called “adult supervision.”
When I decided to “retire” from HP, they offered me a separation bonus—a significant financial package that would have made my transition easier. But there was a catch: accepting it would have restricted what I could say publicly about my experiences at the company.
I’ve spent my career believing that the truth about innovation decisions—both successes and failures—should be shared so others can learn from them. Taking money to stay quiet about systematic thinking errors that destroyed breakthrough technology felt like betraying everything I believed about how innovation should work.
The decision cost me financially, but it preserved my ability to tell stories like this one. Stories that might help other leaders avoid similar disasters.
Disclaimer: The following reflects my personal investment decisions and relationship with HP leadership. This is not financial advice, and you should consult with a qualified financial advisor before making any investment decisions.
Here’s what might surprise you: I haven’t sold a single HP share since leaving the company. Despite watching the WebOS disaster unfold, despite being blamed for not preventing it, despite everything that went wrong during that period, I still believe in HP as an organization.
I take every opportunity to remind Enrique Lores, HP’s current CEO, and Antonio Neri, CEO of HPE, about this fact. Both men were peers of mine when I was at HP. We worked closely together as part of the leadership team that made HP #1 in market share for consumer and commercial PCs & laptops, printers, and servers—helping drive HP to Fortune #11 during that period. They are natural leaders who understand HP’s culture and values from the inside, having come up through the organization rather than being parachuted in from outside.
Enrique and Antonio represent what HP is at its best: technical excellence combined with strategic thinking, innovation grounded in operational discipline, and leadership that understands both the technology and the business. They’re the kind of leaders who would have asked different questions about WebOS, who would have applied better decision frameworks to evaluate platform technology under uncertainty.
My continued shareholding isn’t just a matter of financial confidence—it’s a statement of faith in what HP can become when the right leadership applies systematic thinking to innovation decisions.
The WebOS failure taught me something fundamental about innovation decisions that I hadn’t fully understood before: intelligence and good intentions don’t predict decision quality. Systematic thinking frameworks do.
Leo Apotheker wasn’t stupid. The HP board wasn’t incompetent. The financial analysts weren’t malicious. But they all used flawed thinking frameworks to evaluate breakthrough technology under pressure, and those systematic errors destroyed $1.2 billion in innovation value.
The thinking errors were predictable and preventable:
Solving the wrong problem. Apotheker was asking “How do I transform HP into a software company?” when the right question was “How do we build sustainable competitive advantage in mobile computing platforms?” His entire strategic framework was about exiting hardware businesses, not building platform advantages.
Identity-driven decision making. His background at SAP shaped his entire approach to HP’s portfolio. He saw WebOS as a hardware distraction from his software transformation vision.
Tunnel vision under pressure. During this same period, he became laser-focused on acquiring Autonomy for $10.3 billion—a software company that fit his transformation vision perfectly. Everything else, including breakthrough mobile technology, felt like a distraction from this software-focused strategy. That Autonomy acquisition later required more than an $8 billion write-down, but at the time it consumed all of leadership’s attention.
Timeline compression under stress. Forty-nine days isn’t enough time to fairly evaluate platform technology, but pressure to show decisive leadership compressed the decision timeline artificially.
These errors weren’t unique to HP or to Apotheker. I’ve seen identical patterns across multiple companies and industries. Brilliant engineers kill breakthrough prototypes because they don’t fit current product roadmaps. Marketing teams reject game-changing concepts because they can’t explain them using existing frameworks. CEOs avoid platform investments because the path to profitability isn’t immediately clear.
Lying in bed during those eight weeks, watching the WebOS disaster unfold while being powerless to intervene, I first performed an autopsy of how we got to such a bad decision. Was there a framework that could have led to better decisions? That analysis eventually became a systematic approach to innovation decisions that could prevent these predictable errors.
The framework that emerged from this painful experience is something I call DECIDE—a systematic thinking process specifically designed for innovation decisions under uncertainty:
Define the real decision (not just the obvious question)
Examine your thinking process for cognitive biases
Challenge your assumptions systematically
Identify decision traps specific to innovation contexts
Design multiple genuinely different options
Evaluate with evidence frameworks appropriate for breakthrough technology
This isn’t theoretical academic stuff. It’s a practical framework born from watching smart people make systematic thinking errors that destroyed breakthrough technology value. It’s designed to prevent the specific cognitive errors that killed WebOS and continue to kill innovation investments across industries.
Next Wednesday (6/11/2025), I’ll walk you through exactly how to apply the DECIDE framework to your current innovation decisions. I’ll show you the specific tools and questions that can help you avoid the thinking traps that consistently destroy breakthrough technology value.
Sometimes I wonder what would have happened if I hadn’t needed surgery during those critical weeks. Would I have been able to convince the leadership team to give WebOS more time? Could I have provided the “adult supervision” my colleagues said was missing? Would better thinking frameworks have changed the outcome?
I’ll never know for certain. But I do know this: the technology was sound, the market opportunity was real, and the decision to kill WebOS after 49 days was based on systematic thinking errors that could have been prevented with better decision frameworks.
But here’s the final piece of the story: Leo Apotheker was fired on September 22, 2011—just 35 days after shutting down WebOS and eleven months after taking over as CEO. The board finally recognized the systematic thinking errors that had destroyed billions in value, but it was too late for WebOS.
The human cost of these decisions goes beyond stock prices and quarterly reports. There are real people who believed in breakthrough technology, fought for innovation, and had to watch it get destroyed by preventable thinking errors.
In my case, I announced my retirement from HP on October 31, 2011 via my blog (Wired Magazine). My last day at HP was December 31, 2011.
WebOS technology eventually found success when LG licensed it for smart TV platforms. The core architecture and UI influenced every subsequent mobile operating system. HP could have owned that platform innovation and the ecosystem value it created.
What breakthrough technology or innovation opportunity is your team evaluating right now? Before you make any irreversible choices, ask yourself: are you applying systematic thinking frameworks to this decision, or are you relying on intuition and conventional business metrics that might mislead in innovation contexts?
Because here’s what I learned from watching the WebOS disaster unfold while confined to bed, helpless to intervene: when you have breakthrough technology in your hands, the quality of your decision-making process matters more than the quality of your technology.
Intelligence and good intentions aren’t enough. You need systematic frameworks for thinking clearly about innovation decisions under uncertainty.
Wednesday, I’ll show you exactly how to build and apply those frameworks. The tools exist to prevent these disasters. The question is whether you’ll use them before it’s too late.
What innovation decision is keeping you up at night? Reply and let me know—I read every response and often use reader questions to shape future Studio Notes.
...
Read the original on philmckinney.substack.com »
More than 700 Marines based out of the Marine Corps Air Ground Combat Center in California have been mobilized to respond to the protests in Los Angeles, and the troops will join the thousands of National Guard members who were activated by President Donald Trump over the weekend without the consent of California’s governor or LA’s mayor.
The deployment of the full Marine battalion marks a significant escalation in Trump’s use of the military as a show of force against protesters, but it is still unclear what their specific task will be once in LA, sources told CNN. Like the National Guard troops, they are prohibited from conducting law enforcement activity such as making arrests unless Trump invokes the Insurrection Act, which permits the president to use the military to end an insurrection or rebellion of federal power.
The Marines being activated are with 2nd Battalion, 7th Marines, 1st Marine Division, according to US Northern Command. The activation is “intended to provide Task Force 51 with adequate numbers of forces to provide continuous coverage of the area in support of the lead federal agency,” NORTHCOM said in a statement, referring to US Army North’s contingency command post.
One of the people familiar with the Marine mobilization said they will be augmenting the guard presence on the ground in LA.
Approximately 1,700 National Guard members are now operating in the greater Los Angeles area, two days after Trump’s Saturday memorandum deploying 2,000 service members, according to a statement from NORTHCOM. On Monday evening, the Pentagon announced that Trump ordered the deployment of an additional 2,000 National Guard members. It is unclear when the rest of the initial group, or the new troops announced Monday, would arrive in Los Angeles.
The Marines are expected to bolster some of the guard members who have been deployed to LA in the last two days, this person said.
And while the person familiar stressed that the Marines were being deployed only to augment the forces already there, the image of US Marines mobilizing inside the United States will stand in contrast to National Guardsmen who more routinely respond to domestic issues. While some Marines have been assisting in border security at the southern border, one US official said Marines have not been mobilized within the US like they are in California now since the 1992 riots in Los Angeles.
While the Marines’ tasks have not been specified publicly, they could include assignments like crowd control or establishing perimeter security. Lawyers within the Defense Department are also still finalizing language around the use-of-force guidelines for the troops being mobilized, but the person familiar said it will likely mirror the military’s standing rules of the use of force.
California Gov. Gavin Newsom described the involvement of Marines as “unwarranted” and “unprecedented.”
“The level of escalation is completely unwarranted, uncalled for, and unprecedented — mobilizing the best in class branch of the U.S. military against its own citizens,” Newsom said in a statement linking to a news story about the Marines mobilizing.
Newsom disputed the characterization as a “deployment,” which the governor described as different from mobilization. US Northern Command said in their statement, however, that the Marines will “seamlessly integrate” with National Guard forces “protecting federal personnel and federal property in the greater Los Angeles area.”
Los Angeles Police Chief Jim McDonnell called for “open and continuous lines of communication” between all agencies responding to protests in the city ahead of the deployment of US Marines.
McDonnell said in a statement that his agency and other partner agencies have experience dealing with large-scale demonstrations and that safety remains a top priority for them.
That communication will “prevent confusion, avoid escalation, and ensure a coordinated, lawful, and orderly response during this critical time,” McDonnell stressed.
This story and headline have been updated with additional developments.
CNN’s Cindy Von Quednow and Danya Gainor contributed to this report.
...
Read the original on www.cnn.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.