10 interesting stories served every morning and every evening.




1 1,248 shares, 0 trendiness

Frequent reauth doesn't make you more secure

You’re happily working away, fingers flying, deep in flow, and suddenly, boink, your session has expired. You sigh, re-enter your password (again), complete an MFA challenge (again), maybe approve an email notification (again), and finally — access restored. Until next time.

This wasn’t so bad when it was just passwords; we all got pretty fast at retyping our passwords. But all those MFA challenges really slow us down. And MFA fatigue attacks, a growing vector for phishing, get worse as more legitimate MFA requests arrive.

We all used to believe that changing passwords often was a good idea; turns out the opposite is true. Similarly, we all used to believe that making people log in frequently was good security. If authentication is good, then surely more authentication is better, right? Like taking vitamins — one a day is good, so twenty must be great! Except, well, that’s not how anything works.

Security isn’t about how often you log in. It’s about how well your access is managed in the first place, how fast we can react to policy changes on your account, and how confident we are that your key hasn’t been leaked since the last auth. The good news? You can get strong security guarantees without making users miserable.

Authentication usually boils down to one of two things:

* Are you still in physical possession of the device? (For example, Windows Hello PINs, YubiKeys, or smart cards; tests which anyone physically present can likely pass.)

* Are you the right person? (Passwords, Face ID, Touch ID — things that supposedly nobody but you can replicate, but which don’t prove you’re physically near a given device.)

Identity providers (IdPs) focus mostly on who you are, since their whole job is identity verification. If they require a YubiKey, they might also check device possession, but that’s not really their main gig.

Integrated authentication systems like Apple’s Face ID and Touch ID, and tools like Windows Hello, are interesting because they do both at once. They’re amazing as long as they are securely enrolled and their keys are held in a highly trusted, malware-resistant TPM.

So why do frequent re-logins exist? Usually, it’s because admins aren’t confident that changes will take effect immediately. Sometimes, especially with SAML, an IdP is configured to send policy attributes to apps only during the user-interactive login process, which means they can’t update without a new login. How long are we vulnerable if someone leaves the company or loses their laptop?! But it doesn’t have to be that way.

Most attackers aren’t lurking in your office, waiting for you to step away. They’re remote, so their attack vector is phishing — it’s pretty easy for them to steal your password. As an administrator, the best policy is to assume remote attackers already have your password, and build your systems accordingly. That means the second factor (SMS, email, or preferably a YubiKey or equivalent) is the most important defense against remote attacks.

But there are also physical attacks. If someone steals your laptop, usually your screen is already locked. That means open browser sessions won’t do them much good. Random cafe thieves probably don’t have your password. If they do, more logins aren’t much of a defense.

In fact, frequent logins give attackers, both local and remote, more chances to steal your credentials. That’s deadly for security, in addition to creating annoyance for users.

Modern operating systems already handle this problem with screen locks. If your screen locks when you step away, your OS is doing exactly what a frequent login prompt would do, except without annoying you every few hours. Consider enforcing automatic screen lock when you walk away. If your screen is locked, all those other open sessions are safe too.

Some web apps log you out quickly under the assumption that you might be on a shared computer. That makes sense if you’re using an Internet café in 2006. But for most people, web session expiry is just a relic of a bygone era.

A 15-minute session duration makes sense for something highly sensitive and disproportionately valuable, like your bank, where you want that little bit extra. But the “aggressively mid-range” expiry times on most websites, like 7 days or 30 days, don’t help anyone with anything. They’re too long to stop real session hijacking before the damage is done, but so short they’re constantly annoying. It’s the worst of both worlds. Security theatre.

If you really need to confirm someone is at their keyboard, you don’t want a login prompt every few hours — you want a check right before a sensitive action. That’s why Tailscale SSH’s check mode and the Tailscale Slack Accessbot exist: they verify that the user is there only when it actually matters, not just on an arbitrary timer.
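
To make that pattern concrete, here is a minimal sketch (in Python) of gating a sensitive action on a recent proof of presence rather than on a session timer. It is illustrative only: the reauthenticate callback and the five-minute freshness window are assumptions, not how Tailscale implements check mode.

    import time

    # Assumed freshness window: how recently the user must have proven
    # presence before a sensitive action is allowed to proceed.
    FRESHNESS_SECONDS = 300

    class PresenceGate:
        """Gate sensitive actions on a recent presence check, not a timer."""

        def __init__(self, reauthenticate):
            # reauthenticate() is an assumed callback that blocks until the
            # user proves presence (e.g., taps a hardware key) and returns
            # True or False.
            self._reauthenticate = reauthenticate
            self._last_verified = float("-inf")

        def run_sensitive(self, action):
            # Prompt only when the last proof of presence is stale; routine
            # actions never trigger a prompt at all.
            if time.monotonic() - self._last_verified > FRESHNESS_SECONDS:
                if not self._reauthenticate():
                    raise PermissionError("presence check failed")
                self._last_verified = time.monotonic()
            return action()

The point is that the prompt is tied to the action, not to the clock: an idle session never nags, and a sensitive action always gets a fresh check.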

And yes, set that OS screen lock aggressively! Now that most OSes can unlock with just a fingerprint or face, there’s no reason to leave your screen unlocked when you walk away.

Security should be continuous, not tied to arbitrary interactive cycles. Instead of nagging users, tools like device posture checks and SCIM-based access control can update security attributes and policies in real time, in the background, without users doing anything. That means you can have updated policies within seconds or minutes; you don’t have to compromise between short reauth times (super annoying) and longer ones (less protection). A sketch of this pattern follows the list below.

* If your device goes offline, is marked lost, or fails a security check, access gets revoked instantly.

* If your role or employment status changes, your access updates automatically.
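
As a concrete illustration of the background-revocation idea, here is a minimal sketch (in Python, standard library only) of the receiving end of a SCIM-style deprovisioning push. The endpoint shape and the revoke_sessions helper are assumptions for illustration, not Tailscale’s API or any particular IdP’s payload.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def revoke_sessions(user_id: str) -> None:
        # Hypothetical helper: invalidate the user's keys and sessions.
        print(f"revoking all access for {user_id}")

    class ScimHandler(BaseHTTPRequestHandler):
        def do_PATCH(self):
            # The IdP PATCHes /Users/<id>; SCIM expresses deactivation as a
            # replace operation that sets active to false (RFC 7644).
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length) or b"{}")
            deactivated = any(
                op.get("op", "").lower() == "replace"
                and op.get("value", {}).get("active") is False
                for op in body.get("Operations", [])
            )
            if deactivated:
                # Access is revoked within seconds, with no reauth timer.
                revoke_sessions(self.path.rsplit("/", 1)[-1])
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ScimHandler).serve_forever()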

This approach is smarter and more secure than making users re-enter their credentials over and over.

Frequent logins aren’t making you safer. They’re just annoying you into worse security habits (like password reuse, clicking phishing links, and MFA fatigue). The best security happens quietly in the background, ensuring safety without getting in the way.

At Tailscale, we believe in security that’s adaptive, intelligent, and actually useful — not just security theater. Instead of forcing pointless logins, we make sure authentication happens at the right moments, with as little friction as possible. If you use Tailscale to access your other apps, through tsidp or App Connector, our real-time security checks can flow through to all your other login sessions as well, even in legacy apps that don’t understand SCIM or device posture.

...

Read the original on tailscale.com »

3 949 shares, 38 trendiness

A receipt printer cured my procrastination [ADHD]

I started my business when I was 21 (I’m now 39). I built custom apps and did consulting for accounting software, invoicing systems, and point-of-sale tools.

Procrastination has always been my biggest struggle. The only way I could get things done was by relying on stress, coming from clients or financial pressure. That worked for a while, but it cost me my health (I burned out) and my business (I went bankrupt).

I noticed that I have no problem spending hours fully focused on a video game. If I can focus on a game, then my brain must also be capable of focusing on other tasks. So I naturally started asking myself why it’s so easy to get hooked by a game, and how I could apply that same effect to tasks I struggle to complete.

It turns out that many of the struggles I’ve had throughout my life are linked to ADHD (attention deficit hyperactivity disorder). The goal of this article isn’t to focus on ADHD, but it’s important to mention. ADHD affects many people, often without them even knowing it, in various ways and at different levels.

To understand what makes a video game addictive, let’s take first-person shooters (FPS) as an example. FPS games are among the most popular and addictive games.

An FPS is built around a simple loop: Aim → Shoot → Hit or Miss. This is the game loop. The outcome, hit or miss, is immediately shown through sounds or visuals. This immediate reaction is called feedback.

For a game to be addictive, the game loop must repeat frequently and give strong feedback. Imagine an FPS where you only meet an enemy every 30 minutes. That wouldn’t be engaging. The loop must repeat quickly to keep you interested.

Feedback in FPS games has improved a lot over time. Early FPS games only showed a simple blood splash when you hit an enemy. Modern games provide much stronger feedback. Now, when you hit an enemy, you might see:

* the crosshair briefly changes to confirm the hit

* damage numbers pop up above the enemy

Note that a game can have several game loops. For example, there can be another loop for finding random equipment for your character. Feedback can also come from outside the main loop, such as loot boxes or side missions.

For a game loop to be addictive, other factors matter too. You need to personally enjoy the type of game, and the challenge must match your skill level. If it’s too easy or too hard, you won’t be hooked. There are dozens of other factors, unique to each player’s personality and preferences.

When all these elements come together, each game loop gives you a small dose of dopamine and creates a state of flow. In this state, we’re fully focused, we lose track of time, and it’s easier to handle complex tasks.

One final point: Video games are easy to start. They require very little motivation or discipline to start playing.

* The game loop should repeat often.

* Feedback from the loop should be strong.

* Games are easy to start, even with low motivation.

Based on what we’ve just seen, our real-life game loop is made up of completing tasks and habits throughout the day.

Our first goal is to repeat this game loop as many times as possible each day. A simple solution is breaking tasks into smaller parts. Let’s take an example everyone can relate to: cleaning the house. At first glance, you might break down “Clean the house” like this:

* Clean the house

If these three tasks take about an hour, that’s only three game loops. Another problem is starting a task. Cleaning your bedroom might take 20 minutes, which feels too long when you’re not motivated. So you procrastinate.

My solution is breaking things down even further:

* Clean the house

The rule is simple: the more you procrastinate on a task, the more you should break it down into micro-tasks, even ones that take just 2 to 5 minutes in extreme cases.

Now that we have our game loop, we are going to boost the feedback. To do this, I recommend using sticky notes. Write each task on a sticky note. When you finish the task, crumple the note into a ball and throw it into a clear jar.

This gives us extra feedback: crumpling the paper, the satisfying sound, and seeing our progress in the clear jar.

One important point: writing your task on a sticky note makes it real. It’s no longer just words on a screen. It’s harder to procrastinate on something physically in front of you.

* Use sticky notes for your tasks. (The task becomes hard to ignore.)

* Once the task is done, crumple the note into a ball. (Feedback)

To make this system easy and get your day started, begin with simple, routine tasks. Think about role-playing games. Early levels are easy and help build momentum for the harder levels later. That’s why it’s important to put your daily habits on sticky notes. Start your day with easy wins.

For example, the first sticky note waiting for me each morning is to make coffee. When I sit at my computer, my first task is a quick two-minute warmup. I spend one minute typing text in software to measure my typing speed. Then I spend one minute practicing keyboard shortcuts. I do these two tasks every morning without fail.

Between waking up and starting work, I complete around ten sticky notes with easy, routine habits. This builds momentum and makes it easier to keep working throughout the day.

I strongly recommend including these morning habits in your sticky note system. It allows you to start the day with quick and easy tasks. If your first task is the hardest of the day, you’re more likely to procrastinate. But when you start with your simple morning routine, you gain momentum that makes the rest of your day easier.

I also make sure to prepare these tasks the night before. This way, I can immediately start working without having to think about creating sticky notes in the morning.

* Use sticky notes with your morning routine to start your day strong.

* Prepare your sticky notes the night before, so you can immediately start working in the morning without extra planning.

My main advice is to stay flexible with this method. I strongly recommend always starting your day with your first tasks and habits on sticky notes. However, this initial momentum might be enough for you to remain productive for the entire day.

If later in the day you notice you’re starting to procrastinate, immediately return to the system. Take a few minutes to refocus, clearly define the next 3 to 5 tasks, write them down on sticky notes, and start working on them right away. Repeat this as many times as necessary.

Some tasks can’t easily be broken into smaller steps. In that case, break them down by time. For example: “Clean for 10 minutes.”

For complex tasks, you might realize your original plan isn’t ideal once you begin working. That’s fine. You’re already in motion, so continue working flexibly, and use your existing sticky notes whenever you feel you’ve earned feedback.

You might also have tasks you’ve procrastinated on for months or even years, causing them to accumulate significantly. For example, maybe your inbox contains thousands of emails. You’ve probably imagined one day you’ll suddenly feel motivated and clear them all at once. That day will never come. Instead, create a daily sticky note that requires you to process all new emails plus a specific number of older emails. It might take months to clear your inbox, but at least you’ve stopped procrastinating.

Another scenario that regularly happens to me is planning about ten sticky notes, then completely forgetting about them. I end up doing the tasks but without using the sticky notes for feedback. That’s completely fine. In fact, it means I was in a productive flow state, and my objective was accomplished!

I’m sure you’ve already read dozens of articles and watched countless YouTube videos about productivity, procrastination, and efficiency. Now you absolutely need to take action! No excuses.

Try this method! No sticky notes? Grab some paper and scissors, and prepare your tasks for tomorrow. Don’t have a clear jar? Use a regular glass. For the next few days, use this system and see if it helps. Do you find it easier to complete your tasks? Do you forget fewer important things?

Keep testing for a few weeks. Think of it like going to the gym. You won’t become a bodybuilder with huge muscles after your first workout. Turn this system into a habit!

Here’s why this system works so well:

* Breaking tasks into small steps allows us to repeat our game loop frequently.

* Starting the day with easy wins builds momentum and makes it easier to continue.

* Crumpling completed tasks and tossing them into a clear jar provides satisfying feedback.

Let’s be clear. This system works extremely well. It’s better than anything else I’ve tried in my life. And trust me, I’ve tested many methods that promised to boost productivity. But one big problem remained. It took too long and was painful to write all those sticky notes. Writing only three tasks a day wasn’t enough. I needed twenty, thirty, or sometimes even more. On lazy days, I skipped writing notes, and my productivity collapsed.

One day, I saw a Reddit post about someone using a receipt printer to print daily tasks. I suddenly realized I had the same idea years ago while setting up checkout systems for clients. But, as usual, I procrastinated.

A receipt printer is perfect for replacing my sticky-note system. I can easily print dozens of habits. These printers are fast, and they automatically cut each ticket. They’re also cheap to run. They don’t need ink, only paper. A good printer (for example, the Epson TM-T20III) costs about $150, and each 80-meter roll costs around $3. One roll can print thousands of tasks.

I created a simple script to print my daily habits. I have a separate list for each weekday because my habits and tasks vary.
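
The script itself isn’t shown here, but a minimal sketch of the idea, assuming the python-escpos library and an Epson receipt printer on USB, might look like this. The USB product ID and the habit lists are placeholders, not the author’s actual code.

    import datetime

    from escpos.printer import Usb

    HABITS = {
        # Hypothetical per-weekday lists; the author keeps one per weekday.
        "Monday": ["Make coffee", "1-minute typing warmup",
                   "1-minute shortcut practice"],
        "Tuesday": ["Make coffee", "1-minute typing warmup"],
    }

    def print_todays_habits(printer: Usb) -> None:
        today = datetime.date.today().strftime("%A")
        for habit in HABITS.get(today, []):
            printer.text(habit + "\n")  # print one habit...
            printer.cut()               # ...and cut it into its own ticket

    if __name__ == "__main__":
        # 0x04b8 is Epson's USB vendor ID; the product ID varies by model.
        print_todays_habits(Usb(0x04b8, 0x0e28))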

Since the printer can handle a large number of tasks, I added habits and tasks I didn’t usually write on sticky notes, making my system even more robust.

I still use sticky notes for my main work, but much of my daily routine is now managed by my receipt printer. After a few weeks, I noticed something important: I haven’t missed a single habit, not even one.

My productivity had already improved significantly with the sticky-note system. But adding the printer made me truly consistent, boosting my productivity even more.

Ironically, I previously built a habit-tracking app but always forgot to use it. With this new system, I haven’t missed tracking my habits even once.

Summary of the benefits of the receipt printer:

* Printing tasks using a receipt printer removes the friction of daily preparation.

* With this system, it will be easy to print more tasks, increasing the system’s efficiency.

* Massive reduction in the chances of skipping the system for a day due to procrastination.

But there’s still one problem. I still waste time writing tasks on sticky notes during the day. My printing script isn’t practical. If I want to add habits or new tasks, I have to edit the script manually. That’s inconvenient for everyday use, especially when tasks change daily. I needed a simpler solution.

My first idea was to connect existing software to my printer. But I’m not happy with most existing task apps. They don’t easily break down tasks into smaller pieces. Most task apps use a simple, single-level list. Visually splitting tasks into subtasks is impossible. Some task managers offer hierarchical task lists, allowing subtasks. But this creates another problem: the lists become very long and overwhelming.

A second issue is losing the big picture. Some tasks need breaking down, others don’t. This makes task lists messy and hard to manage.

My third problem is that I need software that’s extremely fast to use (ideally keyboard-only for even more speed), so I can create a lot of tasks without feeling frustrated by the UX.

I spent a lot of time searching for a solution. I tested multiple ideas, wasting countless hours. But I finally found a simple yet brilliant solution. Instead of displaying hierarchical tasks vertically, why not do it horizontally, dividing each level into columns? Each time I click a task, its subtasks appear clearly in the next column. This helps me keep the big picture, filtering tasks I don’t need right now.
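
To make the column idea concrete, here is a minimal sketch (in Python) of the underlying data model: a task tree plus a selected path, rendered as one column per level, Miller-column style. The names are hypothetical, not the author’s software.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        title: str
        subtasks: list["Task"] = field(default_factory=list)

    def columns_for(root: Task, selected_path: list[Task]) -> list[list[Task]]:
        # One column per level: the root's children, then the subtasks of
        # each selected task along the path. Unselected branches stay
        # hidden, which preserves the big picture.
        cols = [root.subtasks]
        for task in selected_path:
            cols.append(task.subtasks)
        return cols

    # Example: clicking "Clean the house" reveals its subtasks in the
    # next column; everything else stays collapsed.
    house = Task("Clean the house",
                 [Task("Clean the bedroom"), Task("Clean the kitchen")])
    root = Task("Today", [house, Task("Answer 10 old emails")])
    for i, col in enumerate(columns_for(root, [house])):
        print(f"column {i}:", [t.title for t in col])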

You are currently reading the print version of this article. On the web version, you’ll find an interactive demo of the concept in this spot.

I built custom software based on this idea and connected it directly to my printer. When I procrastinate, I can quickly break a task into subtasks without messing with my project structure, because subtasks appear in another column. And I can easily print just that column.

This method, combined with the printer and my app, has changed my life over the past few months. I now have strong, steady productivity, which is a huge daily victory for someone with ADHD.

Without exaggerating, I believe I’ve doubled or tripled my productivity. Of course, before this system, I sometimes had very productive days. But I also had many days when I did almost no meaningful work tasks. Those very low productivity days have completely vanished from my life.

You can watch the video on the web version of this article.

I truly hope this article inspired you, and that you’ll try some of these ideas.

I plan to release my software publicly in the coming weeks. You can subscribe to my newsletter to get notified when it’s available.

...

Read the original on www.laurieherault.com »

4 819 shares, 33 trendiness

Mistral AI

Announcing Magistral — the first reasoning model by Mistral AI — excelling in domain-specific, transparent, and multilingual reasoning.

The best human thinking isn’t linear — it weaves through logic, insight, uncertainty, and discovery. Reasoning language models have enabled us to augment and delegate complex thinking and deep understanding to AI, improving our ability to work through problems requiring precise, step-by-step deliberation and analysis.

But this space is still nascent. A lack of the specialized depth needed for domain-specific problems, limited transparency, and inconsistent reasoning in the desired language are just some of the known limitations of early thinking models.

Today, we’re excited to announce our latest contribution to AI research with Magistral — our first reasoning model. Released in both open and enterprise versions, Magistral is designed to think things through — in ways familiar to us — while bringing expertise across professional domains, transparent reasoning that you can follow and verify, along with deep multilingual flexibility.

A one-shot physics simulation showcasing gravity, friction and collisions with Magistral Medium in Preview.

Magistral is a dual-release model focused on real-world reasoning and feedback-driven improvement.

We’re releasing the model in two variants: Magistral Small, a 24B-parameter open-source version, and Magistral Medium, a more powerful enterprise version.

Magistral Medium scored 73.6% on AIME2024, and 90% with majority voting @64 (taking the majority answer across 64 sampled solutions). Magistral Small scored 70.7% and 83.3% respectively.

Suited for a wide range of enterprise use cases — from structured calculations and programmatic logic to decision trees and rule-based systems.

With the new Think mode and Flash Answers in Le Chat, you can get responses at 10x the speed compared to most competitors.

The release is supported by our latest paper covering comprehensive evaluations of Magistral, our training infrastructure, reinforcement learning algorithm, and novel observations for training reasoning models.

As we’ve open-sourced Magistral Small, we welcome the community to examine, modify and build upon its architecture and reasoning processes to further accelerate the emergence of thinking language models. Our earlier open models have already been leveraged by the community for exciting projects like ether0 and DeepHermes 3.

Magistral is fine-tuned for multi-step logic, improving interpretability and providing a traceable thought process in the user’s language, unlike general-purpose models.

We aim to iterate the model quickly starting with this release. Expect the models to constantly improve.

The model excels in maintaining high-fidelity reasoning across numerous languages. Magistral is especially well-suited to reason in languages including English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese.

Prompt and response in Arabic with Magistral Medium in Preview in Le Chat.

With Flash Answers in Le Chat, Magistral Medium achieves up to 10x faster token throughput than most competitors. This enables real-time reasoning and user feedback, at scale.

Speed comparison of Magistral Medium in Preview in Le Chat against ChatGPT.

Magistral is ideal for general-purpose use requiring longer thought processing and better accuracy than non-reasoning LLMs. From legal research and financial forecasting to software development and creative storytelling — this model solves multi-step challenges where transparency and precision are critical.

Building on our flagship models, Magistral is designed for research, strategic planning, operational optimization, and data-driven decision making — whether executing risk assessment and modelling with multiple factors, or calculating optimal delivery windows under constraints.

Legal, finance, healthcare, and government professionals get traceable reasoning that meets compliance requirements. Every conclusion can be traced back through its logical steps, providing auditability for high-stakes environments with domain-specialized AI.

Magistral enhances coding and development use cases: compared to non-reasoning models, it significantly improves project planning, backend architecture, frontend design, and data engineering through sequenced, multi-step actions involving external tools or APIs.

Our early tests indicated that Magistral is an excellent creative companion. We highly recommend it for creative writing and storytelling, with the model capable of producing coherent or — if needed — delightfully eccentric copy.

Magistral Small is an open-weight model, and is available for self-deployment under the Apache 2.0 license. You can download it from:

You can try out a preview version of Magistral Medium in Le Chat or via API on La Plateforme.

Magistral Medium is also available on Amazon SageMaker, and soon on IBM WatsonX, Azure AI and Google Cloud Marketplace.

For enterprise and custom solutions, including on-premises deployments, contact our sales team.

...

Read the original on mistral.ai »

5 702 shares, 27 trendiness

Jason Evans

The jemalloc memory allocator was first conceived in early 2004, and has been in public use for about 20 years now. Thanks to the nature of open source software licensing, jemalloc will remain publicly available indefinitely. But active upstream development has come to an end. This post briefly describes jemalloc’s development phases, each with some success/failure highlights, followed by some retrospective commentary.

In 2004 I began work on the Lyken programming language in the context of scientific computing. Lyken was an eventual dead end, but its manual memory allocator was functionally complete by May 2005. (The garbage collector which was to leverage its features was never completed.) In September 2005 I started integrating the allocator into FreeBSD, and in March 2006 I removed the allocator from Lyken, in favor of thin wrappers around system allocator functionality.

Why remove the memory allocator from Lyken after so much effort went into it? Well, once the allocator was integrated into FreeBSD, it became apparent that the only feature missing from the system allocator was a mechanism for tracking allocation volume in order to trigger per thread garbage collection. And that could be implemented via thin wrappers using thread-specific data and dlsym(3). Interestingly, many years later jemalloc even added the statistics gathering that Lyken would have needed.

Back in 2005 the transition to multi-processor computers was ongoing. FreeBSD had Poul-Henning Kamp’s excellent phkmalloc memory allocator, but that allocator had no provisions for parallel thread execution. Lyken’s allocator seemed like an obvious scalability improvement, and with encouragement from friends and colleagues I integrated what quickly became known as jemalloc. Ah, but not so fast! Shortly after integration it became apparent that jemalloc had terrible fragmentation issues under some loads, notably those induced by KDE applications. Just when I thought I was mostly done, this real-world failure called jemalloc’s viability into question.

In brief, the fragmentation issue arose from using a unified extent allocation approach (i.e. no size class segregation). I had taken basic inspiration from Doug Lea’s dlmalloc, but without the intertwined battle-tested heuristics that avoided many of the worst fragmentation issues. Much frantic research and experimentation ensued. By the time jemalloc was part of a FreeBSD release, its layout algorithms had completely changed to use size-segregated regions, as described in the 2006 BSDCan jemalloc paper.

In November 2007, Mozilla Firefox 3 was nearing release, and high fragmentation was an unresolved issue, especially on Microsoft Windows. Thus began a year of working with Mozilla on memory allocation. Porting jemalloc to Linux was trivial, but Windows was another matter. The canonical jemalloc sources were in the FreeBSD libc library, so we essentially forked jemalloc and added portability code, upstreaming anything that was relevant to FreeBSD. The entire implementation was still in one file, which reduced the friction of fork maintenance, but the implementation complexity definitely surpassed what is reasonable in a single file sometime during this phase of development.

Years later, Mozilla developers made significant contributions to the upstream jemalloc in an effort to move away from their fork. Unfortunately, Mozilla benchmarks consistently showed that the forked version outperformed the upstream version. I don’t know if this was due to overfitting to a local optimum or an actual indication of performance regression, but it remains one of my biggest jemalloc disappointments.

When I started work at Facebook in 2009, I was surprised to discover that the biggest impediment to ubiquitous jemalloc use in Facebook infrastructure was instrumentation. Critical internal services were in the awkward situation of depending on jemalloc to keep memory fragmentation under control, but engineers needed to debug memory leaks with tcmalloc and the pprof heap profiling tool that is part of gperftools. pprof-compatible heap profiling functionality headlined the jemalloc 1.0.0 release.

jemalloc development migrated to GitHub and continued sporadically for the next few years as issues and opportunities arose. Other developers started contributing significant functionality. Version 3.0.0 introduced extensive testing infrastructure, as well as Valgrind support. The 4.x release series introduced decay-based purging and JSON-formatted telemetry. The 5.x series transitioned from “chunks” to “extents” to pave the way for better interaction with 2 MiB huge pages.

Somewhat more controversially, I removed Valgrind support in 5.0.0 because it was a significant maintenance complication (numerous tendrils in subtle places), and it was unused inside Facebook; other tools like pprof and MemorySanitizer dominated. I had received very little feedback about Valgrind support, and extrapolated that it was not being used. In retrospect, that seems not to have been the case. In particular, the Rust language directly incorporated jemalloc into compiled programs, and I think there was some overlap between Rust developers and Valgrind developers. People were angry. jemalloc was probably booted from Rust binaries sooner than the natural course of development might have otherwise dictated.

Facebook’s internal telemetry is a wonder to behold, and it is a massive boon to have performance data from myriad services informing memory allocator development. I don’t think it’s an accident that two of the fastest memory allocators of the past decade (tcmalloc and jemalloc) benefit from such data. Even “simple” things like fast-path optimizations are much easier to get right when there are aggregated Linux perf data on hand. Harder things like fragmentation avoidance are still hard, but if thousands of distinct workflows behave well with no outlier regressions, then a change is probably safe. jemalloc has benefited immensely from being integral to the Facebook infrastructure in terms of performance, resilience, and consistent behavior. Additionally, jemalloc’s own integrated statistics reporting capabilities arose directly in response to this ubiquitous telemetry environment, and this turned out to generally benefit both jemalloc development and non-Facebook application tuning/debugging far in excess of the implementation effort required.

During my last year at Facebook I was encouraged to build a small jemalloc team so that we could tackle some big tasks that would have been otherwise daunting. On top of major performance improvements, we got things like continuous integration testing and comprehensive telemetry. When I left Facebook in 2017, the jemalloc team carried on doing excellent development and maintenance work for several years, almost entirely without my involvement, under the leadership of my esteemed colleague, Qi Wang, and as evidenced by the commit history, with the excellent contributions of many others.

The nature of jemalloc development noticeably shifted around the time that Facebook rebranded itself as Meta. Facebook infrastructure engineering reduced investment in core technology, instead emphasizing return on investment. This is apparent in the jemalloc commit history. In particular, the seeds for principled huge page allocation (HPA) were sown way back in 2016! HPA work continued apace for several years, slowed, then stagnated as tweaks piled on top of each other without the requisite refactoring that keeps a codebase healthy. This feature trajectory recently cratered. The heartbreak for me is somewhat blunted since I have not been closely involved for years, but as a result of recent changes within Meta we no longer have anyone shepherding long-term jemalloc development with an eye toward general utility.

I don’t want to dwell on drama, but it is perhaps worth mentioning that we reached a sad end for jemalloc in the hands of Facebook/Meta even though most of the people involved were acting in good faith. Corporate cultures shift in compliance with both external and internal pressures. And people find themselves in impossible situations where the main choices are 1) make poor decisions under extreme pressure, 2) comply under extreme pressure, or 3) get routed around. As individuals we sometimes have enough influence to slow organizational degradation, maybe even contribute to isolated renaissances, but none of us can prevent the inevitable.

I remain very grateful to my former colleagues for all their excellent work on jemalloc, and Facebook/Meta in general for investing so much, for so long.

What now? As far as I am concerned, “upstream” jemalloc development has concluded. Meta’s needs stopped aligning well with those of external uses some time ago, and they are better off doing their own thing. Were I to reengage, the first step would be at least hundreds of hours of refactoring to pay off accrued technical debt. And I’m not sufficiently excited by what would come after to pay such a high upfront cost. Perhaps others will create viable forks, whether from the dev branch or from the 5.3.0 release (already three years old!).

In the above sections I mentioned several phase-specific failures, but there were some generic failures that surprised me despite a career focused on open source development.

* As mentioned, removing Valgrind caused some bad sentiment. But the root of the problem is lack of awareness about external uses and needs. I probably would have worked with others to preserve Valgrind support if I’d known that it mattered to anyone. As another example, I was completely unaware of jemalloc’s use as the Android memory allocator for perhaps two years. And years later, unaware of its replacement until after the fact.

* Even though jemalloc development remained completely out in the open (not siloed inside Facebook), the project never grew to retain primary contributors from other organizations. The Mozilla effort by Mike Hommey to move Firefox to the upstream jemalloc was a near miss. Efforts by others to transition to a CMake-based build system stalled multiple times, and never crossed the finish line. I knew from hard experience with Darwin that internally siloed open source projects cannot thrive (HHVM was a repeat lesson), but jemalloc needed more than open development to thrive as an independent project.

jemalloc was an odd diversion for me, since I have been a strong proponent of garbage collection over manual memory management for over 25 years. Personally I’m happy to be working again on garbage-collected systems, but jemalloc was a tremendously fulfilling project. Thank you to everyone who made this project so worthwhile: collaborators, supporters, and users alike.

...

Read the original on jasone.github.io »

6 691 shares, 50 trendiness

Working on databases from prison

I’m very excited to announce that I have recently joined Turso as a software engineer. For many in the field, including myself, getting to work on databases and solve unique challenges with such a talented team would be a dream job, but it is that much more special to me because of my unusual and unlikely circumstances. As difficult as it might be to believe, I am currently incarcerated and I landed this job from my cell in state prison. If you don’t know me, let me tell you more about how I got here.

Nearly two years have passed since I published “How I got here” to my blog. That post was my first real contact with the outside world in years, as I’d been off all social media and the internet since 2017. The response and support I would receive from the tech community caught me completely off guard.

A brief summary is that I’m currently serving prison time for poor decisions and lifestyle choices I made in my twenties, all related to drugs. Three years ago, I enrolled in a prison college program that came with the unique opportunity to access a computer with limited internet access. This immediately reignited a teenage love for programming, and a lightbulb lit up: this would be my way out of the mess I had gotten myself into over the past 15 years. I quickly outgrew the curriculum, preferring instead to spend ~15+ hours a day on projects and open source contributions.

Through fortunate timing and lots of hard work, I was selected to be one of the first participants in the Maine Dept of Correction’s remote work program, where residents who meet certain requirements are allowed to seek out remote employment opportunities. I landed a software engineering job at a startup called Unlocked Labs building education solutions for incarcerated learners, while contributing to open source on the side. After just a year, I was leading their development team.

Last December I was between side-projects and browsing Hacker News when I discovered Project Limbo, an effort by Turso to rewrite SQLite from scratch. I’d never worked on relational databases, but some experience with a cache had recently sparked an interest in storage engines. Luckily for me, I saw that the project was fairly young, with plenty of low-hanging fruit to cut my teeth on.

Putting this entirely into perspective may be difficult for some of you, but in prison there isn’t exactly a whole lot to do, and programming absolutely consumes my life. I either write code or manage Kubernetes clusters or other infrastructure for about 90 hours a week, and my only entertainment is a daily hour of tech/programming YouTube, mostly consisting of The Primeagen, whose story was a huge inspiration to me early on.

Through Prime, I had known about Turso since the beginning and had watched several interviews with Glauber and Pekka discussing their Linux kernel backgrounds and talking about the concept of distributed, multi-tenant SQLite. These were folks I’d looked up to for years, and I definitely could not have imagined that I would eventually be in any position to be contributing meaningfully to such an ambitious project of theirs. So needless to say, for those first PRs, just the thought of a kernel maintainer reviewing my code had made me quite nervous.

Helping build Limbo quickly became my new obsession. I split my time between my job and diving deep into SQLite source code, academic papers on database internals, and Andy Pavlo’s CMU lectures. I was active on the Turso Discord, but I don’t think I considered whether anyone was aware that one of the top contributors was doing so from a prison cell. My story and information are linked on my GitHub, but it’s subtle enough that you could miss it if you didn’t read the whole profile. A couple months later, I got a Discord message from Glauber introducing himself and asking if we could meet.

In January, Glauber’s tweet about our interaction caught the attention of The Primeagen, and he ended up reading my blog post on his stream, bringing a whole lot of new attention to it.

To this day I receive semi-regular emails from developers, college kids, or others who have gone through addiction or similar circumstances, or who just want advice on how to best start contributing to open source or optimize their learning path.

I’m incredibly proud to be an example to others of how far hard work, determination and discipline will get you, and will be forever grateful for the opportunities given to me by the Maine Dept of Corrections to even be able to work hard in the first place, and to Unlocked Labs for giving me a chance and hiring me at a time when most assuredly no-one else would.

I’m also incredibly proud to announce that I am now working for Turso full time, something I would never have dreamed would be possible just a few years ago. I’m very excited to be a part of the team and to get to help build the modern evolution of SQLite.

Although some recent bad news from the court means that I won’t be coming home as early as my family and I had hoped, my only choice is to view this as a blessing: for the next 10 months, I will instead be able to continue dedicating time and focus to advancing my career at a level that just wouldn’t be possible otherwise.

Thank you to everyone who has taken the time to reach out over the past couple years, to my team at Unlocked Labs, and especially my parents. Thanks to Turso for the opportunity and to all the other companies with fair chance hiring policies who believe that people deserve a second chance. This journey has been totally surreal and every day I am still in awe of how far my life has come from the life I lived even just a few years ago.

...

Read the original on turso.tech »

7 671 shares, 24 trendiness

US-backed Israeli company's spyware used to target European journalists, Citizen Lab finds

ROME (AP) — Spyware from a U.S.-backed Israeli company was used to target the phones of at least three prominent journalists in Europe, two of whom are editors at an investigative news site in Italy, according to digital researchers at Citizen Lab, citing new forensic evidence of the attacks.

The findings come amid growing questions about what role the government of Italian Prime Minister Giorgia Meloni may have played in spying on journalists and civil society activists critical of her leadership, and raise new concerns about the potential for abuse of commercial spyware, even in democratic countries.

“Any attempts to illegally access data of citizens, including journalists and political opponents, is unacceptable, if confirmed,” the European Union’s executive branch said in a statement Wednesday in response to questions from members of parliament. “The European Commission will use all the tools at its disposal to ensure the effective application of EU law.”

Meloni’s office declined to comment Thursday, but a prominent member of her Cabinet has said that Italy “rigorously respected” the law and that the government hadn’t illegally spied on journalists.

The company behind the hacks, Paragon Solutions, has sought to position itself as a virtuous player in the mercenary spyware industry and won U.S. government contracts, The Associated Press found.

Backed by former Israeli Prime Minister Ehud Barak, Paragon was reportedly acquired by AE Industrial Partners, a private investment firm based in Florida, in a December deal worth at least $500 million, pending regulatory approvals. AE Industrial Partners didn’t directly respond to requests for comment on the deal.

Paragon’s spyware, Graphite, was used to target around 90 WhatsApp users from more than two dozen countries, primarily in Europe, Meta said in January. Since then, there’s been a scramble to figure out who was hacked and who was responsible.

“We’ve seen first-hand how commercial spyware can be weaponized to target journalists and civil society, and these companies must be held accountable,” a spokesperson for WhatsApp told AP in an email. “WhatsApp will continue to protect people’s ability to communicate privately.” Meta said the vulnerability has been patched and they have not detected subsequent attacks. Meta also sent a cease-and-desist letter to Paragon. Last month, a California court awarded Meta $168 million in damages from Israel’s NSO Group, whose spyware was used to hack 1,400 WhatsApp accounts, including of journalists, activists and government officials.

“It is unacceptable in a democratic country that journalists are spied on without knowing the reason. We do not know how many there are and if there are others,” Vittorio di Trapani, president of the Italian journalists’ union FNSI, told the AP. “The EU should intervene. The democracy of a founding country of the union and therefore of the whole of Europe is at stake.”

The Citizen Lab’s findings, released today, show that the use of spyware against journalists has continued, despite the backlash against NSO Group, and establish for the first time that Paragon was able to successfully infect Apple devices.

Ciro Pellegrino, who heads the Naples newsroom of an investigative news outlet called Fanpage.it, received a notice on April 29 that his iPhone had been targeted.

Last year, Fanpage secretly infiltrated the youth wing of Meloni’s Brothers of Italy party and filmed some of them making fascist and racist remarks. Pellegrino’s colleague, Fanpage editor-in-chief Francesco Cancellato, also received a notice from Meta that his Android device had been targeted by Paragon spyware, though forensic evidence that his phone was actually infected with Graphite hasn’t yet surfaced, according to Citizen Lab.

The Citizen Lab’s report today also revealed a third case, of a “prominent European journalist,” who asked to remain anonymous, but is connected to the Italian cluster by forensic evidence unearthed by researchers at the laboratory, which is run out of the Munk School at the University of Toronto. The Citizen Lab, which has analyzed all the devices, said the attack came via iMessage, and that Apple has patched the vulnerability. Apple did not respond immediately to requests for comment.

Paragon is now mired in ex­actly the kind of abuse scan­dal that NSO Group is no­to­ri­ous for,” said John Scott-Railton, a se­nior re­searcher at the Citizen Lab. This shows the in­dus­try and its way of do­ing busi­ness is the prob­lem. It’s not just a few bad ap­ples.”

Paragon’s spy­ware is es­pe­cially stealthy be­cause it can com­pro­mise a de­vice with­out any ac­tion from the user. Similar to the NSO Group’s no­to­ri­ous Pegasus spy­ware, which has been black­listed by the U. S. gov­ern­ment, Graphite al­lows the op­er­a­tor to covertly ac­cess ap­pli­ca­tions, in­clud­ing en­crypted mes­sen­gers like Signal and WhatsApp.

“There’s no link to click, attachment to download, file to open or mistake to make,” Scott-Railton said. “One moment the phone is yours, and the next minute its data is streaming to an attacker.”

COPASIR, the par­lia­men­tary com­mit­tee over­see­ing the Italian se­cret ser­vices, took the rare step last week of mak­ing pub­lic the re­sults of its in­ves­ti­ga­tion into the gov­ern­men­t’s use of Paragon. The COPASIR re­port said that Italian in­tel­li­gence ser­vices had­n’t spied on Cancellato, the ed­i­tor of Fanpage.

The re­port did con­firm the sur­veil­lance, with tools in­clud­ing Graphite, of civil so­ci­ety ac­tivists, but said they had been tar­geted legally and with gov­ern­ment au­tho­riza­tion — not as ac­tivists but over their work re­lated to ir­reg­u­lar im­mi­gra­tion and na­tional se­cu­rity.

Giovanni Donzelli, vice president of COPASIR and a prominent member of Meloni’s Brothers of Italy party, declined further comment Thursday, saying the parliamentary report was “more relevant than an analysis done by a privately funded Canadian laboratory.”

The Citizen Lab says it’s “rigorously independent” and doesn’t accept research funding from governments or companies.

Italy and Paragon both say they’ve ter­mi­nated their re­la­tion­ship, but of­fer starkly dif­fer­ent ver­sions of the breakup.

Paragon re­ferred ques­tions to a state­ment it gave to Israeli news­pa­per Haaretz, in which the com­pany said that it stopped pro­vid­ing spy­ware to Italy af­ter the gov­ern­ment de­clined its of­fer to help in­ves­ti­gate Cancellato’s case. Italian au­thor­i­ties, how­ever, said they had re­jected Paragon’s of­fer over na­tional se­cu­rity con­cerns and ended the re­la­tion­ship fol­low­ing me­dia out­cry.

Paragon has been keen to deflect reputational damage that could, in theory, impact its contracts with the U.S. government.

A 2023 executive order, which so far hasn’t been overturned by U.S. President Donald Trump, prohibits federal government departments and agencies from acquiring commercial spyware that has been misused by foreign governments, including to limit freedom of expression and political dissent.

The U.S. Department of Homeland Security awarded Paragon a one-year, $2 million contract last September for operations and support of U.S. Immigration and Customs Enforcement, public records show.

The U.S. Drug Enforcement Administration has also reportedly used the spyware. In December 2022, Adam Schiff, the California Democrat who at the time chaired the House Intelligence Committee, wrote to the administrator of the DEA questioning whether the agency’s use of Graphite spyware “undermined efforts to deter the broad proliferation of powerful surveillance capabilities to autocratic regimes and others who may misuse them.”

Byron Tau in Washington, and Lorne Cook in Brussels, con­tributed to this re­port.

...

Read the original on apnews.com »

8 650 shares, 24 trendiness

What if the Big Bang wasn’t the beginning? Our research suggests it may have taken place inside a black hole

The Big Bang is of­ten de­scribed as the ex­plo­sive birth of the uni­verse — a sin­gu­lar mo­ment when space, time and mat­ter sprang into ex­is­tence. But what if this was not the be­gin­ning at all? What if our uni­verse emerged from some­thing else — some­thing more fa­mil­iar and rad­i­cal at the same time?

In a new pa­per, pub­lished in Physical Review D, my col­leagues and I pro­pose a strik­ing al­ter­na­tive. Our cal­cu­la­tions sug­gest the Big Bang was not the start of every­thing, but rather the out­come of a grav­i­ta­tional crunch or col­lapse that formed a very mas­sive black hole — fol­lowed by a bounce in­side it.

This idea, which we call the black hole uni­verse, of­fers a rad­i­cally dif­fer­ent view of cos­mic ori­gins, yet it is grounded en­tirely in known physics and ob­ser­va­tions.

Today’s stan­dard cos­mo­log­i­cal model, based on the Big Bang and cos­mic in­fla­tion (the idea that the early uni­verse rapidly blew up in size), has been re­mark­ably suc­cess­ful in ex­plain­ing the struc­ture and evo­lu­tion of the uni­verse. But it comes at a price: it leaves some of the most fun­da­men­tal ques­tions unan­swered.

For one, the Big Bang model be­gins with a sin­gu­lar­ity — a point of in­fi­nite den­sity where the laws of physics break down. This is not just a tech­ni­cal glitch; it’s a deep the­o­ret­i­cal prob­lem that sug­gests we don’t re­ally un­der­stand the be­gin­ning at all.

To explain the universe’s large-scale structure, physicists introduced a brief phase of rapid expansion into the early universe called cosmic inflation, powered by an unknown field with strange properties. Later, to explain the accelerating expansion observed today, they added another “mysterious” component: dark energy.

In short, the stan­dard model of cos­mol­ogy works well — but only by in­tro­duc­ing new in­gre­di­ents we have never ob­served di­rectly. Meanwhile, the most ba­sic ques­tions re­main open: where did every­thing come from? Why did it be­gin this way? And why is the uni­verse so flat, smooth, and large?

Our new model tack­les these ques­tions from a dif­fer­ent an­gle — by look­ing in­ward in­stead of out­ward. Instead of start­ing with an ex­pand­ing uni­verse and try­ing to trace back how it be­gan, we con­sider what hap­pens when an overly dense col­lec­tion of mat­ter col­lapses un­der grav­ity.

This is a fa­mil­iar process: stars col­lapse into black holes, which are among the most well-un­der­stood ob­jects in physics. But what hap­pens in­side a black hole, be­yond the event hori­zon from which noth­ing can es­cape, re­mains a mys­tery.

In 1965, the British physi­cist Roger Penrose proved that un­der very gen­eral con­di­tions, grav­i­ta­tional col­lapse must lead to a sin­gu­lar­ity. This re­sult, ex­tended by the late British physi­cist Stephen Hawking and oth­ers, un­der­pins the idea that sin­gu­lar­i­ties — like the one at the Big Bang — are un­avoid­able.

The idea helped win Penrose a share of the 2020 Nobel prize in physics and inspired Hawking’s global bestseller A Brief History of Time: From the Big Bang to Black Holes. But there’s a caveat. These “singularity theorems” rely on “classical physics,” which describes ordinary macroscopic objects. If we include the effects of quantum mechanics, which rules the tiny microcosmos of atoms and particles, as we must at extreme densities, the story may change.

In our new pa­per, we show that grav­i­ta­tional col­lapse does not have to end in a sin­gu­lar­ity. We find an ex­act an­a­lyt­i­cal so­lu­tion — a math­e­mat­i­cal re­sult with no ap­prox­i­ma­tions. Our maths show that as we ap­proach the po­ten­tial sin­gu­lar­ity, the size of the uni­verse changes as a (hyperbolic) func­tion of cos­mic time.

This sim­ple math­e­mat­i­cal so­lu­tion de­scribes how a col­laps­ing cloud of mat­ter can reach a high-den­sity state and then bounce, re­bound­ing out­ward into a new ex­pand­ing phase.
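
To make the hyperbolic behaviour concrete, here is a minimal toy version of a bouncing scale factor (an illustration under our own simplifying assumptions, not the exact solution from the paper):

$$a(t) = a_{\min}\cosh\!\left(\frac{t}{t_0}\right)$$

where $a_{\min}$ is the smallest size reached at the bounce and $t_0$ sets the timescale. The universe contracts for $t < 0$, halts at the finite size $a_{\min}$ at $t = 0$, and re-expands for $t > 0$; because $\cosh$ never reaches zero, the density stays finite and no singularity ever forms.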

But how come Penrose’s theorems don’t rule out such outcomes? It’s all down to a rule called the quantum exclusion principle, which states that no two identical particles known as fermions can occupy the same quantum state (such as angular momentum, or “spin”).

And we show that this rule pre­vents the par­ti­cles in the col­laps­ing mat­ter from be­ing squeezed in­def­i­nitely. As a re­sult, the col­lapse halts and re­verses. The bounce is not only pos­si­ble — it’s in­evitable un­der the right con­di­tions.
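
For a rough sense of why the squeezing must halt, consider the standard textbook estimate (our illustration, not a calculation from the paper): a non-relativistic gas of fermions of mass $m$ and number density $n$ develops a degeneracy pressure of order

$$P \sim \frac{\hbar^2}{m}\, n^{5/3}.$$

As a cloud of radius $R$ contracts, its total degeneracy energy grows like $1/R^2$ while its gravitational binding energy grows only like $1/R$, so at sufficiently small $R$ the quantum pressure dominates and the collapse can reverse.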

Crucially, this bounce oc­curs en­tirely within the frame­work of gen­eral rel­a­tiv­ity, which ap­plies on large scales such as stars and galax­ies, com­bined with the ba­sic prin­ci­ples of quan­tum me­chan­ics — no ex­otic fields, ex­tra di­men­sions or spec­u­la­tive physics re­quired.

What emerges on the other side of the bounce is a universe remarkably like our own. Even more surprisingly, the rebound naturally produces the two separate phases of accelerated expansion — inflation and dark energy — driven not by hypothetical fields but by the physics of the bounce itself.

One of the strengths of this model is that it makes testable pre­dic­tions. It pre­dicts a small but non-zero amount of pos­i­tive spa­tial cur­va­ture — mean­ing the uni­verse is not ex­actly flat, but slightly curved, like the sur­face of the Earth.

This is sim­ply a relic of the ini­tial small over-den­sity that trig­gered the col­lapse. If fu­ture ob­ser­va­tions, such as the on­go­ing Euclid mis­sion, con­firm a small pos­i­tive cur­va­ture, it would be a strong hint that our uni­verse did in­deed emerge from such a bounce. It also makes pre­dic­tions about the cur­rent uni­verse’s rate of ex­pan­sion, some­thing that has al­ready been ver­i­fied.
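
In the standard notation (a convention we add for clarity, not a figure from the paper), positive spatial curvature means the total density parameter satisfies $\Omega_{\mathrm{tot}} > 1$, equivalently the curvature parameter $\Omega_k = 1 - \Omega_{\mathrm{tot}}$ is slightly negative; it is this $\Omega_k$ that surveys like Euclid constrain.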

This model does more than fix tech­ni­cal prob­lems with stan­dard cos­mol­ogy. It could also shed new light on other deep mys­ter­ies in our un­der­stand­ing of the early uni­verse — such as the ori­gin of su­per­mas­sive black holes, the na­ture of dark mat­ter, or the hi­er­ar­chi­cal for­ma­tion and evo­lu­tion of galax­ies.

These questions will be explored by future space missions such as Arrakhis, which will study diffuse features that are difficult to detect with traditional telescopes from Earth, such as stellar halos (spherical structures of stars and globular clusters surrounding galaxies) and satellite galaxies (smaller galaxies that orbit larger ones), and will help us understand dark matter and galaxy evolution.

These phe­nom­ena might also be linked to relic com­pact ob­jects — such as black holes — that formed dur­ing the col­laps­ing phase and sur­vived the bounce.

The black hole uni­verse also of­fers a new per­spec­tive on our place in the cos­mos. In this frame­work, our en­tire ob­serv­able uni­verse lies in­side the in­te­rior of a black hole formed in some larger parent” uni­verse.

We are not special, any more than Earth was in the geocentric worldview that led Galileo (the astronomer who argued in the 16th and 17th centuries that the Earth revolves around the Sun) to be placed under house arrest.

We are not wit­ness­ing the birth of every­thing from noth­ing, but rather the con­tin­u­a­tion of a cos­mic cy­cle — one shaped by grav­ity, quan­tum me­chan­ics, and the deep in­ter­con­nec­tions be­tween them.

...

Read the original on www.port.ac.uk »

9 625 shares, 21 trendiness

I Convinced HP's Board to Buy Palm for $1.2B. Then I Watched Them Kill It in 49 Days

I’ve never shared this story publicly before—how I convinced HP’s board to acquire Palm for $1.2 billion, then watched as they destroyed it while I was confined to bed recovering from surgery.

This is­n’t just an­other tech fail­ure analy­sis. I was the HP Chief Technology Officer who led the tech­ni­cal due dili­gence on Palm. I pre­sented to Mark Hurd and the HP board, mak­ing the case for mov­ing for­ward with the ac­qui­si­tion. I be­lieved we were buy­ing the fu­ture of mo­bile com­put­ing.

Then I watched it all fall apart from the worst possible vantage point—lying in bed during an eight-week recovery, helpless to intervene as everything I’d worked to build got dismantled in real time.

This is the story of how smart peo­ple de­stroyed $1.2 bil­lion in in­no­va­tion value in just 49 days. It’s about the bru­tal per­sonal cost of be­ing blamed for a dis­as­ter that hap­pened while you’re re­cov­er­ing from surgery. And it’s about why I still be­lieve in HP de­spite every­thing that went wrong.

In early 2010, HP was des­per­ately seek­ing mo­bile plat­form ca­pa­bil­i­ties. We knew the com­put­ing world was shift­ing to­ward mo­bile, and our tra­di­tional PC busi­ness faced real threats from tablets and smart­phones. We needed to be there.

Palm was strug­gling fi­nan­cially, but they pos­sessed some­thing gen­uinely spe­cial in WebOS—true mul­ti­task­ing when iOS and Android could­n’t han­dle it, el­e­gant user in­ter­face de­sign, and break­through tech­nol­ogy ar­chi­tec­ture buried in­side a fail­ing busi­ness.

As CTO, I led the tech­ni­cal due dili­gence process. I spent weeks em­bed­ded with the Palm en­gi­neer­ing team in Sunnyvale, crawl­ing through their code base, un­der­stand­ing their plat­form ar­chi­tec­ture, and as­sess­ing the qual­ity of their tech­ni­cal tal­ent. The deeper I dug, the more con­vinced I be­came that this was­n’t just an­other mo­bile op­er­at­ing sys­tem.

My con­clu­sion was un­am­bigu­ous: WebOS rep­re­sented a break­through plat­form tech­nol­ogy that could dif­fer­en­ti­ate HP in the emerg­ing mo­bile com­put­ing mar­ket. The tech­nol­ogy was solid. The team was ex­cep­tional. The plat­form vi­sion was com­pelling.

I pre­sented this as­sess­ment to Mark Hurd and the board with com­plete con­vic­tion. This was­n’t about buy­ing a strug­gling phone com­pany—it was our strate­gic en­try into the fu­ture of com­put­ing plat­forms. I be­lieved every word of my pre­sen­ta­tion be­cause I had seen the tech­nol­o­gy’s po­ten­tial first­hand.

The board agreed with my rec­om­men­da­tion. In April 2010, we an­nounced the $1.2 bil­lion ac­qui­si­tion. I felt proud of the tech­ni­cal work we’d done and ex­cited about what we could build to­gether.

After the acquisition closed in July 2010, my role shifted to helping the Palm team leverage HP’s massive capabilities. We had global manufacturing scale, established supply chain relationships, and a consumer and enterprise customer base that Palm had never been able to access as an independent company.

I spent countless hours working with the Palm leadership team, mapping out integration plans and identifying strategic synergies. We discussed how WebOS could expand beyond smartphones into tablets, potentially integrate with HP’s PC platforms, and even find applications in our printer ecosystem.

Everything seemed aligned for suc­cess.

Then life in­ter­vened in the worst pos­si­ble way.

Everything seemed aligned for suc­cess un­til the first dis­as­ter struck. In August 2010—just one month af­ter we closed the Palm ac­qui­si­tion—Mark Hurd was forced to re­sign as CEO. The board re­placed him with Leo Apotheker, for­mer CEO of SAP, who brought a com­pletely dif­fer­ent strate­gic vi­sion to HP.

Apotheker’s plan was radical: transform HP from a hardware company into a software and services company, similar to IBM’s transformation years earlier. He wanted to exit or minimize HP’s hardware businesses—PCs, printers, and by extension, mobile devices like the TouchPad. In his mind, WebOS represented exactly the kind of hardware distraction he wanted to eliminate.

I as­sumed the strate­gic ra­tio­nale for the ac­qui­si­tion re­mained sound de­spite the lead­er­ship change. The tech­nol­ogy had­n’t changed. The mar­ket op­por­tu­nity was still there. But I was wrong about the con­ti­nu­ity of strate­gic vi­sion.

Then, in late June 2011, life intervened in the worst possible way. I faced a medical emergency requiring immediate surgery and an eight-week recovery period confined to bed. You don’t schedule medical emergencies—and I had to step away from my integration work with Palm just as the most critical decisions were being made about the platform’s future.

While I was re­cov­er­ing at home, un­able to par­tic­i­pate in meet­ings or pro­vide strate­gic in­put, the en­tire mo­bile com­put­ing land­scape at HP be­gan to un­ravel.

On July 1, 2011, HP launched the TouchPad tablet run­ning WebOS 3.0. I watched the launch from my bed, hop­ing to see the cul­mi­na­tion of all our tech­ni­cal work and strate­gic plan­ning. Instead, I wit­nessed the be­gin­ning of one of the fastest prod­uct fail­ures in tech his­tory.

The launch was botched from the start. HP priced the TouchPad at $499 to com­pete di­rectly with the iPad, but with­out the app ecosys­tem or mar­ket­ing mus­cle to jus­tify that pre­mium. The de­vice felt rushed to mar­ket, lack­ing the pol­ish that could have helped it com­pete. Consumer re­views were mixed at best.

Initial sales num­bers were dev­as­tat­ing: HP sold only 25,000 TouchPads out of 270,000 units shipped to re­tail­ers. While Apple was sell­ing 9 mil­lion iPads that same quar­ter, TouchPads were gath­er­ing dust on store shelves.

Then came the an­nounce­ment that changed every­thing.

On August 18, 2011—just 49 days af­ter the TouchPad launch—HP an­nounced it would dis­con­tinue all WebOS de­vices im­me­di­ately. I was still con­fined to bed, watch­ing my tech­ni­cal due dili­gence work and strate­gic rec­om­men­da­tions get dis­man­tled in real time through news re­ports and in­dus­try analy­sis.

Forty-nine days. That’s not enough time to fix launch prob­lems, build de­vel­oper mo­men­tum, or es­tab­lish re­tail part­ner­ships. Platform busi­nesses typ­i­cally need 18-24 months to demon­strate real mar­ket trac­tion. But Leo Apotheker was­n’t think­ing about plat­form time­lines—he was think­ing about his soft­ware trans­for­ma­tion strat­egy.

The most painful part was­n’t just the speed of the de­ci­sion, but learn­ing that Apotheker had made the dis­con­tin­u­a­tion choice with­out even in­form­ing the Palm team be­fore­hand. According to mul­ti­ple re­ports, he seemed ea­ger to kill off a prod­uct that did­n’t fit his vi­sion of HP as a soft­ware com­pany.

I felt help­less. Betrayed. And some­how, I was re­spon­si­ble for not be­ing there to fight for what I knew was break­through tech­nol­ogy.

My first day back at HP will be burned into my mem­ory for­ever. I was sim­ply try­ing to grab lunch in the cafe­te­ria at HP Labs when I found my­self sur­rounded by what felt like the en­tire tech­ni­cal staff. They weren’t there to wel­come me back—they were there to hold me ac­count­able.

The scene was intense and unambiguous. Engineers and researchers who had watched the WebOS disaster unfold were pointing fingers and raising voices. Their message was crystal clear and brutal: “You can never take leave again—EVER!”

Their exact words still echo in my mind: “The CEO and board need adult supervision.”

Think about the implications of that statement. HP’s own technical staff, the people closest to our innovation work, believed that senior leadership couldn’t be trusted to make sound technology decisions without someone there to provide oversight and guidance.

They weren’t wrong. The num­bers proved it in the most painful way pos­si­ble.

But hearing it directed at me personally—being blamed for not providing “adult supervision” while I was recovering from surgery—was devastating. I had recommended the acquisition based on solid technical analysis. I had worked to integrate the teams and technology. But because I wasn’t there during the critical 49 days when the decision was made to kill WebOS, somehow the failure became my responsibility.

Standing in that cafe­te­ria, lis­ten­ing to my col­leagues blame me for not be­ing there to pre­vent the dis­as­ter, I re­al­ized the fun­da­men­tal prob­lem was­n’t my ab­sence. It was a sys­tem­atic mis­match be­tween Leo Apotheker’s ex­pe­ri­ence and the role he was asked to fill.

SAP’s annual revenue while Leo served as its CEO was approximately $15 billion. The HP board hired a CEO whose largest organizational experience was running a company smaller than HP’s smallest division. Based purely on revenue management experience, Apotheker wouldn’t have qualified to be an Executive Vice President at HP, yet the board put him in charge of a $125 billion technology company.

This wasn’t just a cultural mismatch—it was a fundamental scale and complexity mismatch that should have been immediately obvious to any functioning board. But nobody asked the right questions about whether Leo’s enterprise software background prepared him to evaluate consumer platform technologies such as WebOS, and I wasn’t there to provide what my colleagues called “adult supervision.”

When I decided to “retire” from HP, they offered me a separation bonus—a significant financial package that would have made my transition easier. But there was a catch: accepting it would have restricted what I could say publicly about my experiences at the company.

I’ve spent my ca­reer be­liev­ing that the truth about in­no­va­tion de­ci­sions—both suc­cesses and fail­ures—should be shared so oth­ers can learn from them. Taking money to stay quiet about sys­tem­atic think­ing er­rors that de­stroyed break­through tech­nol­ogy felt like be­tray­ing every­thing I be­lieved about how in­no­va­tion should work.

The de­ci­sion cost me fi­nan­cially, but it pre­served my abil­ity to tell sto­ries like this one. Stories that might help other lead­ers avoid sim­i­lar dis­as­ters.

Disclaimer: The fol­low­ing re­flects my per­sonal in­vest­ment de­ci­sions and re­la­tion­ship with HP lead­er­ship. This is not fi­nan­cial ad­vice, and you should con­sult with a qual­i­fied fi­nan­cial ad­vi­sor be­fore mak­ing any in­vest­ment de­ci­sions.

Here’s what might sur­prise you: I haven’t sold a sin­gle HP share since leav­ing the com­pany. Despite watch­ing the WebOS dis­as­ter un­fold, de­spite be­ing blamed for not pre­vent­ing it, de­spite every­thing that went wrong dur­ing that pe­riod, I still be­lieve in HP as an or­ga­ni­za­tion.

I take every opportunity to remind Enrique Lores, HP’s current CEO, and Antonio Neri, CEO of HPE, about this fact. Both men were peers of mine when I was at HP. We worked closely together as part of the leadership team that made HP #1 in market share for consumer and commercial PCs & laptops, printers, and servers—helping drive HP to Fortune #11 during that period. They are natural leaders who understand HP’s culture and values from the inside, having come up through the organization rather than being parachuted in from outside.

Enrique and Antonio rep­re­sent what HP is at its best: tech­ni­cal ex­cel­lence com­bined with strate­gic think­ing, in­no­va­tion grounded in op­er­a­tional dis­ci­pline, and lead­er­ship that un­der­stands both the tech­nol­ogy and the busi­ness. They’re the kind of lead­ers who would have asked dif­fer­ent ques­tions about WebOS, who would have ap­plied bet­ter de­ci­sion frame­works to eval­u­ate plat­form tech­nol­ogy un­der un­cer­tainty.

My con­tin­ued share­hold­ing is­n’t just a mat­ter of fi­nan­cial con­fi­dence—it’s a state­ment of faith in what HP can be­come when the right lead­er­ship ap­plies sys­tem­atic think­ing to in­no­va­tion de­ci­sions.

The WebOS fail­ure taught me some­thing fun­da­men­tal about in­no­va­tion de­ci­sions that I had­n’t fully un­der­stood be­fore: in­tel­li­gence and good in­ten­tions don’t pre­dict de­ci­sion qual­ity. Systematic think­ing frame­works do.

Leo Apotheker was­n’t stu­pid. The HP board was­n’t in­com­pe­tent. The fi­nan­cial an­a­lysts weren’t ma­li­cious. But they all used flawed think­ing frame­works to eval­u­ate break­through tech­nol­ogy un­der pres­sure, and those sys­tem­atic er­rors de­stroyed $1.2 bil­lion in in­no­va­tion value.

The think­ing er­rors were pre­dictable and pre­ventable:

Solving the wrong problem. Apotheker was asking “How do I transform HP into a software company?” when the right question was “How do we build sustainable competitive advantage in mobile computing platforms?” His entire strategic framework was about exiting hardware businesses, not building platform advantages.

Identity-driven decision making. His background at SAP shaped his entire approach to HP’s portfolio. He saw WebOS as a hardware distraction from his software transformation vision.

Tunnel vi­sion un­der pres­sure. During this same pe­riod, he be­came laser-fo­cused on ac­quir­ing Autonomy for $10.3 bil­lion—a soft­ware com­pany that fit his trans­for­ma­tion vi­sion per­fectly. Everything else, in­clud­ing break­through mo­bile tech­nol­ogy, felt like a dis­trac­tion from this soft­ware-fo­cused strat­egy. That Autonomy ac­qui­si­tion later re­quired more than an $8 bil­lion write-down, but at the time it con­sumed all of lead­er­ship’s at­ten­tion.

Timeline com­pres­sion un­der stress. Forty-nine days is­n’t enough time to fairly eval­u­ate plat­form tech­nol­ogy, but pres­sure to show de­ci­sive lead­er­ship com­pressed the de­ci­sion time­line ar­ti­fi­cially.

These er­rors weren’t unique to HP or to Apotheker. I’ve seen iden­ti­cal pat­terns across mul­ti­ple com­pa­nies and in­dus­tries. Brilliant en­gi­neers kill break­through pro­to­types be­cause they don’t fit cur­rent prod­uct roadmaps. Marketing teams re­ject game-chang­ing con­cepts be­cause they can’t ex­plain them us­ing ex­ist­ing frame­works. CEOs avoid plat­form in­vest­ments be­cause the path to prof­itabil­ity is­n’t im­me­di­ately clear.

Lying in bed dur­ing those eight weeks, watch­ing the WebOS dis­as­ter un­fold while be­ing pow­er­less to in­ter­vene, I first per­formed an au­topsy of how we got to such a bad de­ci­sion. Was there a frame­work that could have led to bet­ter de­ci­sions? That analy­sis even­tu­ally be­came a sys­tem­atic ap­proach to in­no­va­tion de­ci­sions that could pre­vent these pre­dictable er­rors.

The frame­work that emerged from this painful ex­pe­ri­ence is some­thing I call DECIDE—a sys­tem­atic think­ing process specif­i­cally de­signed for in­no­va­tion de­ci­sions un­der un­cer­tainty:

Define the real de­ci­sion (not just the ob­vi­ous ques­tion)

Examine your think­ing process for cog­ni­tive bi­ases

Challenge your as­sump­tions sys­tem­at­i­cally

Identify de­ci­sion traps spe­cific to in­no­va­tion con­texts

Design mul­ti­ple gen­uinely dif­fer­ent op­tions

Evaluate with ev­i­dence frame­works ap­pro­pri­ate for break­through tech­nol­ogy

This is­n’t the­o­ret­i­cal aca­d­e­mic stuff. It’s a prac­ti­cal frame­work born from watch­ing smart peo­ple make sys­tem­atic think­ing er­rors that de­stroyed break­through tech­nol­ogy value. It’s de­signed to pre­vent the spe­cific cog­ni­tive er­rors that killed WebOS and con­tinue to kill in­no­va­tion in­vest­ments across in­dus­tries.

Next Wednesday (6/11/2025), I’ll walk you through exactly how to apply the DECIDE framework to your current innovation decisions. I’ll show you the specific tools and questions that can help you avoid the thinking traps that consistently destroy breakthrough technology value.

Sometimes I wonder what would have happened if I hadn’t needed surgery during those critical weeks. Would I have been able to convince the leadership team to give WebOS more time? Could I have provided the “adult supervision” my colleagues said was missing? Would better thinking frameworks have changed the outcome?

I’ll never know for cer­tain. But I do know this: the tech­nol­ogy was sound, the mar­ket op­por­tu­nity was real, and the de­ci­sion to kill WebOS af­ter 49 days was based on sys­tem­atic think­ing er­rors that could have been pre­vented with bet­ter de­ci­sion frame­works.

But here’s the fi­nal piece of the story: Leo Apotheker was fired on September 22, 2011—just 35 days af­ter shut­ting down WebOS and eleven months af­ter tak­ing over as CEO. The board fi­nally rec­og­nized the sys­tem­atic think­ing er­rors that had de­stroyed bil­lions in value, but it was too late for WebOS.

The hu­man cost of these de­ci­sions goes be­yond stock prices and quar­terly re­ports. There are real peo­ple who be­lieved in break­through tech­nol­ogy, fought for in­no­va­tion, and had to watch it get de­stroyed by pre­ventable think­ing er­rors.

In my case, I an­nounced my re­tire­ment from HP on October 31, 2011 via my blog (Wired Magazine). My last day at HP was December 31, 2011.

WebOS tech­nol­ogy even­tu­ally found suc­cess when LG li­censed it for smart TV plat­forms. The core ar­chi­tec­ture and UI in­flu­enced every sub­se­quent mo­bile op­er­at­ing sys­tem. HP could have owned that plat­form in­no­va­tion and the ecosys­tem value it cre­ated.

What break­through tech­nol­ogy or in­no­va­tion op­por­tu­nity is your team eval­u­at­ing right now? Before you make any ir­re­versible choices, ask your­self: are you ap­ply­ing sys­tem­atic think­ing frame­works to this de­ci­sion, or are you re­ly­ing on in­tu­ition and con­ven­tional busi­ness met­rics that might mis­lead in in­no­va­tion con­texts?

Because here’s what I learned from watch­ing the WebOS dis­as­ter un­fold while con­fined to bed, help­less to in­ter­vene: when you have break­through tech­nol­ogy in your hands, the qual­ity of your de­ci­sion-mak­ing process mat­ters more than the qual­ity of your tech­nol­ogy.

Intelligence and good in­ten­tions aren’t enough. You need sys­tem­atic frame­works for think­ing clearly about in­no­va­tion de­ci­sions un­der un­cer­tainty.

Wednesday, I’ll show you ex­actly how to build and ap­ply those frame­works. The tools ex­ist to pre­vent these dis­as­ters. The ques­tion is whether you’ll use them be­fore it’s too late.

What in­no­va­tion de­ci­sion is keep­ing you up at night? Reply and let me know—I read every re­sponse and of­ten use reader ques­tions to shape fu­ture Studio Notes.

...

Read the original on philmckinney.substack.com »

10 608 shares, 22 trendiness

About 700 Marines being mobilized in response to LA protests

More than 700 Marines based out of the Marine Corps Air Ground Combat Center in California have been mobilized to respond to the protests in Los Angeles, and the troops will join the thousands of National Guard members who were activated by President Donald Trump over the weekend without the consent of California’s governor or LA’s mayor.

The deployment of the full Marine battalion marks a significant escalation in Trump’s use of the military as a show of force against protesters, but it is still unclear what their specific task will be once in LA, sources told CNN. Like the National Guard troops, they are prohibited from conducting law enforcement activity such as making arrests unless Trump invokes the Insurrection Act, which permits the president to use the military to end an insurrection or rebellion against federal power.

The Marines being activated are with 2nd Battalion, 7th Marines, 1st Marine Division, according to US Northern Command. The activation is “intended to provide Task Force 51 with adequate numbers of forces to provide continuous coverage of the area in support of the lead federal agency,” NORTHCOM said in a statement, referring to US Army North’s contingency command post.

One of the peo­ple fa­mil­iar with the Marine mo­bi­liza­tion said they will be aug­ment­ing the guard pres­ence on the ground in LA.

Approximately 1,700 National Guard mem­bers are now op­er­at­ing in the greater Los Angeles area, two days af­ter Trump’s Saturday mem­o­ran­dum de­ploy­ing 2,000 ser­vice mem­bers, ac­cord­ing to a state­ment from NORTHCOM. On Monday evening, the Pentagon an­nounced that Trump or­dered the de­ploy­ment of an ad­di­tional batch of 2,000 more National Guard mem­bers. It is un­clear when the rest of the ini­tial group, or the new troops an­nounced Monday, would ar­rive in Los Angeles.

The Marines are ex­pected to bol­ster some of the guard mem­bers who have been de­ployed to LA in the last two days, this per­son said.

And while the per­son fa­mil­iar stressed that the Marines were be­ing de­ployed only to aug­ment the forces al­ready there, the im­age of US Marines mo­bi­liz­ing in­side the United States will stand in con­trast to National Guardsmen who more rou­tinely re­spond to do­mes­tic is­sues. While some Marines have been as­sist­ing in bor­der se­cu­rity at the south­ern bor­der, one US of­fi­cial said Marines have not been mo­bi­lized within the US like they are in California now since the 1992 ri­ots in Los Angeles.

While the Marines’ tasks have not been spec­i­fied pub­licly, they could in­clude as­sign­ments like crowd con­trol or es­tab­lish­ing perime­ter se­cu­rity. Lawyers within the Defense Department are also still fi­nal­iz­ing lan­guage around the use-of-force guide­lines for the troops be­ing mo­bi­lized, but the per­son fa­mil­iar said it will likely mir­ror the mil­i­tary’s stand­ing rules of the use of force.

California Gov. Gavin Newsom described the involvement of Marines as “unwarranted” and “unprecedented.”

“The level of escalation is completely unwarranted, uncalled for, and unprecedented — mobilizing the best in class branch of the U.S. military against its own citizens,” Newsom said in a statement linking to a news story about the Marines mobilizing.

Newsom disputed the characterization as a “deployment,” which the governor described as different from mobilization. US Northern Command said in its statement, however, that the Marines will “seamlessly integrate” with National Guard forces “protecting federal personnel and federal property in the greater Los Angeles area.”

Los Angeles Police Chief Jim McDonnell called for “open and continuous lines of communication” between all agencies responding to protests in the city ahead of the deployment of US Marines.

McDonnell said in a statement that his agency and other partner agencies have experience dealing with large-scale demonstrations, and safety remains a top priority for them.

That communication will “prevent confusion, avoid escalation, and ensure a coordinated, lawful, and orderly response during this critical time,” McDonnell stressed.

This story and head­line have been up­dated with ad­di­tional de­vel­op­ments.

CNN’s Cindy Von Quednow and Danya Gainor contributed to this report.

...

Read the original on www.cnn.com »
