10 interesting stories served every morning and every evening.
When you’re ready for more performance, you can upgrade individual components instead of replacing your entire laptop. Install a new Mainboard for generational processor upgrades, add memory to handle heavier workloads, or expand your storage to increase capacity or enable dual booting. The Framework Marketplace makes it easy to find the compatible parts you need.
...
Read the original on frame.work »
Conway’s Law
Organizations design systems that mirror their own communication structure.
Premature Optimization
Premature optimization is the root of all evil.
Hyrum’s Law
With a sufficient number of API users, all observable behaviors of your system will be depended on by somebody.
The Boy Scout Rule
Leave the code better than you found it.
YAGNI (You Aren’t Gonna Need It)
Don’t add functionality until it is necessary.
Brooks’s Law
Adding manpower to a late software project makes it later.
Gall’s Law
A complex system that works is invariably found to have evolved from a simple system that worked.
The Law of Leaky Abstractions
All non-trivial abstractions, to some degree, are leaky.
Tesler’s Law (Conservation of Complexity)
Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated.
The CAP Theorem
A distributed system can guarantee only two of: consistency, availability, and partition tolerance.
The Second-System Effect
Small, successful systems tend to be followed by overengineered, bloated replacements.
The Fallacies of Distributed Computing
A set of eight false assumptions that new distributed system designers often make.
Zawinski’s Law
Every program attempts to expand until it can read mail.
Dunbar’s Number
There is a cognitive limit of about 150 stable relationships one person can maintain.
Price’s Law
The square root of the total number of participants does 50% of the work.
Putt’s Law
Those who understand technology don’t manage it, and those who manage it don’t understand it.
The Peter Principle
In a hierarchy, every employee tends to rise to their level of incompetence.
The Bus Factor
The minimum number of team members whose loss would put the project in serious trouble.
The Dilbert Principle
Companies tend to promote incompetent employees to management to limit the damage they can do.
Parkinson’s Law
Work expands to fill the time available for its completion.
The Ninety-Ninety Rule
The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.
Hofstadter’s Law
It always takes longer than you expect, even when you take into account Hofstadter’s Law.
Goodhart’s Law
When a measure becomes a target, it ceases to be a good measure.
Gilb’s Law
Anything you need to quantify can be measured in some way better than not measuring it.
Murphy’s Law
Anything that can go wrong will go wrong.
Postel’s Law (The Robustness Principle)
Be conservative in what you do, be liberal in what you accept from others.
Technical Debt
Technical Debt is everything that slows us down when developing software.
Linus’s Law
Given enough eyeballs, all bugs are shallow.
Kernighan’s Law
Debugging is twice as hard as writing the code in the first place.
The Test Pyramid
A project should have many fast unit tests, fewer integration tests, and only a small number of UI tests.
The Pesticide Paradox
Repeatedly running the same tests becomes less effective over time.
Lehman’s Laws of Software Evolution
Software that reflects the real world must evolve, and that evolution has predictable limits.
Sturgeon’s Law
90% of everything is crap.
Amdahl’s Law
The speedup from parallelization is limited by the fraction of work that cannot be parallelized.
Gustafson’s Law
It is possible to achieve significant speedup in parallel processing by increasing the problem size.
Metcalfe’s Law
The value of a network is proportional to the square of the number of users.
DRY (Don’t Repeat Yourself)
Every piece of knowledge must have a single, unambiguous, authoritative representation.
KISS (Keep It Simple, Stupid)
Designs and systems should be as simple as possible.
The SOLID Principles
Five main guidelines that enhance software design, making code more maintainable and scalable.
The Law of Demeter
An object should only interact with its immediate friends, not strangers.
The Principle of Least Astonishment
Software and interfaces should behave in a way that least surprises users and other developers.
The Dunning–Kruger Effect
The less you know about something, the more confident you tend to be.
Hanlon’s Razor
Never attribute to malice that which is adequately explained by stupidity or carelessness.
Occam’s Razor
The simplest explanation is often the most accurate one.
The Sunk Cost Fallacy
Sticking with a choice because you’ve invested time or energy in it, even when walking away helps you.
The Map Is Not the Territory
Our representations of reality are not the same as reality itself.
Confirmation Bias
A tendency to favor information that supports our existing beliefs or ideas.
Amara’s Law
We tend to overestimate the effect of a technology in the short run and underestimate the impact in the long run.
The Lindy Effect
The longer something has been in use, the more likely it is to continue being used.
First Principles Thinking
Breaking a complex problem into its most basic blocks and then building up from there.
Inversion
Solving a problem by considering the opposite outcome and working backward from it.
The Pareto Principle
80% of the problems result from 20% of the causes.
Cunningham’s Law
The best way to get the correct answer on the Internet is not to ask a question, it’s to post the wrong answer.
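Two of these laws, Amdahl’s and Gustafson’s, are concrete enough to sketch numerically. A minimal illustration (the 5% serial fraction and 1,024 processors are arbitrary example values, not measurements):

```python
def amdahl_speedup(serial_fraction: float, n: int) -> float:
    """Amdahl's Law: speedup with n processors when serial_fraction
    of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)


def gustafson_speedup(serial_fraction: float, n: int) -> float:
    """Gustafson's Law: scaled speedup when the parallel portion
    grows with the problem size."""
    return n - serial_fraction * (n - 1)


# With 5% unavoidably serial work, Amdahl caps the speedup near 1/0.05 = 20x
# no matter how many processors you add; Gustafson's scaled speedup keeps
# growing because the problem itself grows with the machine.
print(amdahl_speedup(0.05, 1024))     # ~19.6
print(gustafson_speedup(0.05, 1024))  # ~972.9
```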
...
Read the original on lawsofsoftwareengineering.com »
For workloads that need to run in the US, US-only inference is available at 1.1x pricing for input and output tokens. Learn more.
...
Read the original on claude.com »
It appears that Anthropic has removed Claude Code from its $20-a-month pro subscription based on its pricing page. Anyone able to confirm who has a $20 plan?
...
Read the original on bsky.app »
...
Read the original on m.economictimes.com »
Today we’re making the following changes to GitHub Copilot’s Individual plans to protect the experience for existing customers: pausing new sign-ups, tightening usage limits, and adjusting model availability. We know these changes are disruptive, and we want to be clear about why we’re making them and how they will affect you.
Agentic workflows have fundamentally changed Copilot’s compute demands. Long-running, parallelized sessions now regularly consume far more resources than the original plan structure was built to support. As Copilot’s agentic capabilities have expanded rapidly, agents are doing more work, and more customers are hitting usage limits designed to maintain service reliability. Without further action, service quality degrades for everyone.
We’ve heard your frustrations about usage limits and model availability, and we need to do a better job communicating the guardrails we are adding—here’s what’s changing and why.
New sign-ups for GitHub Copilot Pro, Pro+, and Student plans are paused. Pausing sign-ups allows us to serve existing customers more effectively.
We are tightening usage limits for individual plans. Pro+ plans offer more than 5X the limits of Pro. Users on the Pro plan who need higher limits can upgrade to Pro+. Usage limits are now displayed in VS Code and Copilot CLI to make it easier for you to avoid hitting these limits.
Opus models are no longer available in Pro plans. Opus 4.7 remains available in Pro+ plans. As we announced in our changelog, Opus 4.5 and Opus 4.6 will be removed from Pro+.
These changes are necessary to ensure we can serve existing customers with a predictable experience. If you hit unexpected limits or these changes just don’t work for you, you can cancel your Pro or Pro+ subscription and receive a refund for the time remaining on your current subscription by visiting your Billing settings before May 20.
GitHub Copilot has two usage limits today: session and weekly (7 day) limits. Both limits depend on two distinct factors—token consumption and the model’s multiplier.
The session limits exist primarily to ensure that the service is not overloaded during periods of peak usage. They’re set so most users shouldn’t be impacted. Over time, these limits will be adjusted to balance reliability and demand. If you do encounter a session limit, you must wait until the usage window resets to resume using Copilot.
Weekly limits represent a cap on the total number of tokens a user can consume during the week. We introduced weekly limits recently to control for parallelized, long-trajectory requests that often run for extended periods of time and result in prohibitively high costs.
The weekly limits for each plan are also set so that most users will not be impacted. If you hit a weekly limit and have premium requests remaining, you can continue to use Copilot with Auto model selection. Model choice will be reenabled when the weekly period resets. If you are a Pro user, you can upgrade to Pro+ to increase your weekly limits. Pro+ includes over 5X the limits of Pro.
Usage limits are separate from your premium request entitlements. Premium requests determine which models you can access and how many requests you can make. Usage limits, by contrast, are token-based guardrails that cap how many tokens you can consume within a given time window. You can have premium requests remaining and still hit a usage limit.
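The distinction between token-based usage limits and per-request premium requests can be sketched in a few lines. This is a toy model, not GitHub’s actual accounting: the weekly budget, model names, and multipliers below are invented placeholders.

```python
# Toy model of Copilot-style accounting. The budget, model names, and
# multipliers are invented placeholders, not GitHub's real numbers.
WEEKLY_TOKEN_BUDGET = 1_000_000
MODEL_MULTIPLIERS = {"fast": 0.5, "standard": 1.0, "large": 10.0}

class UsageTracker:
    def __init__(self) -> None:
        self.weighted_tokens = 0.0   # counts toward the weekly usage limit
        self.premium_requests = 0    # separate per-request entitlement counter

    def record(self, model: str, tokens: int) -> None:
        # A request's cost against the token budget scales with the model's
        # multiplier, so high-multiplier models exhaust the budget faster.
        self.weighted_tokens += tokens * MODEL_MULTIPLIERS[model]
        self.premium_requests += 1

    def weekly_limit_hit(self) -> bool:
        return self.weighted_tokens >= WEEKLY_TOKEN_BUDGET

tracker = UsageTracker()
tracker.record("large", 60_000)  # one long agentic session on a 10x model
tracker.record("fast", 40_000)
print(tracker.weighted_tokens)    # 620000.0
print(tracker.weekly_limit_hit()) # False
```

Because the two counters are independent, a user can have premium requests left over and still run into the token-based weekly limit.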
Starting today, VS Code and Copilot CLI both display your available usage when you’re approaching a limit. These changes are meant to help you avoid a surprise limit.
If you are approaching a limit, there are a few things you can do to help reduce the chances of hitting it:
Use a model with a smaller multiplier for simpler tasks. The larger the multiplier, the faster you will hit the limit.
Consider upgrading to Pro+ if you are on a Pro plan to raise your limit by over 5X.
Use plan mode (VS Code, Copilot CLI) to improve task efficiency. Plan mode also improves task success.
Reduce parallel workflows. Tools such as /fleet will result in higher token consumption and should be used sparingly if you are nearing your limits.
Why we’re doing this
We’ve seen usage intensify for all users as they realize the value of agents and subagents in tackling complex coding problems. These long-running, parallelized workflows can yield great value, but they have also challenged our infrastructure and pricing structure: it’s now common for a handful of requests to incur costs that exceed the plan price! These are our problems to solve. The actions we are taking today enable us to provide the best possible experience for existing users while we develop a more sustainable solution.
...
Read the original on github.blog »
Acetaminophen, ibuprofen, and what doctors probably want you to know.
Lots of people die after overdosing on acetaminophen (paracetamol, often sold as Tylenol or Panadol). In the U.S., it’s estimated to cause 56,000 emergency department visits, 2,600 hospitalizations, and 500 deaths per year. Acetaminophen has a scarily narrow therapeutic window. The instructions on the package say it’s okay to take up to four grams per day. If you take eight grams, your liver could fail and you could die.

Meanwhile, it seems to be really hard to kill yourself by overdosing on ibuprofen (Advil, Nurofen, Motrin, Brufen). In 2006, Wood et al. searched the medical literature and found 10 documented cases in history. Nine of those cases involved complicating factors, and in the 10th, a woman took the equivalent of more than 500 standard (200mg) pills.

So, for many years, if I needed a painkiller, I’d try to take ibuprofen rather than acetaminophen. My logic was that if eight grams of acetaminophen could kill my liver, then one gram was probably still hard on it. I’m fond of my liver and didn’t want to cause it any unnecessary inconvenience.

But guess what? My logic was wrong and what I was doing was stupid. I’m now convinced that for most people in most circumstances, acetaminophen is safer than ibuprofen, provided you use it as directed. I think most doctors agree with this. In fact, I think many doctors think it’s obvious. (Source: I asked some doctors; they said it was obvious.)

Should this have been obvious to me? I figured it out by obsessively researching how those drugs work and making up a story about metabolic pathways, blood flow, and amino acid reserves. It’s a good story, one that revealed that my logic stemmed from an egregious lack of respect for biology and that I’m a big dummy (always a favorite subject). But if the clearest road to some piece of knowledge runs through metabolic pathways, then I don’t think that knowledge counts as obvious. So how is a normal person meant to figure it out?
Why doesn’t the fact that acetaminophen is typically safer than ibuprofen appear on drug labels or government websites or WebMD? Are normal people supposed to figure it out, or has society decided that this is the kind of thing best left illegible? Note: You should not switch medications based on the uninformed ramblings of non-trustworthy pseudonymous internet people.
Ibuprofen inhibits the body’s production of the Cyclooxygenase (COX) enzyme. This in turn inhibits the formation of messenger molecules involved in inflammation, which leads to less physical inflammation and thus less pain. The same story is true for almost all over-the-counter painkillers, which is why they’re almost all considered “non-steroidal anti-inflammatory drugs,” or NSAIDs. This includes ibuprofen, aspirin, naproxen (Aleve), and a long list of related drugs. But it does not include acetaminophen.
Like ibuprofen, acetaminophen inhibits some COX enzymes. But it does so in a weird way that barely affects inflammation or messenger molecules, so it’s unclear if this matters for pain reduction. In the brain, acetaminophen is metabolized into a mysterious chemical called AM404. This activates the cannabinoid receptors and increases endocannabinoid signaling, which seems to reduce the subjective experience of pain. AM404 also activates the capsaicin receptor, which is associated with burning sensations that you’d normally expect to increase pain, but maybe some desensitization thing happens downstream? And maybe acetaminophen also interacts with serotonin or nitric oxide or does other stuff? How this all comes together to reduce pain is still something of a scientific mystery.

Aside: When trying to understand painkillers, it’s natural to focus on chemistry and molecular biology. But the unknown physical origins of consciousness are always nearby, looming ominously.
In an ideal world, the only thing ibuprofen would do is reduce inflammation in the part of your body that hurts. But that is not our world. When ibuprofen inhibits the COX enzymes, it does so throughout the body. And mostly, that is bad. For one, ibuprofen reduces production of mucus in the stomach. That might sound okay or even good. But stomach mucus is important. You need it to shield the lining of your stomach from your extremely acidic gastric juice.
Having less mucus can lead to gastrointestinal problems or even ulcers.

Ibuprofen also affects the heart. When ibuprofen inhibits the COX enzymes there, this in turn inhibits one chemical that prevents clotting and another that causes clotting. On balance, this seems to lead to more clotting, and an increased statistical risk of heart attacks. If you’re healthy, the risk of a heart attack from an occasional low dose of ibuprofen is probably zero. But if you have heart issues and take medium to large doses regularly for as little as a few days, this might be a serious concern.

Ibuprofen also affects the kidneys. If you’re stressed, or cold, or dehydrated, or take stimulants, your body will constrict your blood vessels. That squeezes your kidneys’ intake tube, depriving them of blood. Your kidneys don’t like that, so they release signaling molecules to locally re-dilate the blood vessels. Trouble is, when ibuprofen inhibits COX enzymes in the kidneys, it inhibits those signaling molecules. If everything is normal, that’s okay, because the kidneys wouldn’t try to use those molecules anyway. But if your body has clamped down on the blood vessels, then the kidneys don’t have the tool they use to keep blood flowing, meaning they don’t get as much blood as they want. This is bad.
There are many other less common side effects, including allergies, respiratory reactions in asthmatics, induced meningitis, and suppressed ovulation. If you take a lot of ibuprofen, this could hurt your liver. But the major concerns seem to be the stomach, the heart, and the kidneys.
Acetaminophen also inhibits some COX enzymes. But unlike ibuprofen, the effect is minimal outside the central nervous system. Thus, acetaminophen has little effect on stomach mucus, blood clots, or blood flow, and so presents almost none of the risks that ibuprofen does.

Even so, if you take too much acetaminophen at once, you could easily die. How does this happen? Well, when acetaminophen is metabolized by the liver, it’s mostly broken down into harmless stuff. But a small fraction (5-15%) is broken down by the P450 system into an extremely toxic chemical called NAPQI. Ordinarily this is fine; your body creates and neutralizes toxic stuff all the time. For example, if you drank 20 grams of formaldehyde, you’d likely die. But did you know that your body itself makes and processes ~50 grams of formaldehyde every day? When liver cells sense NAPQI, they immediately release glutathione, which binds to NAPQI and renders it harmless.

But there’s a problem. If you take too much acetaminophen at once, the pathways that break it down into harmless stuff get saturated, but the P450 system doesn’t get saturated. This means that not only is there more acetaminophen, but also that a much larger fraction of it is broken down into NAPQI. Soon your liver cells will run out of glutathione to neutralize it. Then, NAPQI will build up and bind to various proteins in the liver cells (especially in mitochondria), causing them to malfunction and/or commit suicide. This can cause total liver failure. So you should never take more than the recommended dose of acetaminophen.
If you do take too much, you should go to a hospital immediately. They will give you NAC, which will replenish your glutathione and neutralize the NAPQI. Your prospects are good as long as you get to the hospital within a few hours.
Acetaminophen has lots of other possible side effects, like skin issues and blood disorders. But these all seem to be quite rare.
What if you have liver issues?
The primary concern with acetaminophen is liver damage. So if you have liver disease, then surely you’d want to avoid acetaminophen and take ibuprofen instead, right? Nope. It’s the opposite. Liver disease shifts the balance of risk in favor of acetaminophen.

With liver disease, it’s hard for blood to flow into the liver, meaning that blood tends to pool in the abdomen. To counter this, blood vessels elsewhere in the body contract. This includes blood vessels around the kidneys. Remember the kidneys? Again, when blood vessels are constricted, the kidneys send out signaling molecules to locally re-dilate the blood vessels. But those signaling molecules are blocked by ibuprofen. So if you have liver disease, taking ibuprofen risks starving your kidneys of blood, just like if you were dehydrated.

Meanwhile, people with moderate liver disease are usually still able to process acetaminophen without issue, as long as it’s in smaller amounts. So doctors usually tell patients with liver disease to avoid ibuprofen and take acetaminophen instead, just with a maximum of two grams per day instead of four.
The main takeaway from all this is that the risks of both drugs emerge from the madhouse of complexity that is your body. Surely there are some situations where acetaminophen is more dangerous than ibuprofen? I tried to capture the most common situations in this table:
It’s actually fairly hard to find situations where ibuprofen is safer than acetaminophen. Possibly this is true if you’re hungover, but I would be very careful, because you tend to be dehydrated when hungover, raising the risk of kidney damage. (It’s probably optimal, from a health perspective, to avoid taking recreational drugs at doses that leave you physically ill the next day.) Aside from hangovers, the only situations I could find where ibuprofen might be safer than acetaminophen are if you’re taking certain anti-seizure or tuberculosis drugs or maybe if you have a certain enzyme deficiency (G6PDD).
What have we learned so far?

1. The body is really complicated!

2. The main risk of acetaminophen is liver damage from creating too much NAPQI. Taking too much at once can easily kill you. However, as long as you don’t take too much at once and your glutathione reserves aren’t depleted, your liver will maintain NAPQI levels at zero and it will be completely fine. And there are very few other risks.

3. Meanwhile, ibuprofen poses a risk of gastrointestinal issues, heart attacks, or kidney damage. The risk varies based on lots of factors, like whether you’ve eaten food, whether you’re dehydrated, your blood pressure, and your heart health.

4. Therefore, acetaminophen is probably safer, provided you never take too much.
I don’t want to be alarmist. If you’re healthy, the risk from taking an occasional dose of ibuprofen as directed is extremely low. Given that so many people find that ibuprofen is more effective for many kinds of pain, it’s totally reasonable to use it. I do so myself. Still, it seems to be the case that in the vast majority of situations, acetaminophen is safer. Personally, if I have pain, I first take acetaminophen, and then add ibuprofen if necessary. I’m pretty sure many experts think this is somewhere between “sensible” and “obvious.” But if acetaminophen is safer, then why don’t official sources tell you that?
I can get doctors to admit this off-the-record. I can find random comment threads with support from people who seem to know what they’re talking about. But why does this fact never appear on government websites or drug labels?
Let’s look at those drug labels
In the U.S., the Food and Drug Administration (FDA) creates a “drug facts” label for over-the-counter drugs. Here’s what that looks like for ibuprofen:
And here’s what it looks like for acetaminophen:
I feel dumb saying this, but when I saw those labels in the past, I thought of them as a bunch of random information thrown together for legal reasons. But after spending a lot of time trying to understand these drugs myself, I now realize that these labels are… really good?

Imagine you work at the FDA and it’s your job to write a safety label. You need to synthesize a vast and murky scientific landscape. Your label will be read by people with minimal scientific background who are likely currently in pain, and who could die if they take the drug in the wrong situation. If I were in that situation, I’d think about all the different situations in which taking one of these drugs could literally kill someone, and then — after a quick panic attack — I’d write a label that screamed, HEY, IF YOU ARE IN ANY OF THESE SITUATIONS, TAKING THIS DRUG COULD LITERALLY KILL YOU. Then I’d think about all the other situations where taking the drug might be okay depending on a set of complex science stuff and tell people in those situations to PLEASE TALK TO A DOCTOR FOR THE LOVE OF GOD because I DON’T KNOW IF YOU’VE HEARD BUT SCIENCE IS COMPLICATED. Everything else would be a minor concern.

From that perspective, these labels are a triumph. This isn’t random information — every word is a synthesis of a mountain of research, carefully optimized to save lives.
How did those drug labels come to be? If you want a taste for the FDA’s process, I encourage you to skim the 2002 Federal Register document in which the FDA proposed to update ibuprofen’s safety label and to formally classify it as Generally Recognized as Safe. It’s more than 21,000 words long and — I think — astonishingly good. It not only summarizes the entire medical literature on ibuprofen, it summarizes it well. Here is one representative bit:
Bradley et al. (Ref. 42) conducted a 4-week, double-blind, randomized trial in 184 subjects comparing the effectiveness and safety of the maximum approved OTC daily dose of 1,200 mg of ibuprofen (number of subjects (n) = 62) to that of a prescription dose of 2,400 mg/day (n = 61), and to 4,000 mg/day of acetaminophen (n = 59) for the treatment of osteoarthritis. While there were no significant differences in the number of side effects reported during this study, the study demonstrated a trend towards a dose dependent increase in minor GI adverse events (nausea and dyspepsia) associated with higher doses of ibuprofen (1,200 mg/day: 7/62 or 11.3 percent; versus 2,400 mg/day: 14/61 or 23 percent). In addition, two subjects treated with 2,400 mg/day of ibuprofen became positive for occult blood while participating in the study.
I spend a lot of time complaining about bad statistical writing. A lot. Probably too much. But I’m here to tell you, that paragraph is gorgeous. The writing is clear and penetrating. It contains all the important details, but no other details. Compared to the abstract of the original paper, the above is shorter and easier to understand yet simultaneously more informative. Five stars. The rest of the document is equally good, with clear and sensible explanations for various recommendations. For example, they discuss a proposal from the National Kidney Foundation for additional warning about risks to kidneys, explain why they think that proposal has merit, and then recommend a shorter version, which appears on every package of ibuprofen sold today. As far as I can tell, this level of quality is typical. For example, the FDA’s 2019 proposed rule on sunscreens is similarly masterful.
This leaves us with this constellation of facts:

1. Acetaminophen is, in general, safer than ibuprofen.

2. The FDA doesn’t tell you that. Neither do other respectable authorities.

So what’s happening here? Have the experts conspired to keep this knowledge secret? I don’t think so. Mostly, I think this comes down to two factors. First, the FDA doesn’t really have a mission of determining “in what circumstances is drug A safer than drug B?” Their goal is to take individual drugs and determine how people can use them safely. They seem to be quite good at this. Second, everyone is mortally afraid of giving “medical advice.” It varies by jurisdiction, but in general, giving “wellness advice” is OK, but if you give personalized advice, you risk going to prison. The more credible you are, the higher that risk is.
Stepping back, how should we think about this situation? The body is complicated. When experts give the public advice on drugs, they are trying to insulate us from that complexity. But there is no way to do that without making trade-offs. Society has implicitly chosen tradeoffs that mean certain “less important” facts are de-prioritized. It’s not obvious that this is the wrong choice. I feel foolish for not having more respect for the body’s complexity and for the difficulty of the task all the experts are trying to accomplish. This is not medical advice.
...
Read the original on asteriskmag.com »
TagTinker is a Flipper Zero app for educational research into infrared electronic shelf-label protocols and related display behavior on authorized test hardware.

It is intended only for protocol study, signal analysis, and controlled experiments on hardware you personally own or are explicitly authorized to test.

This repository does not authorize access to, modification of, or interference with any third-party deployment, commercial installation, or retail environment.
It is focused on:
This README intentionally avoids deployment-oriented instructions and excludes guidance for interacting with live commercial systems.
Where is the .fap release?
The Flipper app is source-first. Build the .fap yourself from this repository with ufbt so it matches your firmware and local toolchain.
What if it crashes or behaves oddly?
The maintainer primarily uses TagTinker on Momentum firmware with asset packs disabled and has not had issues in that setup. If you are using a different firmware branch, custom asset packs, or a heavily modified device setup, start by testing from a clean baseline.
What happens if I pull the battery out of the tag?
Many infrared ESL tags store their firmware, address, and display data in volatile RAM (not flash memory) to save cost and energy.
If you remove the battery or let it fully discharge, the tag will lose all programming and become unresponsive (“dead”). It usually cannot be recovered without the original base station.
I found a bug or want to contribute — how can I get in touch?
You can contact me on:
I’m currently traveling, so response times may be slower than usual. Feel free to open issues or Pull Requests anyway — contributions (bug fixes, improvements, documentation, etc.) are very welcome and will help keep the project alive while I’m away.
TagTinker is built around the study of infrared electronic shelf-label communication used by fixed-transmitter labeling systems.
* communication is based on addressed protocol frames containing command, parameter, and integrity fields
* display updates are carried as prepared payloads for supported monochrome graphics formats
* local tooling in this project helps researchers prepare assets and perform controlled experiments on authorized hardware
This project is intended to help researchers understand:
For the underlying reverse-engineering background and deeper protocol research, see:
TagTinker is limited to home-lab and authorized research use, including:
It is not a retail tool, operational tool, or field-use utility.
You are solely responsible for ensuring that any use of this software is lawful, authorized, and appropriate for your environment.
The maintainer does not authorize, approve, or participate in any unauthorized use of this project, and disclaims responsibility for misuse, damage, disruption, legal violations, or any consequences arising from such use.
If you do not own the hardware, or do not have explicit written permission to test it, do not use this project on it.
Any unauthorized use is outside the intended scope of this repository and is undertaken entirely at the user’s own risk.
This is an independent research project.
It is not affiliated with, endorsed by, authorized by, or sponsored by any electronic shelf-label vendor, retailer, infrastructure provider, or system operator.
Any references to external research, public documentation, or reverse-engineering work are included strictly for educational and research context.
This project is a port and adaptation of the excellent public reverse-engineering work by furrtek / PrecIR and related community research.
Licensed under the GNU General Public License v3.0 (GPL-3.0).
See the LICENSE file for details.
This software is provided “AS IS”, without warranty of any kind, express or implied.
In no event shall the authors or copyright holders be liable for any claim, damages, or other liability arising from the use of this software.
This repository is maintained as a narrowly scoped educational research project.
The maintainer does not authorize, encourage, condone, or accept responsibility for use against third-party devices, deployed commercial systems, retail infrastructure, or any environment where the user lacks explicit permission.
...
Read the original on github.com »
It’s the nature of business that the eulogy for a chief executive doesn’t happen when they die, but when they retire, or, in the case of Apple CEO Tim Cook, announce that they will step up to the role of Executive Chairman on September 1. The one morbid exception is when a CEO dies on the job — or quits because they are dying — and the truth of the matter is that that is where any honest recounting of Cook’s incredibly successful tenure as Apple CEO, particularly from a financial perspective, has to begin.
The numbers, to be clear, are extraordinary. Cook became CEO of Apple on August 24, 2011, and in the intervening 15 years revenue has increased 303%, profit 354%, and the value of Apple has gone from $297 billion to $4 trillion, a staggering 1,251% increase.
The reason for Cook’s accession in 2011 became clear a mere six weeks later, when Steve Jobs passed away from cancer on October 5, 2011. Jobs’ death isn’t the reason Cook was chosen — Cook had already served as interim CEO while Jobs underwent treatment in 2009 — but I think the timing played a major role in making Cook arguably the greatest non-founder CEO of all time.
Peter Thiel introduced the concept of Zero To One thusly:
When we think about the future, we hope for a future of progress. That progress can take one of two forms. Horizontal or extensive progress means copying things that work — going from 1 to n. Horizontal progress is easy to imagine because we already know what it looks like. Vertical or intensive progress means doing new things — going from 0 to 1. Vertical progress is harder to imagine because it requires doing something nobody else has ever done. If you take one typewriter and build 100, you have made horizontal progress. If you have a typewriter and build a word processor, you have made vertical progress.
Steve Jobs made 0 to 1 products, as he reminded the audience in the introduction to his most famous keynote:
Every once in a while, a revolutionary product comes along that changes everything. First of all, one’s very fortunate if one gets to work on one of these in your career. Apple’s been very fortunate: it’s been able to introduce a few of these into the world.
In 1984, we introduced the Macintosh. It didn’t just change Apple, it changed the whole computer industry. In 2001, we introduced the first iPod. It didn’t just change the way we all listen to music, it changed the entire music industry.
Well, today we’re introducing three revolutionary products of this class. The first one: a widescreen iPod with touch controls. The second: a revolutionary mobile phone. And the third is a breakthrough Internet communications device. Three things…are you getting it? These are not three separate devices. This is one device, and we are calling it iPhone.
Steve Jobs would, three years later, also introduce the iPad, which makes four distinct product categories if you’re counting. Perhaps the most important 0 to 1 product Jobs created, however, was Apple itself, which raises the question: what makes Apple Apple?
“What Makes Apple Apple” isn’t a new question; it was the central question of Apple University, the internal training program the company launched in 2008. Apple University was hailed on the outside as a Steve Jobs creation, but while I’m sure he green-lit the concept, it was clear to me as an intern on the Apple University team in 2010 that the program’s driving force was Tim Cook.
The core of the program, at least when I was there, was what became known as The Cook Doctrine:
We believe that we’re on the face of the Earth to make great products, and that’s not changing.
We believe in the simple, not the complex.
We believe that we need to own and control the primary technologies behind the products we make, and participate only in markets where we can make a significant contribution.
We believe in saying no to thousands of projects so that we can really focus on the few that are truly important and meaningful to us.
We believe in deep collaboration and cross-pollination of our groups, which allow us to innovate in a way that others cannot.
And frankly, we don’t settle for anything less than excellence in every group in the company, and we have the self-honesty to admit when we’re wrong and the courage to change.
And I think, regardless of who is in what job, those values are so embedded in this company that Apple will do extremely well.
Cook explained this on Apple’s January 2009 earnings call, during Jobs’ first leave of absence, in response to a question about how Apple would fare without its founder. It’s a brilliant statement, but it is — as the last paragraph makes clear — ultimately about maintaining, nurturing, and growing what Jobs built.
That is why I started this Article by highlighting the timing of Cook’s ascent to the CEO role. The challenge for CEOs following iconic founders is that the person who took the company from 0 to 1 usually sticks around for 2, 3, 4, etc.; by the time they step down the only way forward is often down. Jobs, however, by virtue of leaving the world too soon, left Apple only a few years after its most important 0 to 1 product ever, meaning it was Cook who was in charge of growing and expanding Apple’s most revolutionary device yet.
Cook, to be clear, managed this brilliantly. Under his watch the iPhone not only got better every year, but expanded its market to every carrier in basically every country, and expanded the line from one model in two colors to five models in a plethora of colors sold at the scale of hundreds of millions of units a year.
Cook was, without question, an operational genius. Moreover, this was clearly the case even before he grew the iPhone to unimaginable volumes. When Cook joined Apple in 1998 the company’s operations — centered on Apple’s own factories and warehouses — were a massive drag on the company; Cook methodically shut them down and shifted Apple’s manufacturing base to China, creating a just-in-time supply chain that, year after year, coordinated a worldwide network of suppliers to deliver Apple’s ever-expanding product line to customers’ doorsteps and a fleet of beautiful and brand-expanding stores. There was not, under Cook’s leadership, a single significant product issue or recall.
Cook also oversaw the introduction of major new products, most notably AirPods and Apple Watch; the “Wearables, Home, and Accessories” category delivered $35.4 billion in revenue last year, which would rank 128th on the Fortune 500. Still, both products are derivative of the iPhone; Cook’s signature 0 to 1 product, the Apple Vision Pro, is more of a 0.5.
Cook’s more momentous contribution to Apple’s top line was the elevation of Services. The Google search deal actually originated in 2002 with an agreement to make Google the default search service for Safari on the Mac, and was extended to the iPhone in 2007; Google’s motivation was to ensure that Apple never competed for its core business, and Cook was happy to take an ever-increasing amount of pure profit.
The App Store also predated Cook; Steve Jobs said during the App Store’s introduction that “we keep 30 [percent] to pay for running the App Store”, and called it “the best deal going to distribute applications to mobile platforms”. It’s important to note that, in 2008, this was true! The App Store really was a great deal.
Three years later, in a July 28, 2011 email — less than a month before Cook officially became CEO — Phil Schiller wondered if Apple should lower its take once they were making $1 billion a year in profit from the App Store. John Gruber, writing on Daring Fireball in 2021, wondered what might have been had Cook followed Schiller’s advice:
In my imagination, a world where Apple had used Phil Schiller’s memo above as a game plan for the App Store over the last decade is a better place for everyone today: developers for sure, but also users, and, yes, Apple itself. I’ve often said that Apple’s priorities are consistent: Apple’s own needs first, users’ second, developers’ third. Apple, for obvious reasons, does not like to talk about the Apple-first part of those priorities, but Cook made explicit during his testimony during the Epic trial that when user and developer needs conflict, Apple sides with users. (Hence App Tracking Transparency, for example.)
These priorities are as they should be. I’m not complaining about their order. But putting developer needs third doesn’t mean they should be neglected or overlooked. A large base of developers who are experts on developing and designing for Apple’s proprietary platforms is an incredible asset. Making those developers happy — happy enough to keep them wanting to work and focus on Apple’s platforms — is good for Apple itself.
I want to agree with Gruber — I was criticizing Apple’s App Store policies within weeks of starting Stratechery, years before it became a major issue — but from a shareholder perspective, i.e. Cook’s ultimate bosses, it’s hard to argue with Apple’s uncompromising approach. Last year Apple Services generated 26% of Apple’s revenue and 41% of the company’s profit; more importantly, Services continues to grow year-over-year, even as iPhone growth has slowed from the go-go years.
Another way to frame the Services question is to say that Gruber is concerned about the long-term importance of something that is somewhat ineffable — developer willingness and desire to support Apple’s platforms — which is, at least in Gruber’s mind, essential for Apple’s long-term health. Cook, in this critique, prioritized Apple’s financial results and shareholder returns over what was best for Apple in the long run.
This isn’t the only part of Apple’s business where this critique has validity. Cook’s greatest triumph was, as I noted above, completely overhauling and subsequently scaling Apple’s operations, which first and foremost meant developing a heavy dependence on China. This dependence was not inevitable: Patrick McGee explained in Apple In China, which I consider one of the all-time great books about the tech industry, how Apple made China into the manufacturing behemoth it became. McGee added in a Stratechery Interview:
Let me just refer back to something that you wrote I think a few months ago when you called the last 20, 25 years, like the golden age for companies like Apple and Silicon Valley focused on software and Chinese taking care of the hardware manufacturing. That is a perfect partnership, and if we were living in a simulation and it ended tomorrow, you’d give props for Apple to taking advantage of the situation better than anybody else.
The problem is we’re probably not living in the simulation and things go on, and I’ve got this rather disquieting conclusion where, look, Apple’s still really good probably, they’re not as good as they once were under Jony Ive, but they’re still good at industrial design and product design, but they don’t do any operations in our own country. That’s all dependent on China. You’ve called this in fact the biggest violation of the Tim Cook doctrine to own and control your destiny, but the Chinese aren’t just doing the operations anymore, they also have industrial design, product design, manufacturing design.
It really is ironic: Tim Cook built what is arguably Apple’s most important technology — its ability to build the world’s best personal computer products at astronomical scale — and did so in a way that leaves Apple more vulnerable than anyone to the deteriorating relationship between the United States and China. China was certainly good for the bottom line, but was it good for Apple’s long-run sustainability?
This same critique — of favoring a financially optimal strategy over long-term sustainability — may also one day be levied on the biggest question Cook leaves his successor: what impact will AI have on Apple? Apple has, to date, avoided spending hundreds of billions of dollars on the AI buildout, and there is one potential future where the company profits from AI by selling the devices everyone uses to access commoditized models; there is another future where AI becomes the means by which Apple’s 50 Years of Integration is finally disrupted by companies that actually invested in the technology of the future.
If Tim Cook’s timing was fortunate in terms of when in Apple’s lifecycle he took the reins, then I would call his timing in terms of when in Apple’s lifecycle he is stepping down as being prudent, both for his legacy and for Apple’s future.
Apple is, in terms of its traditional business model, in a better place than it has ever been. The iPhone line is fantastic, and selling at a record pace; the Mac, meanwhile, is poised to massively expand its market share as Apple Silicon — another Jobs initiative, appropriately invested in and nurtured by Cook — makes the Mac the computer of choice for both the high end (thanks to Apple Silicon’s performance and unified memory architecture) and the low end (the iPhone chip-based MacBook Neo significantly expands Apple’s addressable market). Meanwhile, the Services business continues to grow. Cook is stepping down after Apple’s best-ever quarter, a milestone that very much captures his tenure, for better and for worse.
At the same time, the AI question looms — and it suggests that Something Is Rotten in the State of Cupertino. The new Siri still hasn’t launched, and when it does, it will be with Google’s technology at the core. That was, as I wrote in an Update, a momentous decision for Apple’s future:
Apple’s plans are a bit like the alcoholic who admits that they have a drinking problem, but promises to limit their intake to social occasions. Namely, how exactly does Apple plan on replacing Gemini with its own models when (1) Google has more talent, (2) Google spends far more on infrastructure, and (3) Gemini will be continually increasing from the current level, where it is far ahead of Apple’s efforts? Moreover, there is now a new factor working against Apple: if this white-labeling effort works, then the bar for “good enough” will be much higher than it is currently. Will Apple, after all of the trouble they are going through to fix Siri, actually be willing to tear out a model that works so that they can once again roll their own solution, particularly when that solution hasn’t faced the market pressure of actually working, while Gemini has?
In short, I think Apple has made a good decision here for short term reasons, but I don’t think it’s a short-term decision: I strongly suspect that Apple, whether it has admitted it to itself or not, has just committed itself to depending on 3rd-parties for AI for the long run.
As I noted above and in that Update, this decision may work out; if it doesn’t, however, the sting will be felt long after Cook is gone. To that end, I certainly hope that John Ternus, the new CEO, was heavily involved in the decision; truthfully, he should have made it.
To that end, it’s right that Cook is stepping down now. Jobs might have been responsible for taking Apple from 0 to 1, but it was Cook who took Apple from 1 to $436 billion in revenue and $118 billion in profit last year. It’s a testament to his capabilities and execution that Apple didn’t suffer any sort of post-founder hangover; only time will tell if, along the way, Cook created the conditions for a crash out, by virtue of he himself forgetting The Cook Doctrine and what makes Apple Apple.
...
Read the original on stratechery.com »