10 interesting stories served every morning and every evening.
Organizations design systems that mirror their own communication structure.
Premature optimization is the root of all evil.
With a sufficient number of API users, all observable behaviors of your system will be depended on by somebody.
Leave the code better than you found it.
YAGNI (You Aren’t Gonna Need It)
Don’t add functionality until it is necessary.
Adding manpower to a late software project makes it later.
A complex system that works is invariably found to have evolved from a simple system that worked.
All non-trivial abstractions, to some degree, are leaky.
Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated.
A distributed system can guarantee only two of: consistency, availability, and partition tolerance.
Small, successful systems tend to be followed by overengineered, bloated replacements.
A set of eight false assumptions that new distributed system designers often make.
Every program attempts to expand until it can read mail.
There is a cognitive limit of about 150 stable relationships one person can maintain.
The square root of the total number of participants does 50% of the work.
Those who understand technology don’t manage it, and those who manage it don’t understand it.
In a hierarchy, every employee tends to rise to their level of incompetence.
The minimum number of team members whose loss would put the project in serious trouble.
Companies tend to promote incompetent employees to management to limit the damage they can do.
Work expands to fill the time available for its completion.
The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.
It always takes longer than you expect, even when you take into account Hofstadter’s Law.
When a measure becomes a target, it ceases to be a good measure.
Anything you need to quantify can be measured in some way that is superior to not measuring it at all.
Anything that can go wrong will go wrong.
Be conservative in what you do, be liberal in what you accept from others.
Technical Debt is everything that slows us down when developing software.
Given enough eyeballs, all bugs are shallow.
Debugging is twice as hard as writing the code in the first place.
A project should have many fast unit tests, fewer integration tests, and only a small number of UI tests.
Repeatedly running the same tests becomes less effective over time.
Software that reflects the real world must evolve, and that evolution has predictable limits.
90% of everything is crap.
The speedup from parallelization is limited by the fraction of work that cannot be parallelized.
It is possible to achieve significant speedup in parallel processing by increasing the problem size.
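Stated as formulas (with $s$ the fraction of the work that is inherently serial and $N$ the number of processors), the two laws above read:

$$S_{\text{Amdahl}}(N) = \frac{1}{s + (1 - s)/N} \le \frac{1}{s}, \qquad S_{\text{Gustafson}}(N) = s + (1 - s)N$$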
The value of a network is proportional to the square of the number of users.
Every piece of knowledge must have a single, unambiguous, authoritative representation.
Designs and systems should be as simple as possible.
Five main guidelines that enhance software design, making code more maintainable and scalable.
An object should only interact with its immediate friends, not strangers.
Software and interfaces should behave in a way that least surprises users and other developers.
The less you know about something, the more confident you tend to be.
Never attribute to malice that which is adequately explained by stupidity or carelessness.
The simplest explanation is often the most accurate one.
Sticking with a choice because you’ve invested time or energy in it, even when walking away would serve you better.
The Map Is Not the Territory
Our representations of reality are not the same as reality itself.
A tendency to favor information that supports our existing beliefs or ideas.
We tend to overestimate the effect of a technology in the short run and underestimate the impact in the long run.
The longer something has been in use, the more likely it is to continue being used.
Breaking a complex problem into its most basic building blocks and then building up from there.
Solving a problem by considering the opposite outcome and working backward from it.
80% of the problems result from 20% of the causes.
The best way to get the correct answer on the Internet is not to ask a question, it’s to post the wrong answer.
...
Read the original on lawsofsoftwareengineering.com »
When you’re ready for more performance, you can upgrade individual components instead of replacing your entire laptop. Install a new Mainboard for generational processor upgrades, add memory to handle heavier workloads, or expand your storage to increase capacity or enable dual booting. The Framework Marketplace makes it easy to find the compatible parts you need.
...
Read the original on frame.work »
It’s the nature of business that the eulogy for a chief executive doesn’t happen when they die, but when they retire, or, in the case of Apple CEO Tim Cook, announce that they will step up to the role of Executive Chairman on September 1. The one morbid exception is when a CEO dies on the job — or quits because they are dying — and the truth of the matter is that that is where any honest recounting of Cook’s incredibly successful tenure as Apple CEO, particularly from a financial perspective, has to begin.
The numbers, to be clear, are extraordinary. Cook became CEO of Apple on August 24, 2011, and in the intervening 15 years revenue has increased 303%, profit 354%, and the value of Apple has gone from $297 billion to $4 trillion, a staggering 1,251% increase.
The reason for Cook’s accession in 2011 became clear a mere six weeks later, when Steve Jobs passed away from cancer on October 5, 2011. Jobs’ death isn’t the reason Cook was chosen — Cook had already served as interim CEO while Jobs underwent treatment in 2009 — but I think the timing played a major role in making Cook arguably the greatest non-founder CEO of all time.
Peter Thiel introduced the concept of Zero To One thusly:
When we think about the future, we hope for a future of progress. That progress can take one of two forms. Horizontal or extensive progress means copying things that work — going from 1 to n. Horizontal progress is easy to imagine because we already know what it looks like. Vertical or intensive progress means doing new things — going from 0 to 1. Vertical progress is harder to imagine because it requires doing something nobody else has ever done. If you take one typewriter and build 100, you have made horizontal progress. If you have a typewriter and build a word processor, you have made vertical progress.
Steve Jobs made 0 to 1 products, as he reminded the audience in the introduction to his most famous keynote:
Every once in a while, a revolutionary product comes along that changes everything. First of all, one’s very fortunate if one gets to work on one of these in your career. Apple’s been very fortunate: it’s been able to introduce a few of these into the world.
In 1984, we introduced the Macintosh. It didn’t just change Apple, it changed the whole computer industry. In 2001, we introduced the first iPod. It didn’t just change the way we all listen to music, it changed the entire music industry.
Well, today we’re introducing three revolutionary products of this class. The first one: a widescreen iPod with touch controls. The second: a revolutionary mobile phone. And the third is a breakthrough Internet communications device. Three things…are you getting it? These are not three separate devices. This is one device, and we are calling it iPhone.
Steve Jobs would, three years later, also introduce the iPad, which makes four distinct product categories if you’re counting. Perhaps the most important 0 to 1 product Jobs created, however, was Apple itself, which raises the question: what makes Apple Apple?
“What Makes Apple Apple” isn’t a new question; it was the central question of Apple University, the internal training program the company launched in 2008. Apple University was hailed on the outside as a Steve Jobs creation, but while I’m sure he greenlit the concept, it was clear to me, as an intern on the Apple University team in 2010, that the program’s driving force was Tim Cook.
The core of the program, at least when I was there, was what became known as The Cook Doctrine:
We believe that we’re on the face of the Earth to make great products, and that’s not changing.
We believe in the simple, not the complex.
We believe that we need to own and control the primary technologies behind the products we make, and participate only in markets where we can make a significant contribution.
We believe in saying no to thousands of projects so that we can really focus on the few that are truly important and meaningful to us.
We believe in deep collaboration and cross-pollination of our groups, which allow us to innovate in a way that others cannot.
And frankly, we don’t settle for anything less than excellence in every group in the company, and we have the self-honesty to admit when we’re wrong and the courage to change.
And I think, regardless of who is in what job, those values are so embedded in this company that Apple will do extremely well.
Cook explained this on Apple’s January 2009 earnings call, during Jobs’ first leave of absence, in response to a question about how Apple would fare without its founder. It’s a brilliant statement, but it is — as the last paragraph makes clear — ultimately about maintaining, nurturing, and growing what Jobs built.
That is why I started this Article by highlighting the timing of Cook’s ascent to the CEO role. The challenge for CEOs following iconic founders is that the person who took the company from 0 to 1 usually sticks around for 2, 3, 4, etc.; by the time they step down the only way forward is often down. Jobs, however, by virtue of leaving the world too soon, left Apple only a few years after its most important 0 to 1 product ever, meaning it was Cook who was in charge of growing and expanding Apple’s most revolutionary device yet.
Cook, to be clear, managed this brilliantly. Under his watch the iPhone not only got better every year, but expanded its market to every carrier in basically every country, and expanded the line from one model in two colors to five models in a plethora of colors sold at the scale of hundreds of millions of units a year.
Cook was, without question, an operational genius. Moreover, this was clearly the case even before he scaled the iPhone to unimaginable scale. When Cook joined Apple in 1998 the company’s operations — centered on Apple’s own factories and warehouses — were a massive drag on the company; Cook methodically shut them down and shifted Apple’s manufacturing base to China, creating a just-in-time supply chain that year-after-year coordinated a worldwide network of suppliers to deliver Apple’s ever-expanding product line to customers’ doorsteps and a fleet of beautiful and brand-expanding stores. There was not, under Cook’s leadership, a single significant product issue or recall.
Cook also oversaw the introduction of major new products, most notably AirPods and Apple Watch; the “Wearables, Home, and Accessories” category delivered $35.4 billion in revenue last year, which would rank 128 on the Fortune 500. Still, both products are derivative of the iPhone; Cook’s signature 0 to 1 product, the Apple Vision Pro, is more of a 0.5.
Cook’s more momentous contribution to Apple’s top line was the elevation of Services. The Google search deal actually originated in 2002 with an agreement to make Google the default search service for Safari on the Mac, and was extended to the iPhone in 2007; Google’s motivation was to ensure that Apple never competed for their core business, and Cook was happy to take an ever increasing amount of pure profit.
The App Store also predated Cook; Steve Jobs said during the App Store’s introduction that “we keep 30 [percent] to pay for running the App Store”, and called it “the best deal going to distribute applications to mobile platforms”. It’s important to note that, in 2008, this was true! The App Store really was a great deal.
Three years later, in a July 28, 2011 email — less than a month before Cook officially became CEO — Phil Schiller wondered if Apple should lower its take once they were making $1 billion a year in profit from the App Store. John Gruber, writing on Daring Fireball in 2021, wondered what might have been had Cook followed Schiller’s advice:
In my imagination, a world where Apple had used Phil Schiller’s memo above as a game plan for the App Store over the last decade is a better place for everyone today: developers for sure, but also users, and, yes, Apple itself. I’ve often said that Apple’s priorities are consistent: Apple’s own needs first, users’ second, developers’ third. Apple, for obvious reasons, does not like to talk about the Apple-first part of those priorities, but Cook made explicit during his testimony during the Epic trial that when user and developer needs conflict, Apple sides with users. (Hence App Tracking Transparency, for example.)
These priorities are as they should be. I’m not complaining about their order. But putting developer needs third doesn’t mean they should be neglected or overlooked. A large base of developers who are experts on developing and designing for Apple’s proprietary platforms is an incredible asset. Making those developers happy — happy enough to keep them wanting to work and focus on Apple’s platforms — is good for Apple itself.
I want to agree with Gruber — I was criticizing Apple’s App Store policies within weeks of starting Stratechery, years before it became a major issue — but from a shareholder perspective, i.e. Cook’s ultimate bosses, it’s hard to argue with Apple’s uncompromising approach. Last year Apple Services generated 26% of Apple’s revenue and 41% of the company’s profit; more importantly, Services continues to grow year-over-year, even as iPhone growth has slowed from the go-go years.
Another way to frame the Services question is to say that Gruber is concerned about the long-term importance of something that is somewhat ineffable — developer willingness and desire to support Apple’s platforms — which is, at least in Gruber’s mind, essential for Apple’s long-term health. Cook, in this critique, prioritized Apple’s financial results and shareholder returns over what was best for Apple in the long run.
This isn’t the only part of Apple’s business where this critique has validity. Cook’s greatest triumph was, as I noted above, completely overhauling and subsequently scaling Apple’s operations, which first and foremost meant developing a heavy dependence on China. This dependence was not inevitable: Patrick McGee explained in Apple In China, which I consider one of the all-time great books about the tech industry, how Apple made China into the manufacturing behemoth it became. McGee added in a Stratechery Interview:
Let me just refer back to something that you wrote I think a few months ago when you called the last 20, 25 years, like the golden age for companies like Apple and Silicon Valley focused on software and the Chinese taking care of the hardware manufacturing. That is a perfect partnership, and if we were living in a simulation and it ended tomorrow, you’d give props to Apple for taking advantage of the situation better than anybody else.
The problem is we’re probably not living in the simulation and things go on, and I’ve got this rather disquieting conclusion where, look, Apple’s still really good probably, they’re not as good as they once were under Jony Ive, but they’re still good at industrial design and product design, but they don’t do any operations in our own country. That’s all dependent on China. You’ve called this in fact the biggest violation of the Tim Cook doctrine to own and control your destiny, but the Chinese aren’t just doing the operations anymore, they also have industrial design, product design, manufacturing design.
It really is ironic: Tim Cook built what is arguably Apple’s most important technology — its ability to build the world’s best personal computer products at astronomical scale — and did so in a way that leaves Apple more vulnerable than anyone to the deteriorating relationship between the United States and China. China was certainly good for the bottom line, but was it good for Apple’s long-run sustainability?
This same critique — of favoring a financially optimal strategy over long-term sustainability — may also one day be levied on the biggest question Cook leaves his successor: what impact will AI have on Apple? Apple has, to date, avoided spending hundreds of billions of dollars on the AI buildout, and there is one potential future where the company profits from AI by selling the devices everyone uses to access commoditized models; there is another future where AI becomes the means by which Apple’s 50 Years of Integration is finally disrupted by companies that actually invested in the technology of the future.
If Tim Cook’s timing was fortunate in terms of when in Apple’s lifecycle he took the reins, then I would call his timing in terms of when in Apple’s lifecycle he is stepping down as being prudent, both for his legacy and for Apple’s future.
Apple is, in terms of its traditional business model, in a better place than it has ever been. The iPhone line is fantastic, and selling at a record pace; the Mac, meanwhile, is poised to massively expand its market share as Apple Silicon — another Jobs initiative, appropriately invested in and nurtured by Cook — makes the Mac the computer of choice for both the high end (thanks to Apple Silicon’s performance and unified memory architecture) and the low end (the iPhone chip-based MacBook Neo significantly expands Apple’s addressable market). Meanwhile, the Services business continues to grow. Cook is stepping down after Apple’s best-ever quarter, a milestone that very much captures his tenure, for better and for worse.
At the same time, the AI question looms — and it suggests that Something Is Rotten in the State of Cupertino. The new Siri still hasn’t launched, and when it does, it will be with Google’s technology at the core. That was, as I wrote in an Update, a momentous decision for Apple’s future:
Apple’s plans are a bit like the alcoholic who admits that they have a drinking problem, but promises to limit their intake to social occasions. Namely, how exactly does Apple plan on replacing Gemini with its own models when (1) Google has more talent, (2) Google spends far more on infrastructure, and (3) Gemini will be continually increasing from the current level, where it is far ahead of Apple’s efforts? Moreover, there is now a new factor working against Apple: if this white-labeling effort works, then the bar for “good enough” will be much higher than it is currently. Will Apple, after all of the trouble they are going through to fix Siri, actually be willing to tear out a model that works so that they can once again roll their own solution, particularly when that solution hasn’t faced the market pressure of actually working, while Gemini has?
In short, I think Apple has made a good decision here for short term reasons, but I don’t think it’s a short-term decision: I strongly suspect that Apple, whether it has admitted it to itself or not, has just committed itself to depending on 3rd-parties for AI for the long run.
As I noted above and in that Update, this decision may work out; if it doesn’t, however, the sting will be felt long after Cook is gone. To that end, I certainly hope that John Ternus, the new CEO, was heavily involved in the decision; truthfully, he should have made it.
To that end, it’s right that Cook is stepping down now. Jobs might have been responsible for taking Apple from 0 to 1, but it was Cook who took Apple from 1 to $436 billion in revenue and $118 billion in profit last year. It’s a testament to his capabilities and execution that Apple didn’t suffer any sort of post-founder hangover; only time will tell if, along the way, Cook created the conditions for a crash out by himself forgetting The Cook Doctrine and what makes Apple Apple.
...
Read the original on stratechery.com »
I read the Trend Micro report on my phone at 1am last night and haven't been able to stop thinking about it since. The timeline is genuinely absurd.
February 2026. An employee at Context.ai downloads a Roblox cheat. A Roblox cheat. Lumma Stealer comes bundled with it, grabs session cookies, credentials, everything. That employee had access to internal systems at a company that handles OAuth integrations for enterprise customers.
March 2026. The attacker uses Context.ai’s compromised infrastructure to pivot into a Vercel employee’s Google Workspace account. This Vercel employee had signed up for Context.ai’s “AI Office Suite” with their enterprise credentials and granted broad OAuth permissions. A Vercel engineer gave a third-party AI tool access to their corporate Google account because the onboarding flow asked for it and they clicked through.
April 19. Guillermo Rauch confirms everything. Non-sensitive environment variables were accessed and exfiltrated. A threat actor using the ShinyHunters name is asking $2 million for the data, though the actual ShinyHunters group says they're not involved. Vercel published their incident bulletin the same day.
Okay, I need to correct something I got wrong in my initial read of this. My first reaction was "they stored env vars in plaintext??" but that's not exactly what's happening. All Vercel env vars are encrypted at rest. The "sensitive" checkbox doesn't toggle encryption on and off. What it does is change how the decryption works.
Non-sensitive vars can be decrypted by the dashboard backend. You can view them, edit them, copy them from the UI. Sensitive vars can only be decrypted at build time. Write-only. Once you set them you can't see the value again; only the app can read them at runtime.
So when the attacker got into Vercel’s internal systems, they could access the backend that decrypts the non-sensitive vars. The sensitive ones appear to be safe. Vercel says they have no evidence the sensitive vars were accessed.
This is actually worse than a simple "plaintext" screwup because it's more subtle. The encryption existed. The infrastructure was there. But the default was set to the less protected option and most developers never changed it because why would you. You see a text field, you paste your API key, you hit save. Nobody is hunting for a checkbox that changes the decryption scope of their environment variable. You just assume the platform handles that.
Vercel has since changed the default to sensitive. Which is an admission that the old default was wrong. But every env var created before that change is still sitting there in the less protected state unless someone manually went back and toggled each one.
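To make the distinction concrete, here's a tiny model of the two scopes as I understand them. The names and structure are mine, not Vercel's actual implementation:

# A tiny model of the two decryption scopes described above. Names and
# structure are illustrative, not Vercel's actual implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EnvVar:
    name: str
    ciphertext: bytes   # every env var is encrypted at rest
    sensitive: bool     # the checkbox in question

def dashboard_read(var: EnvVar, decrypt: Callable[[bytes], str]) -> str:
    # The dashboard backend can decrypt non-sensitive vars, which is why
    # compromising that backend exposes them.
    if var.sensitive:
        raise PermissionError("write-only: decryptable only at build time")
    return decrypt(var.ciphertext)

def build_time_read(var: EnvVar, decrypt: Callable[[bytes], str]) -> str:
    # Builds can decrypt either kind; sensitive values surface only here.
    return decrypt(var.ciphertext)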
I've been watching the AI tooling space for two years and there's a pattern that bugs me. Every AI productivity tool requires broad access to function. That's the whole point. They need your docs, your emails, your code, your workspace. The value proposition is the access.
Every AI tool you plug into your workflow is an attack surface multiplier. Context.ai wasn't some fly-by-night operation. It was a Y Combinator company. Enterprise customers. SOC 2 compliance, supposedly. And one employee downloading game cheats on a work machine turned the whole thing into a supply chain weapon.
I went through about a dozen AI tools I’ve personally authorized in the last year after reading this. Nine of them have Google Workspace OAuth permissions that include reading all emails and accessing all Drive files. Nine. I authorized every one of them without reading the permissions because the onboarding flow asked and I was in a hurry.
Actually I started counting how many OAuth apps I had authorized total and stopped at 23 because it was getting depressing. I don't even remember what half of them do. A meeting summarizer I used twice in January still has full email access. That's on me, but it's also on every OAuth dialog ever designed, because they're all terrible.
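If you want to run the same audit programmatically instead of clicking through the Google account permissions page, here's a rough sketch using the Google Workspace Admin SDK Directory API (its tokens.list endpoint lists a user's OAuth grants). It assumes admin credentials with the admin.directory.user.security scope, and it's a starting point, not a verified tool:

# Rough sketch: list one user's third-party OAuth grants and flag apps
# holding broad mail/Drive scopes. Assumes Google Workspace admin
# credentials; field names per the Admin SDK Directory API docs.
from googleapiclient.discovery import build

def audit_oauth_grants(creds, user_email: str) -> None:
    directory = build("admin", "directory_v1", credentials=creds)
    tokens = directory.tokens().list(userKey=user_email).execute()
    for tok in tokens.get("items", []):
        broad = [s for s in tok.get("scopes", [])
                 if "mail.google.com" in s or "auth/drive" in s]
        if broad:
            print(f"{tok.get('displayText', tok.get('clientId'))}: {broad}")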
Vercel’s incident page says “limited customer credentials” were compromised. BleepingComputer says the attacker is actively selling data. Crypto developers are scrambling because wallet infrastructure ran through Vercel env vars. The immediate damage is bad enough.
But the part I keep coming back to is the trust cost. Every developer on Vercel now has to go through every env var they ever set, figure out which ones weren't marked sensitive, rotate every credential, and decide if they still trust the platform. That's hundreds of thousands of projects. Some people are reporting it took them 6+ hours just to rotate everything on a single project.
Multiply that by the active Vercel userbase and you're looking at millions of developer-hours spent on credential rotation this week. Nobody at Vercel wants anyone doing that math right now.
Honestly? Probably not much changes for most people. I've watched this pattern enough times. Breach happens. Posts get written. Keys get rotated for about a week. Then everyone goes back to pasting secrets into platform dashboard text fields because it's convenient and the alternatives require actual work.
AWS Secrets Manager. HashiCorp Vault. SOPS + age. Self-hosted infrastructure. Real options that real teams use. All require more setup than a text field. The gap between knowing whats secure and doing whats secure is measured entirely in convenience.
I started looking into how many YC companies have had security incidents tied to… actually, that's a different rabbit hole for a different post.
The one thing I am doing differently is what I'm calling the 12x audit. For every AI tool I authorize, I'm spending 12x the time I used to spend clicking "Allow" on actually reading what it requests. That's still only about two minutes per tool, since I was spending roughly ten seconds before. But two minutes would have caught the exact permission pattern that made this whole chain possible. Ten seconds didn't.
A Roblox cheat brought down one of the biggest deployment platforms on the internet. Not a zero-day. Not a nation-state. A game cheat that a Context.ai employee probably downloaded for their kid. The attack surface wasn't sophisticated. It was convenient. And convenience is the only product the entire AI tooling industry is actually selling.
...
Read the original on webmatrices.com »
MNT Reform is an open hardware laptop, designed and assembled in Berlin, Germany.
2023.04.17: mnt reform #000120 is now being offered as a loaner by sdf.org.
The trackball can press against the screen when the lid is closed, causing a small mark to appear on the screen.
Lid, screen bezel, keyboard frame, and wrist rest are made from milled aluminium. Side panels and transparent bottom panel are made from acrylic.
Screws in the LCD bezel are not covered, and over time the one in the center can start to rub the paint off of the wrist rest.
My friend kindly sent me a pair of metal replacement side panels. First I tried painting them with a paint brush and a bottle of Vanta Black. This flaked off easily, so I sanded them down and repainted them with black spraypaint (satin finish). Managed to chip that as well during installation. I don’t know what I’m doing.
2022.03.03 Update: MNT has now made available steel replacement side panels.
2022.04.27 Update: I ended up just stretching the original molex antenna down under the trackball, which improved reception even more than buying an expensive new antenna. Because of its shape and the orientation of its cables, the Laird antenna wouldn’t quite reach.
iogear gwu637 ethernet to wifi n adapter - for operating systems where wifi doesn’t (yet) work
piñatex sleeve - note: pull tabs broke off in the first week
2022.02.22 Update: MNT sent me a replacement sleeve with new, all-metal zipper pulls that are now standard equipment on the sleeve.
2022.07.16 Update: One of the all-metal zipper pulls shattered as I tried to unzip the sleeve.
mbk-colors: 1u and 1.5u homing - replacement key caps, some with raised edges to help with acclimating to the non-standard keyboard layout
void linux - sdcard image (does not boot on my machine)
By default, the speaker output of MNT Reform is a bit quiet, and changing the volume with PulseAudio won’t dramatically change it. There’s one more knob you can turn up that is only accessible via ALSA.
Open a Terminal and type alsamixer. Then press F6 and select the wm8960-audio card. Navigate with the cursor keys to the Playback slider and turn it up.
Well, there is no wm8960-audio listed on my system, only (default). And Master is already cranked to 100. Investigating, I noticed:
sl@reform:~$ dmesg | grep 8960
[ 3.613559] wm8960 2-001a: Failed to issue reset
Usually a reboot gets the audio going for me if I see failed to issue reset (happens on booting from power off). Lukas speculates on a fix here[1] and another person[2] provided this line in order to rebind the device without a reboot:
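# rebind the wm8960 audio codec (I2C bus 2, address 0x1a); run as root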
echo 2-001a > /sys/bus/i2c/drivers/wm8960/bind
I was able to replicate the issue and test the above line out just now. I had to “sudo su” first. Then the audio device showed up in alsamixer again just fine.
This worked for me, as well.
Update 2022.06.20: After numerous updates, sound no longer works for me in Alpine Linux.
echo 0 > /sys/class/leds/ath9k-phy0/brightness # needs root permissions
...
Read the original on mnt.stanleylieber.com »
Anthropic announced on Monday that Amazon has agreed to invest a fresh $5 billion, bringing Amazon’s total investment in the company to $13 billion. Anthropic, for its part, has agreed to spend over $100 billion on AWS over the next 10 years, obtaining up to 5 GW of new computing capacity to train and run Claude.
The deal echoes an agreement Amazon struck with OpenAI just two months ago, when it joined a $110 billion funding round — contributing $50 billion — that valued the ChatGPT maker at a $730 billion pre-money valuation. That deal, too, was structured partly as cloud infrastructure services rather than straight cash.
At the heart of this deal are Amazon’s custom chips: Graviton (a low-power CPU) and Trainium (an Nvidia competitor and AI accelerator chip). The Anthropic deal specifically covers Trainium2 through Trainium4 chips, even though Trainium4 chips are not currently available. The latest chip, Trainium3, was released in December. On top of that, Anthropic has secured the option to buy capacity on future Amazon chips as they become available.
We’ll see if this news is a teaser to Anthropic announcing a new funding round. VCs have reportedly been offering the AI company capital in a deal that would value it at $800 billion or more.
...
Read the original on techcrunch.com »
It is intended only for protocol study, signal analysis, and controlled experiments on hardware you personally own or are explicitly authorized to test.
This repository does not authorize access to, modification of, or interference with any third-party deployment, commercial installation, or retail environment.
TagTinker is a Flipper Zero app for educational research into infrared electronic shelf-label protocols and related display behavior on authorized test hardware.
It is focused on:
This README intentionally avoids deployment-oriented instructions and excludes guidance for interacting with live commercial systems.
Where is the .fap release?
The Flipper app is source-first. Build the .fap yourself from this repository with ufbt so it matches your firmware and local toolchain.
What if it crashes or behaves oddly?
The maintainer primarily uses TagTinker on Momentum firmware with asset packs disabled and has not had issues in that setup. If you are using a different firmware branch, custom asset packs, or a heavily modified device setup, start by testing from a clean baseline.
What happens if I pull the battery out of the tag?
Many infrared ESL tags store their firmware, address, and display data in volatile RAM (not flash memory) to save cost and energy.
If you remove the battery or let it fully discharge, the tag will lose all programming and become unresponsive (“dead”). It usually cannot be recovered without the original base station.
I found a bug or want to contribute — how can I get in touch?
You can contact me on:
I’m currently traveling, so response times may be slower than usual. Feel free to open issues or Pull Requests anyway — contributions (bug fixes, improvements, documentation, etc.) are very welcome and will help keep the project alive while I’m away.
TagTinker is built around the study of infrared electronic shelf-label communication used by fixed-transmitter labeling systems.
* communication is based on addressed protocol frames containing command, parameter, and integrity fields (see the illustrative sketch after this list)
* display updates are carried as prepared payloads for supported monochrome graphics formats
* local tooling in this project helps researchers prepare assets and perform controlled experiments on authorized hardware
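As a purely illustrative aid, an "addressed protocol frame" of the kind described above can be modeled like this (field names and layout are hypothetical, for study only; this is not the real PrecIR/ESL wire format):

# Purely illustrative model of an addressed protocol frame. Field names and
# layout are hypothetical, for study only; not the real PrecIR/ESL format.
from dataclasses import dataclass

@dataclass
class Frame:
    address: int       # which tag the frame targets
    command: int       # operation code, e.g. "update display"
    parameters: bytes  # command-specific payload
    checksum: int      # integrity field covering the fields above

def checksum8(payload: bytes) -> int:
    # Simple 8-bit additive checksum as a stand-in integrity field.
    return sum(payload) & 0xFF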
This project is intended to help researchers understand:
For the underlying reverse-engineering background and deeper protocol research, see:
TagTinker is limited to home-lab and authorized research use, including:
It is not a retail tool, operational tool, or field-use utility.
You are solely responsible for ensuring that any use of this software is lawful, authorized, and appropriate for your environment.
The maintainer does not authorize, approve, or participate in any unauthorized use of this project, and disclaims responsibility for misuse, damage, disruption, legal violations, or any consequences arising from such use.
If you do not own the hardware, or do not have explicit written permission to test it, do not use this project on it.
Any unauthorized use is outside the intended scope of this repository and is undertaken entirely at the user’s own risk.
This is an independent research project.
It is not affiliated with, endorsed by, authorized by, or sponsored by any electronic shelf-label vendor, retailer, infrastructure provider, or system operator.
Any references to external research, public documentation, or reverse-engineering work are included strictly for educational and research context.
This project is a port and adaptation of the excellent public reverse-engineering work by furrtek / PrecIR and related community research.
Licensed under the GNU General Public License v3.0 (GPL-3.0).
See the LICENSE file for details.
This software is provided “AS IS”, without warranty of any kind, express or implied.
In no event shall the authors or copyright holders be liable for any claim, damages, or other liability arising from the use of this software.
This repository is maintained as a narrowly scoped educational research project.
The maintainer does not authorize, encourage, condone, or accept responsibility for use against third-party devices, deployed commercial systems, retail infrastructure, or any environment where the user lacks explicit permission.
...
Read the original on github.com »
WIRED has published an article about GrapheneOS with a history of the project nearly entirely based on fabrications from James Donaldson. Donaldson has spent the past 8 years trying to destroy GrapheneOS and the life of the project’s founder, Daniel Micay. Donaldson has heavily engaged in fabrications with an ever changing story about the history of the project. Copperhead was forced to drop nearly all of their claims in the ongoing lawsuit. Copperhead was also forced to discontinue their closed source fork of GrapheneOS and is a zombie company with no significant operations or revenue. Copperhead lacks any serious basis for the remaining claims in their lawsuit and it isn’t a major concern for us anymore. Their claims have been thoroughly debunked at this point and are primarily an issue in the form of an extreme level of fabrications and harassment they started which is carried on without them. James Donaldson has been thoroughly proven to be a serial fabricator, scammer and thief. Despite this, WIRED listened to his tall tales and presented it as a history of GrapheneOS. We weren’t given an opportunity to provide an actual history of the project based in fact as we were led to believe it wasn’t a major part of the article and were barely asked about it.
Copperhead was propped up by the open source project and heavily held it back. After the split with the company, the project quickly gained a lot more funding via donations and has become highly successful. Instead of having a single full time developer barely being paid anything, GrapheneOS now has around 10 full time developers and is in the process of expanding by hiring several more. It’s entirely funded by donations and is far more than simply being sustainable that way. Donaldson believes that his past ties to the project he burned down and then spent years trying to destroy entitle him to getting rich from it. That’s why he continues misleading people about his involvement and doubled down on a failed lawsuit. He continues causing harm to GrapheneOS and Daniel Micay to this day.
GrapheneOS Foundation is a non-profit and no one is getting rich from it. Daniel solely gets his income via GitHub Sponsors and hasn’t paid himself anything from the GrapheneOS Foundation. Donaldson has only ever cared about money. He spent years manipulating and exploiting Daniel with the goal of enriching himself. He eventually decided Daniel was a barrier to him getting rich due to his values and tried to coerce him into handing over ownership and control of his open source project with no basis for it.
As part of the split between the open source project and Copperhead back in 2018, Donaldson stole a large amount of donations from the project. He ultimately ended up stealing around $300,000 worth of Bitcoin donations made to the open source project. Prior to his theft of the donations followed by years of repeatedly forking our project to sell it while falsely claiming to have created it, Donaldson heavily depended on income created by the open source project. Donaldson never funded or supported the project as he claims but rather it was entirely the other way around. He depended on a massive amount of work done by Daniel Micay to provide him with income for a tiny amount of work he was doing himself. He received as much money from device sales and donations as Daniel Micay for a tiny amount of work in comparison. His work was unsuccessful in getting any substantial funding. It didn’t make any sense for the open source project to remain tied to a company holding it back. It was entirely the prerogative of the open source project to move on without it. Donaldson could not accept it continuing as an open source project.
Donaldson’s claims can be proven false by interviewing numerous people who were around at the time. WIRED made no attempt to verify if anything he said was true prior to publishing it. Copperhead was a company founded by 3 people, not 2, and WIRED could have interviewed Dan McGrady who was the 3rd co-founder. There were many other people around back then they could have interviewed including many people who can confirm they had their donations stolen by James Donaldson. Donaldson serially fabricates things about himself and others. Giving him such a huge platform to mislead people is extremely irresponsible. He has very little to do with the overall history of GrapheneOS. His involvement was as someone leeching off the project for years while failing to deliver what he repeatedly promised. He isn’t a hacker as he claims but rather is largely non-technical. GrapheneOS has been enormously successful through entirely funding the project with donations. It was entirely possible to create a successful business based around it but Donaldson was never the right person to do it.
Our community manager @spring-onion (Dave Wilson) handled nearly all of the communications with WIRED over months. He isn’t a developer and clearly isn’t the same person as Daniel Micay, yet the article makes a completely unsubstantiated claim that it could be the same person. @spring-onion knows languages Daniel doesn’t speak including German, has a completely different writing style and a different voice. @spring-onion spent a massive amount of time communicating with them including multiple interviews focused on the GrapheneOS feature set and much more. WIRED repeatedly told us the article would barely cover the history of the project and wouldn’t focus on Daniel Micay. Due to this, we weren’t given an opportunity to provide them with information and address the claims made by James Donaldson. Despite this, it ended up being the primary focus of the article. We were only given an opportunity to respond to the vast majority of it after the article was already fully written and therefore our response to Donaldson’s stories was nearly entirely omitted from the article.
The content below are the questions we were asked by a WIRED fact checker with the original responses we provided to them with no modifications. This is what WIRED received in response to us and should have much more heavily incorporated into the article.
Do you live in Canada?
Did you meet James Donaldson between 2011 and 2013, when you joined Toronto Crypto?
Micay met Donaldson in late 2014 through Dan McGrady. McGrady knew Micay from his security work on Arch Linux and projects in Rust; McGrady and Micay initially connected via IRC.
Micay was not a member of Toronto Crypto. He did join the Toronto Crypto IRC channel while considering attending events, but did not attend any meetings before beginning the work that later became GrapheneOS.
At the time, were you a security researcher studying techniques used to protect banks and governments?
At that time, Micay was an open-source developer, security engineer, and security researcher; his work did not involve studying techniques used to protect banks or governments.
At the time, did you use your free time to experiment with applying the techniques you were studying to the fast-growing mobile space?
The idea of a hardened mobile OS was not novel; several projects existed or were being discussed. Micay chose to invest his free time in his own open-source implementation after discussions with McGrady. McGrady had a minor, short-lived involvement, but Micay built the initial project alone. This all occurred before Donaldson became involved.
Is it accurate that on one occasion, a troll infiltrated Toronto Crypto’s group chat and gave it what they called an “impossible” task of decrypting a series of messages? Did you eagerly accept the challenge and decrypt them with ease?
Micay has no recollection of that event / was not personally involved.
Around 2014, did Donaldson ask you to join him in a venture addressing Android’s security problems?
In late 2014, Donaldson and McGrady contacted Micay about forming a company around Micay’s existing hardened mobile OS project. While Micay’s work was open source, and thus available for anybody to use and improve upon, Donaldson wanted to sell support services around it, as well as telephone handsets with Micay’s OS pre-loaded. McGrady and Donaldson proposed calling the company Copperhead, and suggested that Micay market his work as CopperheadOS.
Micay agreed to participate only on the explicit understanding that he would retain control over the open source project’s development, licensing, copyrights, social/media accounts (GitHub, Twitter, Reddit), and donations.
Due to conflicts between McGrady and Donaldson between 2014-2015, McGrady stepped away before Copperhead was incorporated in November 2015. Prior to McGrady’s departure, Micay had very little contact with Donaldson. At that point Micay’s hardened mobile OS project had been launched (as “CopperheadOS”) and was using infrastructure which had been set up for the new company.
Was the plan to split everything equally, with Donaldson as CEO and you as chief technology officer?
The original plan called for the company to be split three ways between Micay, McGrady, and Donaldson. With McGrady’s departure, Donaldson appointed himself Copperhead’s Chief Executive Officer and sole director upon incorporation. Micay and Donaldson became co-equal 50% shareholders.
Although Donaldson sometimes described Micay as Copperhead’s “Chief Technology Officer,” Micay never signed an employment agreement with Copperhead, never accepted a common-law offer of employment, was not paid a regular salary, and did not agree to serve as a fiduciary of the company.
Was your flagship product CopperheadOS? Was it an open source operating system focused on Android hardening? Did CopperheadOS protect mobile data by adding layers of security on top of the stock Android OS?
The project that took the name “CopperheadOS” existed prior to the company and was an open source hardened mobile OS. Eventually CopperheadOS was renamed to the Android Hardening Project, and then GrapheneOS. These were renames, not rewrites or forks - they are all the same project.
Did Donaldson take on a diverse array of IT jobs in the early years of the company? Are some examples of that work fixing printers and recovering hacked WordPress websites? Did this fund your work on the operating system?
Micay’s improvements to the underlying Android system influenced or were explicitly adopted by the AOSP, resulting in the payment of bounties from Google to Micay.
More significantly though, Micay’s open source project began receiving substantial community donations. Micay intended those donations to fund additional contributors and necessary infrastructure, taking only a minimal amount for personal living expenses.
When the company failed to generate sufficient revenue, Micay agreed to temporarily share a portion of the project’s donations with Donaldson, so Donaldson could continue working on the company.
While Donaldson was the face of the operation, were you spending most of your time hunting vulnerabilities in Android and patching them in CopperheadOS?
Donaldson was never the face of Micay’s open source project. He was only the face of the company towards businesses. Micay managed the social media account for the open source project and built a following for it. Micay did most of the talking to security engineers / researchers. Micay was also the one writing content about it, helping users and much more.
Did you also spend time troubleshooting for the userbase?
Micay spent a significant portion of his free time answering users’ questions and troubleshooting issues.
Did you feel it was your duty to support anyone interested in the project? Is this in part because you believe in the philosophy of open source and helping everyone have free access to mobile security? Did you spend time helping users even at the expense of your own well being?
Micay cares deeply about his open source project, which is why he put so much time and effort into it, often at the expense of his own health and well-being.
That being said, he did not necessarily feel a sense of duty - Micay also dedicated much time to helping people with Arch Linux and Rust.
Were you a longtime contributor to projects like Linux’s GRsecurity and Mozilla’s Rust programming language?
Micay only made minor contributions to Linux’s GRsecurity. His main work related to it was packaging and integrating it into Arch Linux, as well as testing and dealing with bugs.
Micay worked on Mozilla’s Rust programming language as a full-time volunteer for about a year.
For the first two years of Copperhead’s operation, was everything someone needed to download, install, or modify it available online?
Yes, for free, and for any purpose.
At this time, was the goal to make money from selling tech support that prioritized paying users?
The initial goal of the company was to engage in security consulting, with income generated from services unrelated to Micay’s open source project.
After those income streams failed to materialize, new approaches were explored, including selling devices preloaded with Micay’s OS, as well as offering paid support and contract work tied to it.
Did the proliferation of CopperheadOS knockoffs, combined with your round-the-clock user support efforts, mean that everyone but the two of you were benefitting from the enterprise?
Micay’s open source project has been broadly successful and has generated substantial income through donations. It is reasonable to conclude that an open source project with that level of interest could also generate additional revenue through product and service sales or contract work. In fact, today’s ecosystem of companies offering products based on GrapheneOS illustrates that potential. Given that, the company’s inability to establish a sustainable business model appears to reflect shortcomings in its management and strategic direction under the stewardship of Donaldson.
Did your and Donaldson’s values begin to diverge? Was Donaldson more concerned with making money than you were?
Donaldson began to focus on the idea of changing the nature of CopperheadOS from an open source project to “closed source” software. In his view, this would allow Copperhead to sell licenses to CopperheadOS, since support contracts were not lucrative enough for Donaldson’s liking.
Micay consistently rejected these proposals. Donaldson’s plan had two fundamental flaws. First, the code had already been licensed to the public under open source licenses. There was no way to “claw back” the licenses under which they had already been released and were being used in the wider world. Moreover, Micay had no interest in writing proprietary software, or software for hire. Second, Donaldson’s proposal was fundamentally inconsistent with the collaborative, community-based work that had allowed CopperheadOS to develop in the first place.
Despite these problems, to placate Donaldson, Micay temporarily adopted a “source available” license for his future work on CopperheadOS in September 2016. This license did not apply to previous code / work done, or any contributions to that code from third parties. It applied only to the code released under that “source available” license.
In 2018, matters between Micay and Donaldson came to a head over Donaldson’s desire to pursue business deals with criminal organizations, and his attempts to compromise the security of CopperheadOS, including by proposing license enforcement and remote updating systems that would allow third-parties to have access to users’ phones. As part of this process, Donaldson began to demand that Micay provide Donaldson with the “signing keys” - i.e. the credentials required to verify the authenticity of releases of CopperheadOS. Donaldson advised that, in order to secure certain new business, potential customers required access to the Keys.
The keys had been in continuous use by Micay, in his personal capacity, since before the incorporation of Copperhead. However, more importantly, any party with the keys could mark malicious software as “authentic”, and thereby infiltrate devices using CopperheadOS.
Micay was unwilling to participate in that kind of security breach. Since Donaldson had control over certain infrastructure for the open source project, he would be able to incorporate (or hire others to incorporate) the privacy-damaging features described above for all future releases of CopperheadOS. Micay therefore deleted the keys permanently and severed ties with Copperhead and Donaldson.
Micay has since carried on his open source work as GrapheneOS, released under an open source license, incorporating all prior code except the aforementioned “source available” code.
Donaldson told Wired that you both made the decision to move Copperhead from being open source to having a noncommercial license. Is this accurate? Did that mean that users had to purchase a Copperhead phone to access the OS?
This decision was Micay’s alone, but was done to placate Donaldson after sustained pressure from Donaldson.
Micay agreed to apply a temporary non‑commercial license. During that period, new users needed to purchase a phone with the OS or build the OS from source to use it; existing users continued to receive updates without paying. The change narrowed who could access the project, conflicted with Micay’s goals for broader adoption, and failed to generate sustainable income streams - very few phones were purchased.
Is it accurate that when Copperhead relicensed, the project immediately started hearing from Fortune 500 companies?
No, it’s not accurate.
Donaldson secured licensing agreements with several companies and nonprofits, but those agreements committed the project to far more work than it could deliver. Micay was the sole developer and the team lacked the capacity to fulfill many of Donaldson’s commitments. Ultimately many of the agreements never progressed beyond an early stage.
Were the most lucrative contracts from defense contractors? Was Copperhead’s technology only used to protect defense clients from adversaries, and not for mass surveillance?
No. There was an unsuccessful attempt to secure a large contract with a defense contractor, but the commitments made exceeded the project’s capacity. Deliverables discussed would have required dedicated builds and special hardware signing that the team could not realistically support given available resources. Donaldson’s proposed approach involved converting Micay’s public OS into a version tailored to those requirements, which would have required taking control of Micay’s project - something Micay retained authority over. Donaldson pursued deals that depended on restricting or monetizing the project in ways inconsistent with Micay’s commitment to returning the project to an open source model.
Between licensing the OS and doing business with defense contractors, did you feel the integrity of your code and your decisionmaking role in the partnership were eroding?
There was one defense contractor attempt that failed early.
The larger issue was that Donaldson pursued revenue by promising deliverables the project couldn’t meet, which threatened the integrity of the project and undermined Micay’s role and values.
Were you bothered by both the facts that CopperheadOS was no longer available to the masses and that it was starting to serve the very people you wanted to protect users from?
Micay had always intended to go back to open source licensing, he had no interest in writing proprietary software, or software for hire.
In the spring of 2018, did you have sole possession of CopperheadOS’s signing keys?
Micay had sole possession of his open source project’s signing keys. The company had the option to make separate builds signed with separate keys but never did.
Did things between you and Donaldson devolve when he approached you about a compliance audit? Did he tell you that he needed to know how the signing keys were stored?
We understand that Daniel’s recollection was not that James wanted to know more information about how the signing keys were stored, but that he wanted direct access to them.
Did you suspect his request was tied to a deal he was brokering with a large defense contractor? Did you believe this would put the entirety of CopperheadOS’ user base at risk?
Yes and yes.
In response, did you post a series of tweets from the CopperheadOS X account—the same account you used to offer tech support—accusing Donaldson of being untrustworthy and “in business with criminals”? Did you say that it was your duty to expose this to the users?
The @CopperheadOS account belonged to Micay’s open-source project, not the company; a separate account had previously been created for the company.
Did you accuse Donaldson of spreading misinformation about CopperheadOS, while Donaldson accused you of impacting business opportunities?
Did you ban Donaldson from the CopperheadOS subreddit?
Did Donaldson’s lawyers send you a letter on May 14, 2018 requesting your termination?
We understand that the May 14 letter was a request to revise Daniel’s role at Copperhead, either by demotion or resignation.
Did the letter claim that “there is no written shareholders’ agreement in place, nor any written employment agreements or job descriptions for either of you”? Did it say that because Donaldson was “the sole director of the Corporation and the Chief Executive Officer,” he had the authority to deem the status of the company “unsustainable” and mandate your demotion or immediate termination?
At this point, had Donaldson previously given you multiple opportunities to take paid leaves and regroup? Did you decline those offers?
Micay was never an employee of the company and was not offered a “leave”.
In June 2018, did Donaldson file a claim against you to retrieve CopperheadOS’s signing keys and nearly half a million Canadian dollars in damages?
We realize that this question is incorrect. We understand that the June 2018 letter was simply ending Daniel’s employment, and that after this, James demanded access to the keys. We understand that the suit demanding $400k in damages was filed later, in 2020.
Did Donaldson tell you at the time that you needed to give up the keys so that the customers could keep using their devices?
See the answer to question #17.
Did you view this as Donaldson’s last-ditch effort to cash in on your work before you parted ways?
See the answer to question #17.
Is it fair to say you were livid?
Micay was justifiably disappointed with how everything turned out.
Did you destroy the keys?
See the answer to question #17.
In a Reddit post, did you write: “I consider the company and the infrastructure to be compromised”?
Micay did consider the company and the infrastructure to be compromised.
Without the signing keys, could neither you nor Donaldson make changes to CopperheadOS?
Correct. The OS accepts only updates signed with the proper keys.
After Micay ended his relationship with the company and Donaldson, Donaldson hired contractors to fork Micay’s open-source project into a closed-source OS. Those efforts repeatedly required new forks as they fell behind, and they produced little original work, leaving them dependent on Micay’s open-source project.
...
Read the original on discuss.grapheneos.org »
This post is about optimizing an extremely simple AST-walking interpreter for Zef, a dynamic language that I created for fun, to the point where it is competitive with the likes of Lua, QuickJS, and CPython.
Most of what gets written about making language implementations fast focuses on the work you’d do when you already have a stable foundation, like writing yet another JIT (just-in-time) compiler or fine-tuning an already pretty good garbage collector. I’ve written a lot of posts about crazy optimizations in a mature JS runtime. This post is different. It’s about the case where you’re starting from scratch, you’re nowhere near writing a JIT, and your GC isn’t your top problem.
The techniques in this post are easy to understand - there’s no SSA, no GC, no bytecodes, no machine code - yet they achieve a massive 16x speed-up (67x if you include the incomplete port to Yolo-C++) and bring my tiny interpreter into the ballpark of QuickJS, CPython, and Lua.
The techniques I’ll focus on in this post are:
To evaluate my progress, I created a benchmark suite called ScriptBench1. This has ports of classic language benchmarks to Zef:
These benchmarks are also available in a wide variety of other languages. I found existing ports of these benchmarks to JavaScript, Python, and Lua. For Splay, there weren’t existing Python and Lua ports, so I used Claude to port them.
All experiments were run on Ubuntu 22.04.5 on an Intel Core Ultra 5 135U with 32GB RAM, using Fil-C++ version 0.677. Lua 5.4.7 is compiled with GCC 11.4.0. QuickJS-ng 0.14.0 is the binary from QuickJS’s GitHub releases page. CPython 3.10 is just what came with Ubuntu.
All experiments use the average of 30 randomly interleaved runs.
To be clear: for most of this post, I’ll be comparing my interpreter compiled with Fil-C++ to other folks’ interpreters compiled with Yolo-C compilers.
This post starts with a high-level description of the original AST-walking, hashtable-heavy Zef interpreter, followed by a section for each optimization that I landed on my journey to a 16.6x speed-up.
The original Zef interpreter was written with almost no regard for performance. Only two performance-aware choices were made:
* The value representation is a 64-bit tagged value that may hold a double, a 32-bit integer, or an Object*. Doubles are represented by offsetting them by 0x1000000000000 (a technique I learned from JavaScriptCore; the literature has taken to calling this ). Integers and pointers are represented natively, and I’m relying on the fact that no pointer will have a value below 0x100000000 (a dangerous choice, but one that you could force to be true; note that I could have represented integers by giving them a high bit tag of 0xffff000000000000 if I was worried about this). This makes it easy to have fast paths for operations on numbers (because you can detect if you have a number, and what kind, with a bit test). Even more importantly, this avoids heap allocations for numbers. If you’re building an interpreter from scratch, it’s good to start by making good choices about the fundamental value representation, since it’s super hard to change later! 32-bit or 64-bit tagged values are a standard place to start, if you’re implementing a dynamically typed language. A sketch of this layout appears after this list.
* I used some kind of C++. It’s important to pick a language that allows me to do all of the optimizations that language implementations eventually grow to have, and C++ is such a language. Notably, I would not pick something like Java, since there’s a ceiling to how many low level optimizations you can do. I would also not pick Rust, since a garbage collected language requires a heap representation that has global mutable state and cyclic references (though you could use Rust for some parts of the interpreter, if you were happy with being multilingual; or you could use Rust if you were happy with lots of unsafe code).
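To make the first bullet concrete, here is a minimal sketch of such a tagged value, following the constants described above. The class and method names are my own guesses, not the interpreter’s actual code, and a real implementation needs extra care for NaN bit patterns near the top of the encoding space.

#include <cstdint>
#include <cstring>

struct Object; // heap objects are assumed to live at addresses in [2^32, 2^48)

class Value {
    static constexpr uint64_t DoubleOffset = 0x1000000000000ull; // 2^48
    static constexpr uint64_t MinPointer = 0x100000000ull;       // 2^32
    uint64_t bits;
    explicit Value(uint64_t b) : bits(b) {}
public:
    static Value fromInt32(int32_t i) { return Value(uint32_t(i)); } // below 2^32
    static Value fromObject(Object* o) { return Value(reinterpret_cast<uint64_t>(o)); }
    static Value fromDouble(double d) {
        uint64_t b;
        std::memcpy(&b, &d, sizeof(b));
        return Value(b + DoubleOffset); // shifts all doubles above 2^48
    }

    // One unsigned comparison classifies each case - this is the cheap
    // "bit test" that makes the number fast paths possible.
    bool isInt32() const { return bits < MinPointer; }
    bool isObject() const { return bits >= MinPointer && bits < DoubleOffset; }
    bool isDouble() const { return bits >= DoubleOffset; }
    bool isNumber() const { return isInt32() || isDouble(); }

    int32_t asInt32() const { return int32_t(bits); }
    Object* asObject() const { return reinterpret_cast<Object*>(bits); }
    double asDouble() const {
        uint64_t b = bits - DoubleOffset;
        double d;
        std::memcpy(&d, &b, sizeof(d));
        return d;
    }
};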
I also made tons of expedient choices that were wrong from a performance engineering standpoint:
* I used Fil-C++. This did allow me to move very quickly - for example, I get a garbage collector for free. Also, it meant that I spent zero time debugging memory safety issues (Fil-C++ reports memory safety violations with a pretty stack trace and lots of diagnostics) or undefined behavior (Fil-C++ does not have undefined behavior). Fil-C++ costs about 4x performance typically, so I’m starting with that 4x handicap, on top of all of the other suboptimal choices.
* Recursive AST walking interpreter. The interpreter is implemented as a virtual Node::evaluate method that gets overridden in a bunch of places.
* Strings everywhere. For example, the Get AST node holds a std::string to describe the name of the variable that it’s getting, and that string is used each time a variable is accessed.
* Hashtables everywhere. When that Get executes, the string is used as a key to a std::unordered_map, which contains the variable value.
* Chains of recursive calls to crawl the scope chain. Zef allows almost all constructs to be nested, and nesting leads to closures; for example, class A nested in function F nested in class B nested in function G means that member functions of class A can see A’s fields, F’s locals, B’s fields, and G’s locals. The original interpreter achieved this by recursing in C++ over functions that can query different scope objects; the sketch after this list shows the shape of that lookup.
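To make the last three bullets concrete, here is a sketch of what the baseline lookup path amounts to. The names are hypothetical stand-ins, not the interpreter’s actual classes:

#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

struct Value { uint64_t bits; }; // stand-in for the tagged value sketched earlier

struct Scope {
    std::unordered_map<std::string, Value> variables; // one hashtable per scope
    Scope* parent = nullptr;                          // enclosing scope, if any

    // Every Get re-hashes and re-compares the name at each level of
    // nesting, every single time the variable is accessed.
    std::optional<Value> lookup(const std::string& name) const {
        auto it = variables.find(name);
        if (it != variables.end())
            return it->second;
        if (parent)
            return parent->lookup(name);
        return std::nullopt;
    }
};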
That said, those choices allowed me to implement an interpreter for a fairly sophisticated language with very little code. The largest module by far is the parser. Everything else is simple and crisp.
This interpreter was 35x slower than CPython 3.10, 80x slower than Lua 5.4.7, and 23x slower than QuickJS-ng 0.14.0. Let’s see how far we can get by implementing a bunch of optimizations!
The first optimization is to have the parser generate distinct AST nodes for each operator as opposed to using the DotCall node with the name of the operator.
a + b
Is identical to this:
a.add(b)
So, the original interpreter would parse a + b to DotCall(a, “add”) with b as an argument. That led to slow execution, since every single math operation involved a string lookup of the operator’s method name:
With this optimization, we have the parser create Binary<> and Unary<> nodes. With the help of some template and lambda magic, these nodes have separate virtual overrides for Node::evaluate per operator. These call directly into the corresponding Value fast paths for those operators. Hence, doing a + b now results in a call to Binary, which then calls Value::add.
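The post’s node definitions aren’t shown here, so the following is only a rough sketch of the mechanism, reusing the Node/Scope/Value shapes from the earlier sketches and assuming a Value::add fast path exists:

struct Node {
    virtual ~Node() = default;
    virtual Value evaluate(Scope& scope) = 0;
};

// Op wraps one Value fast path; each instantiation gets its own
// evaluate override, compiled as a direct call into that fast path.
template <typename Op>
struct Binary : Node {
    Node* left;
    Node* right;
    Binary(Node* l, Node* r) : left(l), right(r) {}
    Value evaluate(Scope& scope) override {
        // No "add" string, no hashtable lookup - straight to the operator.
        return Op()(left->evaluate(scope), right->evaluate(scope));
    }
};

struct AddOp {
    Value operator()(Value a, Value b) const { return Value::add(a, b); }
};

// The parser now emits new Binary<AddOp>(lhs, rhs) for a + b, instead of
// a DotCall node carrying the string "add".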
This change is a 17.5% speed-up. At this point, Zef is 30x slower than CPython 3.10, 67x slower than Lua 5.4.7, and 19x slower than QuickJS-ng 0.14.0.
In the previous optimization, we made operators fast by avoiding string comparison based dispatch. But that change didn’t affect all operators! The RMW forms of those operators, like:
a += b
still used string-based dispatch. So, the second optimization is to have the parser generate distinct nodes for each of the RMW cases. What’s happening here is that the parser asks LValue nodes to replace themselves with an RMW via the makeRMW virtual call:
* Get - corresponds to getting a variable, i.e. just id
* Dot - corresponds to a field access, i.e. expr.name
* Subscript - corresponds to an indexed access, i.e. expr[index]
Each of these virtual calls uses the SPECIALIZE_NEW_RMW macro to create template specialized forms of:
Note that while the rest of the operator specialization (from change #1) uses lambdas to dispatch to the appropriate operator function on Value, for RMWs we use an enumeration. This is a practical choice because of the number of places we have to thread the enum through, to handle the fact that we may arrive at an RMW three different ways (get, dot, and subscript). All of this magic then bottoms out in the Value::callRMW<> template function, which dispatches the actual RMW operator call.
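As a sketch of what enum-driven RMW dispatch can look like (the enum values, the scope.get/set helpers, and the node shape are assumptions, not the project’s actual code):

enum class RMWOp { Add, Sub, Mul, Div };

// Something of this shape presumably sits at the bottom: if constexpr
// picks the operator at compile time, so each instantiation is as direct
// as the plain binary nodes from the previous change.
template <RMWOp op>
Value callRMW(Value oldValue, Value operand) {
    if constexpr (op == RMWOp::Add) return Value::add(oldValue, operand);
    else if constexpr (op == RMWOp::Sub) return Value::sub(oldValue, operand);
    else if constexpr (op == RMWOp::Mul) return Value::mul(oldValue, operand);
    else return Value::div(oldValue, operand);
}

// One of the three RMW shapes - the Get form. a += b becomes a node that
// reads the variable, applies the operator, and writes the result back.
template <RMWOp op>
struct GetRMW : Node {
    std::string name;
    Node* rhs;
    Value evaluate(Scope& scope) override {
        Value updated = callRMW<op>(scope.get(name), rhs->evaluate(scope));
        scope.set(name, updated);
        return updated;
    }
};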
This change is a 3.7% speed-up. At this point, Zef is 29x slower than CPython 3.10, 65x slower than Lua 5.4.7, and 18.5x slower than QuickJS-ng 0.14.0. We’re now 1.22x faster than where we started.
The Value fast paths have a small problem: they use isInt(), which uses isIntSlow(), which does a virtual call to Object::isInt() to check if we’re really dealing with an int.
This is happening because the Zef value representation in the original interpreter had four distinct cases:
* An int32, stored directly in the value.
* A double, stored directly in the value.
* An Object* pointing at a heap object.
* An IntObject for int64s that cannot be represented as int32s.
In the IntObject case, Value still drove the dispatch for all integer methods, since that allowed the interpreter to have just one implementation of all math operators (and that implementation was always in Value).
This simple optimization causes Value fast paths to only consider int32 and double, and puts all IntObject handling in IntObject itself. Additionally, this change avoids the isInt() call on every method dispatch.
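The effect on a fast path like addition can be sketched as follows, where toDouble and addSlow are assumed helpers (addSlow stands in for whatever handles IntObject, operator overloads, and errors):

Value add(Value a, Value b) {
    if (a.isInt32() && b.isInt32()) {
        int64_t result = int64_t(a.asInt32()) + int64_t(b.asInt32());
        if (result == int32_t(result))
            return Value::fromInt32(int32_t(result)); // stays an int32
        // int32 overflow falls through to the slow path below
    } else if (a.isNumber() && b.isNumber()) {
        // Mixed int32/double or double/double: classified by plain bit
        // tests, with no virtual isInt() query on any object.
        return Value::fromDouble(a.toDouble() + b.toDouble());
    }
    return addSlow(a, b); // IntObject now handles its own arithmetic here
}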
This is a 1% speed-up. At this point, Zef is 29x slower than CPython 3.10, 65x slower than Lua 5.4.7, and 18x slower than QuickJS-ng 0.14.0. We’re now 1.23x faster than where we started.
The original Zef interpreter uses std::string everywhere. Particularly brutal cases:
This is unfortunate because it means that all of these lookups don’t just involve hashtables - they involve hashtables keyed by those strings! So we’re hashing and comparing strings all the time when executing Zef.
This next optimization uses pointers to hash-consed Symbol objects instead of strings for all of those lookups. This is a large change in terms of files impacted, but it’s really quite simple (a sketch of the symbol machinery follows the list):
* There’s a new Symbol class in symbol.h and symbol.cpp. Symbols can be turned into strings and vice versa. Turning a string into a symbol involves a global hashtable to perform hash consing. This ensures that pointer equality on Symbol* is a valid way to check if two symbols are the same.
* Lots of places where we now refer to pre-cooked symbols instead of string literals, like Symbol::subscript instead of using the string “subscript”.
* Lots of places where we just change function signatures to use Symbol* instead of const std::string&.
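A minimal sketch of the hash-consing machinery, under the assumption that symbols are interned once and live forever:

#include <string>
#include <unordered_map>

class Symbol {
public:
    // The only way to get a Symbol is through the global intern table, so
    // two equal strings always yield the same Symbol*. Pointer equality
    // then replaces string hashing and comparison everywhere downstream.
    static Symbol* intern(const std::string& name) {
        static std::unordered_map<std::string, Symbol*> table;
        auto [it, inserted] = table.try_emplace(name, nullptr);
        if (inserted)
            it->second = new Symbol(name); // never freed: symbols are immortal
        return it->second;
    }
    const std::string& string() const { return name_; }
private:
    explicit Symbol(std::string name) : name_(std::move(name)) {}
    std::string name_;
};

The string is hashed exactly once, at parse/intern time; after that, what used to be a string-keyed hashtable probe is a compare of two pointers.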
This is an 18% speed-up. At this point, Zef is 24x slower than CPython 3.10, 54x slower than Lua 5.4.7, and 15x slower than QuickJS-ng 0.14.0. We’re now 1.46x faster than where we started.
This change delivers a significant win by allowing inlining of important functions.
Almost all of the action in this change is the introduction of the new valueinlines.h header. This is a separate header from value.h because its inline definitions rely on headers that themselves need to include value.h.
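The pattern, sketched with abridged and partly hypothetical file contents:

// value.h: declarations only, so that headers like object.h can include
// it without creating a cycle.
#include <cstdint>
class Object;
class Value {
public:
    bool isInt32() const;
    Object* asObject() const;
private:
    uint64_t bits;
};

// valueinlines.h: included by the .cpp files that need the fast paths. It
// can freely include object.h (which itself includes value.h), and since
// the hot functions are defined in a header, every caller can inline them
// instead of paying for a cross-translation-unit call.
#include "value.h"
#include "object.h"
inline bool Value::isInt32() const { return bits < 0x100000000ull; }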
This is a 2.8% speed-up. At this point, Zef is 24x slower than CPython 3.10, 53x slower than Lua 5.4.7, and 15x slower than QuickJS-ng 0.14.0. We’re now 1.5x faster than where we started.
Sometimes the only way to make your language implementation better is to land a massive patch. Don’t let anyone tell you that good engineering happens in small, easy to digest changes. That’s not always the case! It’s certainly not the case if you want to have a fast implementation of a dynamic language!
This is a massive change that redoes how Object, ClassObject, and Context work so that objects are cheaper to allocate and accesses can avoid hashtable lookups. This change combines three changes into one:
Previously, each lexical scope allocated a Context object, and each Context object contained a hashtable of name-value pairs - i.e. the variables in that scope. Objects were even worse: each object was a hashtable that mapped the classes that the object was an instance of to Context objects. This was necessary because if you have an instance of Bar that descends from Foo, then Bar and Foo could both close over different scopes and they could share the same names for distinct fields (since fields are private by default in Zef). Clearly this is super inefficient! This change introduces the idea of Storage, which holds data at Offsets determined by some Context. So, Contexts still exist, but they are created ahead of time as part of the AST resolve pass; when objects or scopes are created, we just allocate a Storage according to the size computed by the corresponding Context.
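A sketch of the resolve-time/run-time split - the names Storage, Offset, and Context follow the post, but the members are guesses (and this assumes the Symbol and Value sketches from earlier):

#include <cstddef>
#include <unordered_map>
#include <vector>

struct Offset { size_t slot; };

// Built once per lexical scope or class, during the AST resolve pass.
struct Context {
    std::unordered_map<Symbol*, Offset> offsets;
    size_t size = 0;
    Offset declare(Symbol* name) {
        auto [it, inserted] = offsets.try_emplace(name, Offset{size});
        if (inserted)
            size++;
        return it->second;
    }
};

// Created at run time: one flat allocation, sized by the Context. Reads
// and writes are plain indexed accesses, with no per-instance hashtable.
struct Storage {
    std::vector<Value> slots;
    explicit Storage(const Context& context) : slots(context.size) {}
    Value& at(Offset offset) { return slots[offset.slot]; }
};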
This is a classic technique that forms the foundation of modern high performance dynamic language implementations. But while this technique is classically discussed in the context of JIT compilers, in this change we’ll use it in an interpreter. The idea of inline caches is that given a location in code that does expr.name, we remember the last type that expr dynamically had and the last offset that name resolved to. In this change, this is done by placement-constructing a specialized AST node on top of the generic one. There are five parts to this:
Say that we have a class Foo inside a lexical scope that has variable x, and one of Foo’s methods wants to access x. And, let’s say that there are no functions or variables called x inside Foo. We should be able to access x without any checks, right? Well, not quite - someone could subclass Foo and add a getter called x, in which case that access should resolve to the getter, not the outer x. The way that inline caches handle this is by setting Watchpoints within the runtime. In this example, it’s the “was the name overridden” watchpoint.
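Putting the inline-cache and watchpoint ideas together, a monomorphic cache on a dot access might be shaped like the sketch below. All names are hypothetical, it reuses the shapes from the earlier sketches, and the real change placement-constructs a specialized node rather than branching like this:

#include <vector>

struct DotGet : Node {
    Node* base;
    Symbol* name;
    ClassObject* cachedClass = nullptr; // last dynamic type seen at this site
    Offset cachedOffset{};              // where name resolved to in that type
    bool cacheValid = false;            // cleared when a watchpoint fires

    Value evaluate(Scope& scope) override {
        Object* object = base->evaluate(scope).asObject();
        if (cacheValid && object->classObject() == cachedClass)
            return object->storage().at(cachedOffset); // hit: compare + load
        return evaluateSlow(object); // miss: full lookup, refill the cache,
    }                                // and register watchpoints
    Value evaluateSlow(Object* object); // definition elided in this sketch
};

// The watchpoint side: caches subscribe to a condition ("the name x has
// not been overridden"); the rare event that breaks the condition - e.g.
// a subclass defining a getter called x - fires it and kills the caches.
struct Watchpoint {
    std::vector<DotGet*> dependents;
    void watch(DotGet* site) { dependents.push_back(site); }
    void fire() {
        for (DotGet* site : dependents)
            site->cacheValid = false; // back to the generic lookup path
        dependents.clear();
    }
};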
Each of these three features is large. I chose to implement all of them at once because:
* A new object model would not be meaningfully better unless it allowed inline caching to work well. So, I codeveloped the object model and inline caches.
* Inline caches wouldn’t provide meaningful benefit unless I also had watchpoints, because so many cacheable conditions require watchpoints.
* The new object model and watchpoints have to work great together.
I started this change by writing a dumb version of CacheRecipe along with what ended up being the mostly final version of Storage and Offsets.
Some of the hardest work involved replacing the old style of intrinsic classes with a new style. Take arrays as an example. Previously, ArrayObject::tryCallMethod implemented all ArrayObject methods by simply intercepting the virtual Object::tryCallMethod call. But in the new object model, Object has no vtable and no virtual methods; instead Object::tryCallMethod forwards to object->classObject()->tryCallMethod(object, …). So, for Array to have methods, we need to create a class for Array that has those methods. Hence, this change shifts a lot of intrinsic functionality from being spread throughout the implementation to being focused inside makerootcontext.cpp. This is a good outcome, because it means that all of the inline caching machinery just works for native/intrinsic functions on objects!
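The new dispatch shape can be sketched like so - signatures guessed, with the Storage sketch from earlier assumed:

#include <cstddef>

struct Object;

// All behavior hangs off the class; Object itself is just data.
struct ClassObject {
    virtual ~ClassObject() = default;
    virtual bool tryCallMethod(Object* self, Symbol* name,
                               Value* args, size_t argc, Value& result) = 0;
};

struct Object {
    ClassObject* klass;
    Storage storage;
    ClassObject* classObject() const { return klass; }

    // No vtable needed on Object: one forwarding call, identical for
    // user-defined classes and for intrinsics like Array, whose class is
    // built (with native methods) in makerootcontext.cpp.
    bool tryCallMethod(Symbol* name, Value* args, size_t argc, Value& result) {
        return klass->tryCallMethod(this, name, args, argc, result);
    }
};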
This massive change has a massive win: 4.55x faster! At this point, Zef is 5.2x slower than CPython 3.10, 11.7x slower than Lua 5.4.7, and 3.3x slower than QuickJS-ng 0.14.0. In other words, the margin by which Zef (compiled with Fil-C++) loses to those other interpreters is right around what Fil-C costs (those other interpreters are compiled with Yolo-C).
We’re now 6.8x faster than where we started.
Before this change, the Zef interpreter would pass arguments to functions using a const std::optional<std::vector<Value>>&. The optional was needed because in some corner cases we have to distinguish between:
o.getter
o.function()
In most cases, in Zef, these two things are the same: they are a function call. Here’s an exception:
o.NestedClass
o.NestedClass()
The first case gets the NestedClass object, while the second case instantiates it.
Therefore, we need to be able to tell whether we’re passing an empty arguments array because this is a function call with zero arguments, or because this was a getter-like call.
In any case, this is wildly inefficient because it means that the caller is allocating a vector and then the callee is allocating an arguments scope that is a copy of that vector.
This change introduces the Arguments type, which is shaped exactly like the arguments scope that the callee would have allocated. So, now we have the caller allocate these directly. This more than halves the number of allocations needed to make a call:
* Even in Yolo-C++, we’d be halving the allocations because we’d no longer have to malloc the backing store of the vector.
* In Fil-C++, the std::optional needs to be heap allocated. Even if we didn’t have a std::optional, passing a const std::vector<>& would be an allocation because anything stack allocated is heap allocated.
* It so happened that the callers would reallocate the vector multiple times rather than presizing it.
A lot of this change is just changing function signatures to take Arguments* instead of the optional vectors.
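A sketch of the layout idea; the trailing-array trick here is my assumption about how “shaped exactly like the arguments scope” might be achieved, not the project’s actual code:

#include <cstddef>
#include <cstdint>
#include <new>

struct Arguments {
    bool isCall;    // distinguishes o.function() from getter-style o.function
    uint32_t count; // number of argument values

    // The values live inline, right after this header, already in the
    // layout the callee's argument scope wants - so the callee can adopt
    // this object directly instead of copying a vector into a new scope.
    Value* slots() { return reinterpret_cast<Value*>(this + 1); }

    // One caller-side allocation replaces the caller-side vector plus the
    // callee-side scope copy.
    static Arguments* create(bool isCall, uint32_t count) {
        void* memory = ::operator new(sizeof(Arguments) + count * sizeof(Value));
        Arguments* args = new (memory) Arguments;
        args->isCall = isCall;
        args->count = count;
        return args;
    }
};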
This is a 1.33x speed-up. At this point, Zef is 3.9x slower than CPython 3.10, 8.8x slower than Lua 5.4.7, and 2.5x slower than QuickJS-ng 0.14.0. We’re now 9.05x faster than where we started.
Like Ruby and many other object oriented languages, Zef has private instance fields by default. They are private in the sense that only that instance can see them. Take this code:
class Foo {
my f
fn (inF) f = inF
}
This is a class Foo that takes a value for f in its constructor, and stores it to a local variable scoped just to instances. For example, this wouldn’t work:
class Foo {
my f
fn (inF) f = inF
fn nope(o) o.f
}
println(Foo(42).nope(Foo(666)))
The o.f expression in nope cannot access o’s f even though o is of the same type. This is just an outcome of the fact that fields work by appearing in the scope chain of class members. When we do something like o.f, we’re asking to call a method called f. Hence, we get lots of code like:
class Foo {
my f
fn (inF) f = inF
fn f f # method called f that returns local variable f
}
class Foo {
readable f # shorthand for `my f` and `fn f f`
fn (inF) f = inF
}
Hence, lots of method calls end up being calls to getters. It’s super wasteful to have all of those calls evaluate the AST of the getter along with everything this entails!
So the next change is to specialize getters.
The heart of this change is in UserFunction, which uses the new Node::inferGetter method to infer whether the body of the function is just a getter. The important bits of this are:
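The actual code isn’t reproduced here, but a hedged guess at the general shape of the inference (hypothetical names and signatures):

struct Get; // a bare variable-read node

struct Node {
    virtual ~Node() = default;
    // Non-null only if this node is exactly a read of a single variable.
    virtual Get* inferGetter() { return nullptr; } // most nodes: not a getter
};

struct Get : Node {
    Symbol* name;
    Get* inferGetter() override { return this; }
};

// In UserFunction: if the whole body infers to a Get, calls to this
// function can skip AST evaluation entirely and just load the resolved
// slot for that variable.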
...
Read the original on zef-lang.dev »
Powerful video tools that run entirely in your browser. No uploads to servers, no waiting, nothing to install - just private video processing.
...
Read the original on vidstudio.app »