10 interesting stories served every morning and every evening.
Deadline: end of WWDC 2026. The exact dates haven’t been announced yet and this timer is based on the estimated schedule (June 9–13). I’ll update it when Apple confirms the dates. They have until the conference ends.
The iOS keyboard has been broken since at least iOS 17, and it’s somehow only gotten worse. iOS 26 has been my breaking point. Autocorrect is nearly useless and often hostile; that part I’m used to. But now even correctly tapped letters aren’t registering. This isn’t just me.
iOS has bugs across the whole ecosystem. But having the keyboard, the thing I interact with hundreds of times a day on my primary device, get progressively worse with every update is absolutely maddening.
I randomly tried Android again for a few months last spring. Using a functioning keyboard was revelatory. But I came crawling back to iOS because I’m weak and the orange iPhone was pretty and the Pixel 10 was boring and I caved to the blue bubble pressure. But the keyboard on this beautiful phone is worse than ever.
So here’s the deal, Apple, if that’s even your real name: fix this broken keyboard, or at the very least publicly acknowledge it’s broken and commit to fixing it in iOS 27 or earlier. If that countdown hits zero without either of those things happening, I’m switching to Android for good. (Good = at least two calendar years.)
I know losing one customer means absolutely nothing to your bottom line. But I’d like to think it should mean something to the engineers, UX designers, product people, and whoever else had a hand in building this thing.
You were the “it just works” company. Now you’re just a fruit that I used to know.
...
Read the original on ios-countdown.win »
MonoSketch is an open-source project licensed under the Apache License 2.0.
If you find this project useful, please consider starring the repository on GitHub. Contributions are also welcome through pull requests or by opening issues on GitHub.
If you would like to support the project financially, you can do so by becoming a GitHub Sponsor or contributing via Ko-fi.
...
Read the original on monosketch.io »
...
Read the original on github.com »
A free browser game that challenges you to press “No Tip” while dark patterns try to trick you into tipping. From tiny buttons and guilt-trip modals to fake loading screens and rigged sliders — can you escape the tip screen?
Skip the Tips is a satirical take on modern tipping culture. Every checkout screen has become a guilt machine. This game lets you practice saying no — if you can find the button.
Features over 30 dark patterns inspired by real-world tipping screens, progressive difficulty, and a timer that keeps shrinking. Play free in your browser — no downloads, no sign-ups, no tip required.
...
Read the original on skipthe.tips »
...
Read the original on github.com »
OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”
While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.
OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.
As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift that has largely gone unreported — outside highly specialized outlets.
And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.
OpenAI, which also makes the Sora video artificial intelligence app, was founded as a nonprofit scientific research lab in 2015. Its original purpose was to benefit society by making its findings public and royalty-free rather than to make money.
To raise the money that developing its AI models would require, OpenAI, under the leadership of CEO Sam Altman, created a for-profit subsidiary in 2019. Microsoft initially invested US$1 billion in this venture; by 2024 that sum had topped $13 billion.
In exchange, Microsoft was promised a portion of future profits, capped at 100 times its initial investment. But the software giant didn’t get a seat on OpenAI’s nonprofit board — meaning it lacked the power to help steer the AI venture it was funding.
A subsequent round of funding in late 2024, which raised $6.6 billion from multiple investors, came with a catch: the funding would become debt unless OpenAI converted to a more traditional for-profit business in which investors could own shares, without any caps on profits, and possibly occupy board seats.
In October 2025, OpenAI reached an agreement with the attorneys general of California and Delaware to become a more traditional for-profit company.
Under the new arrangement, OpenAI was split into two entities: a nonprofit foundation and a for-profit business.
The restructured nonprofit, the OpenAI Foundation, owns about one-fourth of the stock in a new for-profit public benefit corporation, the OpenAI Group. Both are headquartered in California but incorporated in Delaware.
A public benefit corporation is a business that must consider interests beyond shareholders, such as those of society and the environment, and it must issue an annual benefit report to its shareholders and the public. However, it is up to the board to decide how to weigh those interests and what to report in terms of the benefits and harms caused by the company.
The new structure is described in a memorandum of understanding signed in October 2025 by OpenAI and the California attorney general, and endorsed by the Delaware attorney general.
Many business media outlets heralded the move, predicting that it would usher in more investment. Two months later, SoftBank, a Japanese conglomerate, finalized a $41 billion investment in OpenAI.
Most charities must file forms annually with the Internal Revenue Service with details about their missions, activities and financial status to show that they qualify for tax-exempt status. Because the IRS makes the forms public, they have become a way for nonprofits to signal their missions to the world.
In its forms for 2022 and 2023, OpenAI said its mission was “to build general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return.”
That mission statement changed as of OpenAI’s 990 form for 2024, which the company filed with the IRS in late 2025: it became “to ensure that artificial general intelligence benefits all of humanity.”
OpenAI had dropped its commitment to safety from its mission statement — along with a commitment to being “unconstrained” by a need to make money for investors. According to Platformer, a tech media outlet, it has also disbanded its “mission alignment” team.
In my view, these changes explicitly signal that OpenAI is making its profits a higher priority than the safety of its products.
To be sure, OpenAI continues to mention safety when it discusses its mission. “We view this mission as the most important challenge of our time,” it states on its website. “It requires simultaneously advancing AI’s capability, safety, and positive impact in the world.”
Nonprofit boards are responsible for key decisions and upholding their organization’s mission.
Unlike private companies, board members of tax-exempt charitable nonprofits cannot personally enrich themselves by taking a share of earnings. In cases where a nonprofit owns a for-profit business, as OpenAI did with its previous structure, investors can take a cut of profits — but they typically do not get a seat on the board or have an opportunity to elect board members, because that would be seen as a conflict of interest.
The OpenAI Foundation now has a 26% stake in OpenAI Group. In effect, that means that the nonprofit board has given up nearly three-quarters of its control over the company. Software giant Microsoft owns a slightly larger stake — 27% of OpenAI’s stock — due to its $13.8 billion investment in the AI company to date. OpenAI’s employees and its other investors own the rest of the shares.
The main goal of OpenAI’s restructuring, which it called a “recapitalization,” was to attract more private investment in the race for AI dominance.
It has already succeeded on that front.
As of early February 2026, the company was in talks with SoftBank for an additional $30 billion and stands to get up to a total of $60 billion from Amazon, Nvidia and Microsoft combined.
OpenAI is now valued at over $500 billion, up from $300 billion in March 2025. The new structure also paves the way for an eventual initial public offering, which, if it happens, would not only help the company raise more capital through stock markets but would also increase the pressure to make money for its shareholders.
OpenAI says the foundation’s endowment is worth about $130 billion.
Those numbers are only estimates because OpenAI is a privately held company without publicly traded shares. That means these figures are based on market value estimates rather than any objective evidence, such as market capitalization.
When he announced the new structure, California Attorney General Rob Bonta said, “We secured concessions that ensure charitable assets are used for their intended purpose.” He also predicted that “safety will be prioritized” and said the “top priority is, and always will be, protecting our kids.”
At the same time, several conditions in the OpenAI restructuring memo are designed to promote safety, including:
A safety and security committee on the OpenAI Foundation board has the authority to “require mitigation measures” that could potentially include the halting of a release of new OpenAI products based on assessments of their risks.
The for-profit OpenAI Group has its own board, which must consider only OpenAI’s mission — rather than financial issues — regarding safety and security issues.
The OpenAI Foundation’s nonprofit board gets to appoint all members of the OpenAI Group’s for-profit board.
But given that neither the foundation’s mission nor the OpenAI Group’s explicitly alludes to safety, it will be hard to hold their boards accountable for it.
Furthermore, since all but one board member currently serve on both boards, it is hard to see how they might oversee themselves. And the memorandum signed by the California attorney general doesn’t indicate whether he was aware of the removal of any reference to safety from the mission statement.
There are alternative models that I believe would serve the public interest better than this one.
When Health Net, a California nonprofit health maintenance organization, converted to a for-profit insurance company in 1992, regulators required that 80% of its equity be transferred to another nonprofit health foundation. Unlike with OpenAI, the foundation had majority control after the transformation.
A coalition of California nonprofits has argued that the attorney general should require OpenAI to transfer all of its assets to an independent nonprofit.
Another example is The Philadelphia Inquirer. The Pennsylvania newspaper became a for-profit public benefit corporation in 2016. It belongs to the Lenfest Institute, a nonprofit.
This structure allows Philadelphia’s biggest newspaper to attract investment without compromising its purpose — journalism serving the needs of its local communities. It’s become a model for potentially transforming the local news industry.
At this point, I believe that the public bears the burden of two governance failures. One is that OpenAI’s board has apparently abandoned its mission of safety. And the other is that the attorneys general of California and Delaware have let that happen.
...
Read the original on theconversation.com »
Brussels is going head-to-head with social media platforms to change addictive design.
The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world’s most popular apps.
Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission’s declaration that TikTok’s design is addictive to users — especially children.
The fact that the Commission said TikTok should change the basic design of its service is “ground-breaking for the business model fueled by surveillance and advertising,” said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group.
That doesn’t bode well for other platforms, particularly Meta’s Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.
The findings laid out a week ago mark the first time the Commission has set out its stance on the design of a social media platform under its Digital Services Act, the EU’s flagship online-content law that Brussels says is essential for protecting users.
TikTok can now defend its practices and review all the evidence the Commission considered — and has said it would fight these findings. If it fails to satisfy the Commission, the app could face fines of up to 6 percent of annual global revenue.
It’s the first time any regulator has attempted to set a legal standard for the addictiveness of platform design, a senior Commission official said in a briefing to reporters.
“The findings mark a turning point [because] the Commission is treating addictive design on social media as an enforceable risk” under the Digital Services Act, said Lena-Maria Böswald, senior policy researcher at think tank Interface.
Jan Penfrat, senior policy adviser at civil rights group EDRi, said it would be “very, very strange for the Commission to not then use this as a template and go after other companies as well.”
The Digital Services Act requires platforms like TikTok to assess and mitigate risks to their users. But these risks are vaguely defined in the law, so until now it had been unclear exactly where the regulator would draw the line.
Two years after the TikTok probe was launched, the Commission has opted to strike at the heart of platform design, claiming it poses a risk to the mental health of users, particularly children. The Commission’s other concerns with TikTok were settled amicably between the two sides.
At a briefing with reporters, EU tech chief Henna Virkkunen said the findings signal that the Commission’s work is entering a new stage of maturity when it comes to systemic risks.
Facebook and Instagram have been under investigation over the addictiveness of their platforms since May 2024, including whether they endanger children. Just like TikTok, the design and algorithms of the platforms are under scrutiny.
Meta has mounted a staunch defense in an ongoing California case, in which it is accused of knowingly designing an addictive social media platform that hurts users. TikTok and Snap settled the same case before it went to trial.
TikTok spokesperson Paolo Ganino said the Commission’s findings “present a categorically false and entirely meritless depiction of our platform and we will take whatever steps are necessary to challenge these findings through every means available to us.”
The Commission could eventually agree with platforms on a wide range of changes that address addictive design. What they decide will depend on the different risk profiles and patterns of use of each platform — as well as how each company defends itself.
That likely means it will take a while for TikTok to make any change to its systems, as the platform reviews the evidence and tries to negotiate a solution with the regulator.
In another, simpler DSA enforcement case, it took the Commission more than a year after issuing preliminary findings to declare Elon Musk’s X was not compliant with its obligations on transparency.
TikTok may pursue a series of changes and may push the Commission to adopt a lighter regulatory approach. The video-sharing giant likely won’t “get it right” the first time, said EDRi’s Penfrat, and it may take a few tries to satisfy Brussels.
“It could be anything from changing default settings, to outright prohibiting a specific design feature, or requiring more user control,” said Peter Chapman, a governance researcher and lawyer who is associate director at the Knight-Georgetown Institute.
He expects the changes could be different for each platform — as while the findings show the Commission’s thinking, interventions must be targeted depending on how design features are used.
“Multiple platforms use similar design features,” but they serve different purposes and carry different risks, said Chapman, pointing to notifications that try to draw users back in: notifications for new messages carry a different risk of addiction than those alerting a user about a livestream, he said.
...
Read the original on www.politico.eu »
United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares photos against billions of images scraped from the internet.
The deal extends access to Clearview tools to Border Patrol’s headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to “disrupt, degrade, and dismantle” people and networks viewed as security threats.
The contract states that Clearview provides access to “over 60+ billion publicly available images” and will be used for “tactical targeting” and “strategic counter-network analysis,” indicating the service is intended to be embedded in analysts’ day-to-day intelligence work rather than reserved for isolated investigations. CBP says its intelligence units draw from a “variety of sources,” including commercially available tools and publicly available data, to identify people and map their connections for national security and immigration operations.
The agreement anticipates analysts handling sensitive personal data, including biometric identifiers such as face images, and requires nondisclosure agreements for contractors who have access. It does not specify what kinds of photos agents will upload, whether searches may include US citizens, or how long uploaded images or search results will be retained.
The Clearview contract lands as the Department of Homeland Security faces mounting scrutiny over how face recognition is used in federal enforcement operations far beyond the border, including large-scale actions in US cities that have swept up US citizens. Civil liberties groups and lawmakers have questioned whether face-search tools are being deployed as routine intelligence infrastructure, rather than limited investigative aids, and whether safeguards have kept pace with expansion.
Last week, Senator Ed Markey introduced legislation that would bar ICE and CBP from using face recognition technology altogether, citing concerns that biometric surveillance is being embedded without clear limits, transparency, or public consent.
CBP did not immediately respond to questions about how Clearview would be integrated into its systems, what types of images agents are authorized to upload, and whether searches may include US citizens.
Clearview’s business model has drawn scrutiny because it relies on scraping photos from public websites at scale. Those images are converted into biometric templates without the knowledge or consent of the people photographed.
Clearview also appears in DHS’s recently released artificial intelligence inventory, linked to a CBP pilot initiated in October 2025. The inventory entry ties the pilot to CBP’s Traveler Verification System, which conducts face comparisons at ports of entry and other border-related screenings.
CBP states in its public privacy documentation that the Traveler Verification System does not use information from “commercial sources or publicly available data.” It is more likely, at launch, that Clearview access would instead be tied to CBP’s Automated Targeting System, which links biometric galleries, watch lists, and enforcement records, including files tied to recent Immigration and Customs Enforcement operations in areas of the US far from any border.
Clearview AI did not immediately respond to a request for comment.
Recent testing by the National Institute of Standards and Technology, which evaluated Clearview AI among other vendors, found that face-search systems can perform well on “high-quality visa-like photos” but falter in less controlled settings. Images captured at border crossings that were “not originally intended for automated face recognition” produced error rates that were “much higher, often in excess of 20 percent, even with the more accurate algorithms,” federal scientists say.
The testing underscores a central limitation of the technology: NIST found that face-search systems cannot reduce false matches without also increasing the risk that the systems fail to recognize the correct person.
As a result, NIST says agencies may operate the software in an “investigative” setting that returns a ranked list of candidates for human review rather than a single confirmed match. When systems are configured to always return candidates, however, searches for people not already in the database will still generate “matches” for review. In those cases, the results will always be 100 percent wrong.
...
Read the original on www.wired.com »
Specification: SBOM 1.0 (Sandwich Bill of Materials)
Status: Draft
Maintainer: The SBOM Working Group
License: MIT (Mustard Is Transferable)
Modern sandwich construction relies on a complex graph of transitive ingredients sourced from multiple registries (farms, distributors, markets). Consumers have no standardized way to enumerate the components of their lunch, assess ingredient provenance, or verify that their sandwich was assembled from known-good sources. SBOM addresses this by providing a machine-readable format for declaring the full dependency tree of a sandwich, including sub-components, licensing information, and known vulnerabilities.
A typical sandwich contains between 6 and 47 direct dependencies, each pulling in its own transitive ingredients. A “simple” BLT depends on bacon, which depends on pork, which depends on a pig, which depends on feed corn, water, antibiotics, and a farmer whose field hasn’t flooded yet. The consumer sees three letters, but the supply chain sees a directed acyclic graph with cycle detection issues (the pig eats the corn that grows in the field that was fertilized by the pig).
The 2025 egg price crisis was a cascading failure equivalent to a left-pad incident, except it affected breakfast. A single avian flu outbreak took down the entire egg ecosystem for months. Post-incident analysis revealed that 94% of affected sandwiches had no lockfile and were resolving eggs to latest at assembly time.
An SBOM document MUST be a JSON file with the .sbom extension, after YAML was considered and rejected on the grounds that the sandwich industry has enough problems without adding whitespace sensitivity.
Each sandwich component MUST include the following fields (a complete example document appears after the license definitions below):
surl (required): A Sandwich URL uniquely identifying the ingredient. Format: surl:type/name@version. Follows the same convention as PURL but for food. Examples:
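Illustrative, non-normative examples, following the version conventions given under version below (the type namespace is not defined by this draft; these values are hypothetical):

  surl:vegetable/tomato@2025-08-14
  surl:dairy/cheddar@18m
  surl:bakery/sourdough@2.3.1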
name (required): The canonical name of the ingredient as registered in a recognized food registry. Unregistered ingredients (e.g., “that sauce from the place”) MUST be declared as unverified-source and will trigger a warning during sandwich linting.
version (required): The specific version of the ingredient. Tomatoes MUST use calendar versioning (harvest date). Cheese MUST use age-based versioning (e.g., cheddar@18m). Bread follows semver, where a MAJOR version bump indicates a change in grain type, MINOR indicates a change in hydration percentage, and PATCH indicates someone left it out overnight and it’s a bit stale but probably fine.
supplier (required): The origin registry. Valid registries include farm://, supermarket://, farmers-market://, and back-of-the-fridge://. The latter is considered an untrusted source and components resolved from it MUST include a best-before integrity check.
integrity (required): A SHA-256 hash of the ingredient at time of acquisition.
license (required): The license under which the ingredient is distributed. Common licenses include:
MIT (Mustard Is Transferable): The ingredient may be used in any sandwich without restriction. Attribution appreciated but not required.
GPL (General Pickle License): If you include a GPL-licensed ingredient, the entire sandwich becomes open-source. You must provide the full recipe to anyone who asks. Pickle vendors have been particularly aggressive about this.
AGPL (Affero General Pickle License): Same as GPL, but if you serve the sandwich over a network (delivery apps), you must also publish the recipe. This is why most restaurants avoid AGPL pickles.
BSD (Bread, Sauce, Distributed): Permissive. You can do whatever you want as long as you keep the original baker’s name on the bread bag, and also a second copy of the baker’s name, and also don’t use the baker’s name to promote your sandwich without permission. There are four variants of this license and nobody can remember which is which.
SSPL (Server Side Pickle License): You may use this pickle in your sandwich, but if you offer sandwich-making as a service, you must open-source your entire kitchen, including the weird drawer with all the takeaway menus. Most cloud sandwich providers have stopped serving SSPL pickles entirely.
Proprietary: The ingredient’s composition is not disclosed. Common for “secret sauces.” Consumption is permitted but redistribution, reverse-engineering, or asking what’s in it are prohibited by the EULA you agreed to by opening the packet.
Public Domain: The ingredient’s creator has waived all rights. Salt, for example, has been public domain since approximately the Jurassic period, though several companies have attempted to relicense it.
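Tying the field definitions together, here is a complete but abridged example document (non-normative; the top-level sbomVersion and components keys, the supplier names, and the sha256- integrity prefix are assumptions, since this draft defines only the per-component fields):

  {
    "sbomVersion": "1.0",
    "name": "blt",
    "components": [
      {
        "surl": "surl:bakery/sourdough@2.3.1",
        "name": "sourdough",
        "version": "2.3.1",
        "supplier": "farmers-market://hillside-bakery",
        "integrity": "sha256-<digest computed at acquisition>",
        "license": "MIT"
      },
      {
        "surl": "surl:vegetable/tomato@2025-08-14",
        "name": "tomato",
        "version": "2025-08-14",
        "supplier": "farm://riverside",
        "integrity": "sha256-<digest computed at acquisition>",
        "license": "Public Domain"
      }
    ]
  }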
Sandwich assembly MUST resolve dependencies depth-first. If two ingredients declare conflicting sub-dependencies (e.g., sourdough requires starter-culture@wild but the prosciutto’s curing process pins salt@himalayan-pink), the assembler SHOULD attempt version negotiation. If negotiation fails, the sandwich enters a conflict state and MUST NOT be consumed until a human reviews the dependency tree and makes a judgement call.
Circular dependencies are permitted but discouraged. A sandwich that contains bread made with beer made with grain from the same field as the bread is technically valid but will cause the resolver to emit a warning about “co-dependent sourdough.”
All SBOM documents SHOULD be scanned against the National Sandwich Vulnerability Database (NSVD). Known vulnerabilities include:
CVE-2024-MAYO: Mayonnaise left at room temperature for more than four hours. Severity: Critical. Affected versions: all. No patch available; mitigation requires refrigeration, which the specification cannot enforce.
CVE-2023-GLUTEN: Bread contains gluten. This is not a bug; it is a feature of wheat. However, it must be disclosed because approximately 1% of consumers will experience adverse effects, and the remaining 99% will ask about it anyway.
CVE-2025-AVO: Avocado ripeness window is approximately 17 minutes. Version pinning is ineffective. The working group recommends vendoring avocado (i.e., buying it already mashed) to reduce exposure to ripeness drift.
CVE-2019-SPROUT: Alfalfa sprouts were found to be executing arbitrary bacteria in an unsandboxed environment. Severity: High. The vendor disputes this classification.
Each ingredient MUST include a signed provenance attestation from the supplier. The attestation MUST be generated in a hermetic build environment and MUST NOT be generated in a build environment where other food is being prepared simultaneously, as this introduces the risk of cross-contamination of provenance claims.
For farm-sourced ingredients, the attestation chain SHOULD extend to the seed or animal of origin. A tomato’s provenance chain includes the seed, the soil, the water, the sunlight, the farmer, the truck, the distributor, and the shelf it sat on for a period the supermarket would prefer not to disclose.
Eggs are worse, because an egg’s provenance attestation is generated by a chicken that may itself lack a valid attestation chain. The working group has deferred the question of chicken-or-egg provenance ordering to version 2.0.
A sandwich MUST be reproducible. Given identical inputs, two independent assemblers MUST produce bite-for-bite identical sandwiches, which in practice is impossible. The specification handles this by requiring assemblers to document all sources of non-determinism in a sandwich.lock file (sketched below), including:
Whether the assembler was “just eyeballing it” for condiment quantities
Reproducible sandwich builds remain aspirational. A compliance level of “close enough” is acceptable for non-safety-critical sandwiches. Safety-critical sandwiches SHOULD target full reproducibility.
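This draft does not define a schema for sandwich.lock; a minimal sketch of what one could record, reusing the hypothetical conventions from the example document above:

  {
    "lockfileVersion": 1,
    "resolved": {
      "surl:vegetable/tomato@2025-08-14": {
        "supplier": "farm://riverside",
        "integrity": "sha256-<digest computed at acquisition>"
      }
    },
    "nonDeterminism": [
      "condiment quantities were eyeballed, not measured"
    ]
  }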
Consumers SHOULD audit their full dependency tree before consumption. A sbom audit command (an illustrative run appears after this list) will flag any ingredient that:
Has not been updated in more than 12 months
Is maintained by a single farmer with no succession plan (see also: goat farming)
Has more than 200 transitive sub-ingredients
Was sourced from a registry that does not support 2FA
Contains an ingredient whose maintainer has mass-transferred ownership to an unknown entity in a different country (see: the left-lettuce incident)
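The output format of sbom audit is likewise unspecified; an illustrative session, reusing names that appear elsewhere in this document, might look like:

  $ sbom audit blt.sbom
  WARN surl:condiment/that-sauce-from-the-place@latest: resolved from unverified-source
  WARN surl:vegetable/lettuce@2025-11-02: maintainer mass-transferred ownership (see: the left-lettuce incident)
  FAIL surl:condiment/mayonnaise@latest: matches CVE-2024-MAYO (Critical; no patch available, refrigerate to mitigate)
  3 findings (1 critical). Sandwich MUST NOT be consumed until a human reviews the dependency tree.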
Early adoption has been mixed. The artisanal sandwich community objects to machine-readable formats on philosophical grounds, arguing that a sandwich’s ingredients should be discoverable through the act of eating it. The fast food industry has expressed support in principle but notes that their sandwiches’ dependency trees are trade secrets and will be shipped as compiled binaries.
The EU Sandwich Resilience Act (SRA) requires all sandwiches sold or distributed within the European Union to include a machine-readable SBOM by Q3 2027. Sandwiches without a valid SBOM will be denied entry at the border. The European Commission has endorsed the specification as part of its broader lunch sovereignty agenda, arguing that member states cannot depend on foreign sandwich infrastructure without visibility into the ingredient graph. A working paper on “strategic autonomy in condiment supply chains” is expected Q2 2027.
The US has issued Executive Order 14028.5, which requires all sandwiches served in federal buildings to include an SBOM. The order does not specify whether it means Sandwich or Software Bill of Materials. Several federal agencies have begun submitting both.
The Software Heritage foundation archives all publicly available source code as a reference for future generations, and the Sandwich Heritage Foundation has adopted the same mission for sandwiches, with less success.
Every sandwich assembled under SBOM 1.0 is archived in a content-addressable store keyed by its integrity hash. The archive currently holds 14 sandwiches because most contributors cannot figure out how to hash a sandwich without eating it first. A BLT submitted in March was rejected because the tomato’s checksum changed during transit. The Foundation suspects condensation.
Long-term preservation remains an open problem. Software can be archived indefinitely on disk, but sandwiches introduce material constraints the specification was not designed for. The Foundation has explored freeze-drying, vacuum sealing, and “just taking a really detailed photo,” but none of these produce a bit-for-bit reproducible sandwich from the archive. The working group considers this a storage layer concern and out of scope for the specification.
Funding comes from individual donations and a pending grant application to the EU’s Horizon programme under the call for “digital preservation of cultural food heritage.” The application was rejected once already on the grounds that sandwiches are not digital, a characterization the Foundation disputes given that every sandwich under SBOM 1.0 is, by definition, a digital artifact with a hash.
This specification is dedicated to a small sandwich shop on Folsom Street in SoMa that made the best BLT the author has ever eaten, and which closed in 2019 without producing an SBOM or publishing its recipe in any machine-readable format.
This specification is provided “AS IS” without warranty of any kind, including but not limited to the warranties of edibility, fitness for a particular meal, and non-contamination. The SBOM Working Group is not responsible for any sandwich constructed in accordance with this specification that nonetheless tastes bad.
...
Read the original on nesbitt.io »