10 interesting stories served every morning and every evening.
I didn’t start running until I was in my late twenties, and even so I would end up in a pattern where I’d get motivated and go on a couple of runs, take a few days off, go on another run the following week, and next thing you know it’s been a month since I last ran. Rinse and repeat.
In July 2015, something changed. I headed out on a run on a Tuesday, then did another one the next day, and the day after, and… I took the Friday off. When I woke up on July 11, 2015 I remember thinking I could have done 4 days in a row, so I set out to try and do that. 4 days turned into a week, then a month, then two, then six, then a year, and here I am, ten years later.
I’ve had the privilege to run in some amazing places, from the streets of my hometown to the trails of national parks, on all seven continents. I’ve run solo and I’ve run with friends, I’ve run with music and I’ve run with my own thoughts. I’ve run through stress fractures, heart procedures, flus and other physical ailments. I’ve run in frigid sub zero weather and in sweltering heat. Each run has been a new adventure, and I’ve learned something different from every experience.
Running has changed my life, and I hope I’ll still keep this going in another decade. I’ve been extremely lucky to have had the support of my wonderful wife Molly throughout this journey, and I couldn’t have done it without her patience. How many times has she heard me say “I’ll be back in a few!” in the mornings?
...
Read the original on nodaysoff.run »
A new agentic IDE that works alongside you from prototype to production

I’m sure you’ve been there: prompt, prompt, prompt, and you have a working application. It’s fun and feels like magic. But getting it to production requires more. What assumptions did the model make when building it? You guided the agent throughout, but those decisions aren’t documented. Requirements are fuzzy and you can’t tell if the application meets them. You can’t quickly understand how the system is designed and how that design will affect your environment and performance. Sometimes it’s better to take a step back, think through decisions, and you’ll end up with a better application that you can easily maintain. That’s what Kiro helps you do with spec-driven development.

I’m excited to announce Kiro, an AI IDE that helps you deliver from concept to production through a simplified developer experience for working with AI agents. Kiro is great at ‘vibe coding’ but goes way beyond that—Kiro’s strength is getting those prototypes into production systems with features such as specs and hooks.

Kiro specs are artifacts that prove useful anytime you need to think through a feature in-depth, refactor work that needs upfront planning, or when you want to understand the behavior of systems—in short, most things you need to get to production. Requirements are usually uncertain when you start building, which is why developers use specs for planning and clarity. Specs can guide AI agents to a better implementation in the same way.

Kiro hooks act like an experienced developer catching things you miss or completing boilerplate tasks in the background as you work. These event-driven automations trigger an agent to execute a task in the background when you save, create, or delete files, or on a manual trigger.

Kiro accelerates the spec workflow by making it more integrated with development. In our example, we have an e-commerce application for selling crafts, to which we want to add a review system for users to leave feedback on crafts. Let’s walk through the three-step process of building with specs.

The e-commerce app that we are working with

Kiro unpacks requirements from a single prompt—type “Add a review system for products” and it generates user stories for viewing, creating, filtering, and rating reviews. Each user story includes EARS (Easy Approach to Requirements Syntax) notation acceptance criteria covering edge cases developers typically handle when building from basic user stories (see the illustrative example below). This makes your prompt assumptions explicit, so you know Kiro is building what you want.

Kiro then generates a design document by analyzing your codebase and approved spec requirements. It creates data flow diagrams, TypeScript interfaces, database schemas, and API endpoints—like the Review interfaces for our review system. This eliminates the lengthy back-and-forth on requirements clarity that typically slows development.

Kiro generates tasks and sub-tasks, sequences them correctly based on dependencies, and links each task back to requirements so nothing falls through the cracks. Each task includes details such as unit tests, integration tests, loading states, mobile responsiveness, and accessibility requirements for implementation. This lets you check work in steps rather than discovering missing pieces after you think you’re done.
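As a hedged illustration of what EARS-notation acceptance criteria look like (these follow the standard published EARS templates; they are not output copied from Kiro), a review-system user story might carry criteria such as:

WHEN a user submits a review with a rating outside the range 1-5, THE SYSTEM SHALL reject the submission and display a validation error.
WHILE a product has no reviews, THE SYSTEM SHALL display an empty state instead of a review list.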
As you can see below, Kiro has thought of writing unit tests for each task, added loading states, integration tests for the interaction between products and reviews, and responsive design and accessibility.

The task interface lets you trigger tasks one-by-one with a progress indicator showing execution status. Once complete, you can see the completion status inline and audit the work by viewing code diffs and agent execution history.

Kiro’s specs stay synced with your evolving codebase. Developers can author code and ask Kiro to update specs, or manually update specs to refresh tasks. This solves the common problem where developers stop updating original artifacts during implementation, causing documentation mismatches that complicate future maintenance.

4. Catch issues before they ship with hooks

Before submitting code, most developers run through a mental checklist: Did I break anything? Are tests updated? Is documentation current? This caution is healthy but can take a lot of manual work to implement.

Kiro’s agent hooks act like an experienced developer catching things you miss. Hooks are event-driven automations that execute when you save or create files—it’s like delegating tasks to a collaborator. Set up a hook once, and Kiro handles the rest. Some examples:

When you save a React component, hooks update the test file.

When you’re ready to commit, security hooks scan for leaked credentials.

Hooks enforce consistency across your entire team. Everyone benefits from the same quality checks, code standards, and security validations. For our review feature, I want to ensure any new React component follows the Single Responsibility Principle so developers don’t create components that do too many things. Kiro takes my prompt, generates an optimized system prompt, and selects the repository folders to monitor. Once this hook is committed to Git, it enforces the coding standard across my entire team—whenever anyone adds a new component, the agent automatically validates it against the guidelines. (A rough sketch of what such a hook might look like appears below.)

Beyond specs and hooks, Kiro includes all the features you’d expect from an AI code editor: Model Context Protocol (MCP) support for connecting specialized tools, steering rules to guide AI behavior across your project, and agentic chat for ad-hoc coding tasks with file, URL, and docs context providers. Kiro is built on Code OSS, so you can keep your VS Code settings and Open VSX-compatible plugins while working with our IDE. You get the full AI coding experience, plus the fundamentals needed for production.

Our vision is to solve the fundamental challenges that make building software products so difficult—from ensuring design alignment across teams and resolving conflicting requirements, to eliminating tech debt, bringing rigor to code reviews, and preserving institutional knowledge when senior engineers leave. The way humans and machines coordinate to build software is still messy and fragmented, but we’re working to change that. Specs is a major step in that direction.
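Returning to the Single Responsibility hook described above: conceptually, an event-driven hook pairs a file trigger with an instruction for the agent. The sketch below is purely illustrative; the format and every field name in it are my assumptions, not Kiro’s documented configuration:

// Hypothetical sketch only: the structure and field names here are assumed
// for illustration and are not Kiro’s documented hook format.
{
  "name": "enforce-single-responsibility",
  "trigger": { "event": "file_created", "paths": ["src/components/**/*.tsx"] },
  "instruction": "Check the new component against the Single Responsibility Principle and suggest splitting components that do too many things."
}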
Ready to experience spec-driven development? Kiro is free during preview, with some limits. We’re excited to see you try it out to build real apps and would love to hear from you on our Discord server.

To get started, download Kiro and sign in with one of our four login methods, including Google and GitHub. We support Mac, Windows, and Linux, and most popular programming languages. Our hands-on tutorial walks you through building a complete feature from spec to deployment. Start the tutorial.

Let’s connect - tag @kirodotdev on X, LinkedIn, or Instagram, and @kiro.dev on Bluesky, and share what you’ve built using the hashtag #builtwithkiro
...
Read the original on kiro.dev »
San Francisco and Oakland police appear to have repeatedly broken state law by sharing data from automated license plate cameras with federal law enforcement, according to records obtained by The Standard.

The cameras, made by Flock Safety, capture the license plate of every vehicle that passes them, then store the information in a database for use in police investigations.

Under a decade-old state law, California police are prohibited from sharing data from automated license plate readers with out-of-state and federal agencies. Attorney General Rob Bonta affirmed that fact in a 2023 notice to police.

The logs show that since installing hundreds of plate readers last year, the departments have shared data for investigations by seven federal agencies, including the FBI. In at least one case, the Oakland Police Department fulfilled a request related to an Immigration and Customs Enforcement investigation.

Privacy advocates and elected officials have blasted the mishandling of the data by California law enforcement agencies. But the issue received renewed scrutiny after 404 Media reported in May that ICE was accessing information from the nationwide network of cameras manufactured by Atlanta-based Flock. Numerous Southern California agencies similarly shared Flock data with the feds, CalMatters wrote last month.
The logs, which The Standard obtained via a public records request, show every time the Oakland Police Department searched its network of plate readers or granted other agencies access to its data. Each entry includes the requesting agency, reason for the request, time of submission, and the number of Flock networks included in the search.
The OPD didn’t share information directly with the federal agencies. Rather, other California police departments searched Oakland’s system on behalf of federal counterparts more than 200 times — providing reasons such as “FBI investigation” for the searches — which appears to mirror a strategy first reported by 404 Media, in which federal agencies that don’t have contracts with Flock turn to local police for backdoor access.
Oakland cops shared data with feds soon after their cameras went live in August 2024. Two of the OPD’s earliest logs are Sept. 16 searches in which the San Francisco Police Department pulled data from Oakland’s cameras on behalf of investigators at the FBI and the federal Bureau of Alcohol, Tobacco, Firearms, and Explosives.
One search by the California Highway Patrol of the OPD’s system on April 22 is listed as an “ICE case,” with no clarification. A CHP spokesperson said the agency was “actively investigating” the search after The Standard asked for comment.
“If any CHP personnel requested license plate data on behalf of ICE for purposes of immigration enforcement, that would be a blatant violation of both state law and longstanding department policy,” the spokesperson wrote. “If these allegations are confirmed, there will be consequences.”
The SFPD hasn’t responded to a request for its own logs filed by The Standard more than a month ago, but OPD records show San Francisco cops accessed Oakland’s data at least 100 times on behalf of federal agencies.
“OPD will carefully review this information to determine whether any actions are inconsistent with our policies,” an OPD spokesperson wrote. “We will also collaborate with external agencies to identify any potential issues and ensure accountability.”
“We take privacy seriously and we have robust policies in place to protect personal information and ensure the responsible, lawful use of surveillance technology,” the spokesperson wrote.
Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation, confirmed that Senate Bill 34 of 2015 prohibits California police from sharing data from automated license plate readers with out-of-state and federal agencies, regardless of what they plan to do with the data or whether they’re working on a joint task force.
“Just because Oakland has collected ALPR data for purposes of dealing with local crime doesn’t mean this is a ‘come one, come all’ buffet,” Schwartz said.
Oakland and San Francisco in March 2024 signed contracts to install hundreds of Flock’s cameras. City officials have touted the cameras’ ability to tamp down on shootings, robberies, and road rage. Indeed, logs show that the vast majority of searches made by the San Francisco and Oakland police departments are related to local crime enforcement.
“We’re proud to be a place that is safe for immigrants, people seeking abortion, and people who are seeking gender-affirming healthcare,” Schwartz said. “In order to be fully a sanctuary state, we also have to be a data sanctuary state.”
Mike Katz-Lacabe, an activist at the group Oakland Privacy, said he’d like to see more litigation or enforcement from state officials to force police to comply with the law banning the sharing of data with federal agencies.
“Lawsuits that cost money generally get agencies and jurisdictions to change how they do things,” Katz-Lacabe said. “Until they actually have accountability, I don’t think that much is going to happen.”
...
Read the original on sfstandard.com »
TL;DR: Apple’s rules and technical restrictions are blocking other browser vendors from successfully offering their own engines to users in the EU. At the recent Digital Markets Act (DMA) workshop, Apple claimed it didn’t know why no browser vendor has ported their engine to iOS over the past 15 months. But the reality is Apple knows exactly what the barriers are, and has chosen not to remove them.
Safari is the highest-margin product Apple has ever made; it accounts for 14-16% of Apple’s annual operating profit and brings in $20 billion per year in search engine revenue from Google. For each 1% of browser market share that Safari loses, Apple stands to lose $200 million in revenue per year.
Ensuring other browsers are not able to compete fairly is critical to Apple’s best and easiest revenue stream, and allows Apple to retain full control over the maximum capabilities of web apps, limiting their performance and utility to prevent them from meaningfully competing with native apps distributed through their app store. Consumers and developers (native or web) then suffer due to a lack of competition.
This browser engine ban is unique to Apple and no other gatekeeper imposes such a restriction. Until Apple lifts these barriers they are not in effective compliance with the DMA.
We had the opportunity to question Apple directly on this at the 2025 DMA workshop. Here’s how they responded:
As a quick background to new readers, we (Open Web Advocacy) are a non-profit dedicated to improving browser and web app competition on all operating systems. We receive no funding from any gatekeeper, nor any of the browser vendors. We have engaged with multiple regulators including in the EU, UK, Japan, Australia and the United States.
Our primary concern is Apple’s rule banning third-party browser engines from iOS and thus setting a ceiling on browser and web app competition.
We engaged extensively with the UK’s CMA and the EU on this topic and to our delight specific text was added to the EU’s Digital Markets Act explicitly prohibiting the banning of third-party browser engines, and stating that the purpose was to prevent gatekeepers from determining the performance, stability and functionality of third party browsers and the web apps they power.
The first batch of designated gatekeepers (Apple, Google, Meta, Amazon, ByteDance, and Microsoft) was required to be in compliance with the DMA by March 7th, 2024.
Apple’s compliance did not start well. Faced with the genuine possibility of third-party browsers effectively powering web apps, Apple’s first instinct was to remove web app support entirely from iOS with no notice to either businesses or consumers. Under significant pressure from us and the Commission, Apple canceled their plan to sabotage web apps in the EU.
Both Google and Mozilla began porting their browser engines, Blink and Gecko respectively, to iOS. Other browser vendors are dependent on these ports to bring their own engines to their browsers on iOS, as their products are typically soft forks (copies with modifications) of Blink or Gecko.
However, there were significant issues with Apple’s contractual and technical restrictions that made porting browser engines to iOS “as painful as possible” for browser vendors.
Apple’s proposals fail to give consumers viable choices by making it as painful as possible for others to provide competitive alternatives to Safari […] This is another example of Apple creating barriers to prevent true browser competition on iOS.
Many of Apple’s barriers rely on vague security and privacy grounds for which Apple has published no detailed technical justification of either their necessity or their proportionality. As the US Department of Justice wrote in their complaint:
In the end, Apple deploys privacy and security justifications as an elastic shield that can stretch or contract to serve Apple’s financial and business interests.
In June 2024, we published a paper outlining these barriers.
We recognize under the DMA that we’ve been forced to change. And we have created a program that keeps security and privacy in mind, that keeps the integrity of the operating system in mind, and allows third parties to bring their browser engine, Google, Mozilla, to the platform. And for whatever reason, they have chosen not to do so.
At the DMA workshop last week, we directly raised with Apple the primary blocker preventing third-party browser engines from shipping on iOS. Apple claimed that vendors like Google and Mozilla have “everything they need” to ship a browser engine in the EU and simply “have chosen not to do so”.
Apple has been fully aware of these barriers since at least June 2024, when we covered them in exhaustive detail. Multiple browser vendors have also discussed these same issues with Apple directly. The suggestion that Apple is unaware of the problems is not just ridiculous, it’s demonstrably false. Apple knows exactly what the issues are. It is simply refusing to address them.
The most critical barriers that continue to block third-party engines on iOS include:
Loss of existing EU users: Apple forces browser vendors to create entirely new apps to use their own engine, meaning they must abandon all current EU users and start from scratch.
Web developer testing: Apple allows native app developers outside the EU to test EU-specific functionality, but offers no equivalent for web developers to test their software using third-party browser engines on iOS. Apple stated during the conference to expect updates here but provided no details.
No updates on long trips outside EU: Apple has not confirmed they will not disable browser updates (including security patches) if an EU user travels outside the EU for more than 30 days. This, far from being a security measure, actively lowers users’ security by depriving them of security updates.
Hostile legal terms: The contractual conditions Apple imposes are harsh, one-sided, and incompatible with the DMA’s requirement that rules for API access can only be strictly necessary, proportionate security measures.
Apple has addressed two of the issues we raised in our original paper.
However, the most critical barrier remains firmly in place: Apple still forces browser vendors to abandon all their existing EU users if they want to ship a non-WebKit engine. This single requirement destroys the business case for porting an engine to iOS. Building and maintaining a full browser engine is a major undertaking. Requiring vendors to start from scratch in one region (even a region as large as the EU), with zero users, makes the investment commercially nonviable.
Instead, transaction and overhead costs for developers will be higher, rather than lower, since they must develop a version of their apps for the EU and another for the rest of the world. On top of that, if and when they exercise the possibility to, for instance, incorporate their own browser engines into their browsers (they formerly worked on Apple’s proprietary WebKit), they must submit a separate binary to Apple for its approval. What does that mean exactly? That developers must ship a new version of their app to its customers, and ‘reacquire’ them from zero.
Those are the major blockers to browser vendors porting their own engines to iOS. The list of changes that we believe Apple needs to make to be compliant with the DMA with respect to browsers and web apps on iOS is far larger, and we outline them in detail at the end of the article.
Perhaps the most important of these is the ability for browsers to install and manage web apps with their own engines. Something that has been directly recommended by both the UK’s MIR investigation and the UK’s SMS investigations.
Gatekeepers can hamper the ability of end users to access online content and services, including software applications. Therefore, rules should be established to ensure that the rights of end users to access an open internet are not compromised by the conduct of gatekeepers.
What sets the web apart is that it was never designed to confine users within closed ecosystems. It is the world’s only truly open and interoperable platform, requiring no contracts with OS gatekeepers, no revenue sharing with intermediaries, and no approval from dominant platform owners. Anyone can publish a website or build a web app without permission. There are no built-in lock-in mechanisms keeping users tied to a single company’s hardware or services. Users can switch browsers, move between devices, and across ecosystems, all without losing access to their data, tools, or digital lives.
This kind of freedom simply doesn’t exist in app store-controlled environments, where every app update, transaction, and user interaction is subject to centralized control, censorship, or a mandatory financial cut. The web’s architecture prioritizes user autonomy, developer freedom, and cross-platform compatibility.
Apple’s justification for its gatekeeping is security. Its position is that only Apple can be trusted to decide what software users are allowed to install. Every third party must submit to its review and approval process, no exceptions.
But the secure, interoperable, and capable alternative already exists, and it’s thriving. That solution is the Web, and more specifically, web apps. On open platforms like desktop, web technologies already account for over 70% of user activity, and that figure is only growing.
Web apps offer the key properties needed to solve the cross-platform problem. They run inside the browser sandbox, which even Apple admits is “orders of magnitude more stringent than the sandbox for native iOS apps”. They are fully interoperable across operating systems. They don’t require contracts with OS vendors. And they’re highly capable: if there was effective competition, around 90% of the apps on your phone could be delivered as web apps.
However, this promise only holds if browser vendors are allowed to compete, using their own engines, on every platform. Without that, Apple can unilaterally limit what the web is capable of, not just on iOS, but everywhere. If a feature can’t be used on a platform as critical as iOS, then for many developers, it may as well not exist.
That’s why enforcement of the Digital Markets Act on this issue is so vital, not just for the EU, but for the world.
The web is the world’s only truly interoperable check against operating system platform monopolies. It must be allowed to compete fairly.
A key question is whether Apple is required to fix this under the Digital Markets Act. Apple’s representatives argue that browser vendors can port their own engines to iOS in the EU and at a highly superficial and technical level this is true. However, what Apple does not acknowledge is that the conditions it imposes make doing so financially unviable in practice. Does this really count as compliance?
To answer that, we need to examine the DMA itself.
The primary relevant article in the Digital Markets Act is Article 5(7):
The gatekeeper shall not require end users to use, or business users to use, to offer, or to interoperate with, an identification service, a web browser engine or a payment service, or technical services that support the provision of payment services, such as payment systems for in-app purchases, of that gatekeeper in the context of services provided by the business users using that gatekeeper’s core platform services.
At face value, Apple appears to have complied with the letter of Article 5(7). It technically allows third-party engines, but only under technical platform constraints and contractual conditions that render porting non-viable.
The gatekeeper shall ensure and demonstrate compliance with the obligations laid down in Articles 5, 6 and 7 of this Regulation. The measures implemented by the gatekeeper to ensure compliance with those Articles shall be effective in achieving the objectives of this Regulation and of the relevant obligation.
The gatekeeper shall not engage in any behaviour that undermines effective compliance with the obligations of Articles 5, 6 and 7 regardless of whether that behaviour is of a contractual, commercial or technical nature, or of any other nature, or consists in the use of behavioural techniques or interface design.
These two articles clarify that it is not enough for Apple to simply allow alternative engines in theory. The measures must be effective in achieving the article’s objectives, and Apple must not undermine those objectives by technical or contractual means.
The intent of this provision is laid out clearly in the recitals of the DMA:
In particular, each browser is built on a web browser engine, which is responsible for key browser functionality such as speed, reliability and web compatibility. When gatekeepers operate and impose web browser engines, they are in a position to determine the functionality and standards that will apply not only to their own web browsers, but also to competing web browsers and, in turn, to web software applications. Gatekeepers should therefore not use their position to require their dependent business users to use any of the services provided together with, or in support of, core platform services by the gatekeeper itself as part of the provision of services or products by those business users.
In other words, Apple should not be in a position to dictate the features, performance, or standards of competing browsers and the web apps they power. That is, the intent is to guarantee that browser vendors have the freedom to implement their own engines, thereby removing Apple’s ability to control the performance, features, and standards of competing browsers and the web apps built on them.
Fifteen months since the DMA came into force, no browser vendor has successfully ported a competing engine to iOS. The financial, technical, and contractual barriers Apple has put in place remain insurmountable. These restrictions are not grounded in strictly necessary or proportionate security justifications.
This is not what effective compliance looks like. Article 5(7)’s goals, enabling engine-level competition and freeing web apps from Apple’s ceiling on functionality and stability, have not been met. Under Article 8(1) and Article 13(4), that makes Apple non-compliant.
Apple has a clear legal obligation to fix this. But will it act without pressure?
Any successful solution to allow browsers to use their own engines in the EU is highly likely to become global. Multiple regulators and government organizations have recommended ending Apple’s ban on third-party browser engines, including in the UK, Japan, USA and Australia. Further, multiple new laws have already been passed, including the UK’s Digital Markets, Competition and Consumers Act (DMCC) and Japan’s Smartphone Act, which directly prohibits the ban. Australia and the United States are also considering similar legislation. Finally, the U.S. Department of Justice is pursuing an antitrust case against Apple, and their complaint directly cites the issue.
With growing international momentum, and continued advocacy pushing for aligned global enforcement, Apple’s browser engine ban is facing sustained and mounting pressure. If the EU succeeds in forcing meaningful compliance under the DMA, it will set a global precedent. What regulator or government would tolerate such an obvious restriction on competition in their own market once the EU has shown it can be dismantled?
So why is Apple resisting this change so hard? They’ve already fought, and lost, a high court battle over it. Is this just a matter of being litigious? Hardly. Apple is acting rationally, if unethically. At the end of the day, it’s all about protecting revenue.
The UK regulator cites two incentives: protecting their app store revenue from competition from web apps, and protecting their Google search deal from competition from third-party browsers.
Apple receives significant revenue from Google by setting Google Search as the default search engine on Safari, and therefore benefits financially from high usage of Safari. […] The WebKit restriction may help to entrench this position by limiting the scope for other browsers on iOS to differentiate themselves from Safari […] As a result, it is less likely that users will choose other browsers over Safari, which in turn secures Apple’s revenues from Google. […] Apple generates revenue through its App Store, both by charging developers for access to the App Store and by taking a commission for payments made via Apple IAP. Apple therefore benefits from higher usage of native apps on iOS. By requiring all browsers on iOS to use the WebKit browser engine, Apple is able to exert control over the maximum functionality of all browsers on iOS and, as a consequence, hold up the development and use of web apps. This limits the competitive constraint that web apps pose on native apps, which in turn protects and benefits Apple’s App Store revenues.
A third and interesting incentive, cited by the US Department of Justice, is that this behavior greatly weakens the interoperability of Apple’s devices, making it harder for consumers to switch or multi-home. It also greatly raises the barriers to entry for new mobile operating system entrants by depriving them of a library of interoperable apps.
Apple has long understood how middleware can help promote competition and its myriad benefits, including increased innovation and output, by increasing scale and interoperability. […] In the context of smartphones, examples of middleware include internet browsers, internet or cloud-based apps, super apps, and smartwatches, among other products and services. […] Apple has limited the capabilities of third-party iOS web browsers, including by requiring that they use Apple’s browser engine, WebKit. […] Apple has sole discretion to review and approve all apps and app updates. Apple selectively exercises that discretion to its own benefit, deviating from or changing its guidelines when it suits Apple’s interests and allowing Apple executives to control app reviews and decide whether to approve individual apps or updates. Apple often enforces its App Store rules arbitrarily. And it frequently uses App Store rules and restrictions to penalize and restrict developers that take advantage of technologies that threaten to disrupt, disintermediate, compete with, or erode Apple’s monopoly power.
Interoperability via middleware would reduce lock-in for Apple’s devices. Lock-in is a clear reason for Apple to block interoperability, as can be seen in this email exchange where Apple executives dismiss the idea of bringing iMessage to Android.
The #1 most difficult [reason] to leave the Apple universe app is iMessage … iMessage amounts to serious lock-in
iMessage on Android would simply serve to remove [an] obstacle to iPhone families giving their kids Android phones … moving iMessage to Android will hurt us more than help us, this email illustrates why.
Apple has also long been concerned that the web could be a threat to its app store. In 2011, Philip Schiller internally sent an email to Eddy Cue to discuss the threat of HTML5 to the Apple App Store, titled “HTML5 poses a threat to both Flash and the App Store”.
Food for thought: Do we think our 30/70% split will last forever? While I am a staunch supporter of the 30/70% split and keeping it simple and consistent across our stores, I don’t think 30/70 will last unchanged forever. I think someday we will see a challenge from another platform or a web based solution to want to adjust our model
It is crucial that readers and regulators understand that this is not some trivial matter for Apple. Allowing both browsers and the web to compete fairly on iOS will seriously harm Apple’s margins and revenue.
Apple gets an astonishing $20 billion a year from Google to set its search engine as the default in Safari, accounting for 14-16 percent of Apple’s annual operating profits. Safari’s budget is a mere fraction of this, likely in the order of $300-400 million per year. This means that Safari is one of Apple’s most financially successful products and the highest margin product Apple has ever made. For each 1% browser market share that Apple loses for Safari, Apple is set to lose $200 million in revenue per year.
In 2024, Apple is estimated to have collected $27.4 billion from $91.3 billion in sales on its app store, underscoring its role as a critical and expanding source of profit. By contrast, the macOS App Store, where Apple does not exercise the same gatekeeping power over browsers or app distribution, remains a much smaller operation, with revenue that Apple chooses not to report.
Web apps, which already have a dominant 70% share on desktop, can replace most of the apps on your phone. Even a far more modest 20% shift towards web apps would represent a $5.5 billion annual loss in revenue (roughly 20% of the $27.4 billion in App Store commissions).
This is important because it explains why Apple will not voluntarily make these changes. No rational actor with such a tight monopolistic grip on a market (the market for browsers and the market for apps on iOS) would give that up if they could plausibly hang onto it by subtly or explicitly undermining attempts to open it up. Apple’s statements about engaging or making changes are meaningless, it is only the concrete actions that they have taken to date that must be measured.
These changes, and the competition and interoperability they bring, will literally cost Apple billions if not tens of billions per year. On the flip side, these are savings that developers and consumers are missing out on, both in terms of quality of apps and services and in direct costs. This is money that Apple is extracting out of the market via its control of iOS on high-cost, high-margin devices sold to consumers at full price.
With a market value of $3 trillion, Apple has a legal budget of over $1 billion a year, giving it legal power that outstrips that of small nations. It is also not afraid to step as close to the line of non-compliance as possible, as Apple’s former general counsel explains:
work out how to get closer to a particular risk but be prepared to manage it if it does go nuclear, … steer the ship as close as you can to that line because that’s where the competitive advantage occurs. … Apple had to pay a large fine, Tim [Cook]’s reaction was that’s the right choice, don’t let that scare you, I don’t want you to stop pushing the envelope.
This, unfortunately, means that regulation is the only answer. Even Open Web Advocacy was only formed after we had exhausted every possible avenue at trying to convince Apple to develop critical web functionality.
Many other parties have attempted to negotiate with Apple on these topics over the last 15 years, and all have come to naught; the power imbalance and the incentives for Apple not to act are simply too strong.
Some have tried to frame the DMA as a clash between the EU and the US, with the DMA unfairly targeting American tech giants, but that is not the case.
For US negotiators to carve out exemptions for American companies now would defang the DMA and stall its pro-competition benefits just as they begin to be felt. […] The victims of a DMA pause would be America’s most innovative upstarts — especially AI start-ups. The DMA’s interoperability and fairness rules were designed to pry open closed platforms and give smaller companies a fighting chance. […] Big Tech lobbyists portray the DMA as anti-American. In reality, the DMA’s goals align with American ideals of fair competition. This isn’t Europe versus America; it’s open markets versus closed ones.
The reality is this: Apple stands alone in enforcing a ban on competing browser engines and suppressing web app competition on iOS. No other gatekeeper imposes such a restriction.
In fact, the three major organizations working to port alternative browser engines to iOS (Google, Mozilla, and Microsoft) are themselves American. Smaller browser vendors, many of whom are also based in the US, are depending on these efforts. Apple’s restrictions don’t serve consumers, startups, web developers, native app creators, or even other American tech companies. They serve only Apple, which makes billions per year from undermining both browser and web app competition on iOS.
Through front groups like ACT, which Apple primarily funds, the company may attempt to reframe this issue as the EU targeting successful US firms. But that’s a distraction. This isn’t Europe versus America, it’s Apple versus the World.
At the DMA workshop last Monday, we had a chance to ask some of these questions, and to chat with Apple’s ever-charming Gary Davis (Senior Director, Apple Legal) on the sidelines. While we are strongly opposed to Apple’s ongoing anti-competitive conduct, we do deeply appreciate that Gary and Kyle were willing to come over and participate in person.
To kick off the first of OWA’s questions on browser engines, Roderick Gadellaa asked the key question: Why has no browser vendor been able to bring their own engine to iOS, even after 15 months of the DMA being in force?
The DMA has been in force now for 15 months. Despite this, not a single browser vendor has been able to port their browser using its own engine to iOS. It’s not because they’re incapable or they don’t want to, it’s because Apple’s strange policies are making this nearly impossible.
One of the key issues slowing progress is that Apple is not allowing browser vendors to update their existing browser app to use their own engine in the EU, and Apple’s WebKit engine elsewhere. This means that browser vendors have to ship a whole new app just for the EU and tell their existing EU customers to download their new app and start building the user base from scratch.
Now, we would love for Apple to allow competing browsers to ship their own engines globally. But if they insist on allowing this only in the EU, Apple can easily resolve this problem. Here’s how:
They can allow browsers to ship two separate versions of their existing browser to the App Store, one version for the EU and one for the rest of the world. Something which is currently possible in other App Stores. This would allow existing European users to get the European version of the app without having to download a separate app simply by receiving a software update. But it seems Apple doesn’t want that, and they make this very clear in their browser engine entitlement contract.
Given that, Apple can easily resolve this problem simply by allowing browsers to ship a separate version of the app to the EU under the same bundle ID. Why is Apple still insisting that browser vendors lose all their existing EU customers in order to take advantage of the rights granted under the DMA? Thank you.
The Coalition for Open Digital Ecosystems (a European advocacy group with members including Google, Meta, Qualcomm and Opera) also asked about the difficulty of porting browser engines:
Apple has made some changes to its rules governing third-party browsers and the ability to use other browser engines in the EU. However, as was already mentioned, they have various restrictions, including having two different versions of the app, limitations on testing, cumbersome contract requirements, still making it onerous to meaningfully take advantage of the browser engine interoperability. Which is why no one has really successfully launched on iOS using an alternative browser engine. What is Apple going to do to enable third parties to launch a browser on iOS via an alternative engine?
Gary Davis (Senior Director Apple Legal) and Kyle Andeer (Vice President Apple Legal) were there to answer the questions:
Let me take the browser engine first. I know this conversation is supposed to be about browser choice screens and defaults, but I know some of you, many of you with the same group, have traveled very far to have this conversation. And so I’ll take a question on that, which is, listen: as everyone knows, when we designed and released iOS and iPadOS over 15 years ago, we were hyper focused on how do we create the most secure computing platform in the world. We built it from the ground up with security and privacy in mind. The browser engine was a critical aspect of that design. WebKit was that aspect of the design. And that has worked for 18 years. We recognize under the DMA that we’ve been forced to change. And we have created a program that keeps security and privacy in mind, that keeps the integrity of the operating system in mind, and allows third parties to bring their browser engine, Google, Mozilla, to the platform. And for whatever reason, they’ve chosen not to do so. And so we remain open. We remain open to engagement. We have had conversations, constructive conversations with Mozilla, less constructive engagement from the other party, but we are working to resolve that, those differences, and bring them to iOS in a way that we feel comfortable with in terms of security, privacy, and integrity perspective.
Kyle began by incorrectly asserting that the session was focused solely on browser choice screens and defaults, despite the session being explicitly titled “Browsers”. This appeared to suggest that our question on browser engines was somehow out of scope.
He acknowledged that under the DMA, Apple is now required to allow third-party browser engines on iOS. He then reiterated Apple’s long standing talking points: that iOS was built from the ground up with security and privacy in mind, that WebKit is a core part of that design, and that any changes must preserve what Apple deems the “integrity” of the platform.
However, the fact that Safari heavily reuses iOS code and components is unlikely to be a genuine security feature and is almost certainly a cost-saving measure. By reusing code and libraries between iOS components, Apple can save significant amounts on staffing. This comes with two significant downsides: First it worsens security by locking Safari updates to iOS updates, increasing the time it takes security patches to reach users. Second, this tight coupling harms Safari itself by making it difficult for Apple to port its browser to other operating systems, ultimately weakening its competitiveness and reach. It also means that Apple can’t offer beta versions of Safari to iOS users without them installing an entire beta version of the operating system, a limitation that other browsers do not have.
According to Kyle, Apple has created a program that allows third-party engines “in a way we feel comfortable with in terms of security, privacy, and integrity” but offered no specifics. He then shifted blame onto browser vendors, stating that Mozilla and Google have simply “chosen not to” bring their engines to iOS, omitting the fact that Apple’s technical and contractual constraints make doing so unviable.
There’s a lot of OWA people here in the rooms so well done on that. I also half the questions at least were about browser engines, which is obviously an Article 5(7) as opposed to a 6(3) issue. More than happy as Kyle already did to address the question, but I think it would be a shame that a session that is about choice screens and uninstallation and the defaults become a browser engine discussion. I was pleased that Kush was nodding when Kyle was pointing out the ongoing engagements with Google and Mozilla, which are continuing right up even to last week, and I think just some more this week. There was a bottom line issue, however, which is that both Google and Mozilla have everything they need to build their engines and ship them on iOS today. We heard some other issues mentioned. We are happy to engage on those issues. We are engaging on those issues, but everything is in place to ship here in the EU today. I think that’s an extremely important point to take away from this.
Gary reiterated Kyle’s suggestion that the questions on browser engines were out of scope and that browser vendors have everything they need to ship a browser engine on iOS today.
I think one other point I wanna make sure I address as I reflected upon the end, there was a question about why we don’t do this on a global basis. And I think we’ve always approached the DMA as to the European law that relates to Europe. And we are not going to export European law to the United States, and we’re not going to export European law to other jurisdictions. Each jurisdiction should have the freedom and decision making to make its own decisions. And so we’re going to abide by that.
Kyle concluded by asserting that Apple would comply with the DMA only within the EU, stating that it would not “export a European law to the United States”. This ignores the reality that Apple has, in fact, already extended several EU-driven changes globally, including USB-C charging for iPhones, support for game emulators, NFC access for third-party payments, the new default apps page and no longer hiding the option to change default browser if Safari was the default.
While we would prefer that Apple enable browser competition globally on iOS, we recognize that the DMA does not require it to do so. We highlight these globally adopted changes simply to point out that Apple could choose to take the same pro-competitive approach here. This restriction not only undermines global interoperability, but also weakens the effectiveness of the solution for EU users themselves.
[…] it’s OK to ask questions which are other questions related to browsers. So I think that’s totally OK given the name of the session.
...
Read the original on open-web-advocacy.org »
The way I was taught x86 assembly at the university had been completely outdated for many years by the time I had my first class. It was around 2008 or 2009, and 64-bit processors had already started becoming a thing even in my neck of the woods. Meanwhile, we were doing DOS, real-mode, memory segmentation and all the other stuff from the bad old days.
Nevertheless, I picked up enough of it during the classes (and over the subsequent years) to be able to understand the stuff coming out of the other end of a compiler, and that has helped me a few times. However, I’ve never manually written any substantial amount of x86 assembly for something non-trivial. Due to being locked up inside (on account of a global pandemic), I decided to change that situation, to pass the time.
I wanted to focus on x86-64 specifically, and completely forget/skip any and all legacy crap that is no longer relevant for this architecture. After getting a bit deeper into it, I also decided to publish my notes in the form of tutorials on this blog since there seems to be a desire for this type of content.
Everything I write in these posts will be a normal, 64-bit, Windows program. We’ll be using Windows because that is the OS I’m running on all of my non-work machines, and when you drop down to the level of writing assembly it starts becoming increasingly impossible to ignore the operating system you’re running on. I will also try to go as “from scratch” as possible - no libraries, we’re only allowed to call out to the operating system and that’s it.
In this first, introductory part (yeah, I’m planning a series and I know I will regret this later), I will talk about the tools we will need, show how to use them, explain how I generally think about programming in assembly and show how to write what is perhaps the smallest viable Windows program.
There are two main tools that we will use throughout this series.
CPUs execute machine code - an efficient representation of instructions for the processor that is almost completely impenetrable to humans. The assembly language is a human-readable representation of it. A program that converts this symbolic representation into machine code ready to be executed by a CPU is called an assembler.
There is no single, agreed-upon standard for x86-64 assembly language. There are many assemblers out there, and even though some of them share a great deal of similarities, each has its own set of features and quirks. It is therefore important which assembler you choose. In this series, we will be using
Flat Assembler (or FASM for short). I like it because it’s small, easy to obtain and use, has a nice macro system and comes with a handy little editor.
Another important tool is the debugger. We’ll use it to examine the state of our programs. While I’m pretty sure it’s possible to use Visual Studio’s integrated debugger for this, I think a standalone debugger is better when all you want to do is look at the disassembly, memory and registers. I’ve always used OllyDbg for stuff like that, but unfortunately it does not have a 64-bit version. Therefore we will be using WinDbg. The version linked here is a revamp of this venerable tool with a slightly nicer interface. Alternatively, you can get the non-Windows-store version here as part of the Windows 10 SDK. Just make sure you deselect everything else besides WinDbg during installation. For our purposes, the two versions are mostly interchangeable.
Now that we have our tools, I want to spend a bit of time to discuss some basics. For the purpose of these tutorials I’m assuming some knowledge of languages like C or C++, but little or no previous exposure to assembly, therefore many readers will find this stuff familiar.
CPUs only “know” how to do a fixed number of certain things. When you hear someone talk about an “instruction set”, they’re referring to the set of things a particular CPU has been designed to do, and the term “instruction” just means “one of the things a CPU can do”. Most instructions are parameterized in one way or another, and they’re generally really simple. Usually an instruction is something along the lines of “write a given 8-bit value to a given location in memory”, or “interpreting the values from registers A and B as 16-bit signed integers, multiply them and record the result into register A”.
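To make that concrete, here is roughly what those two example instructions could look like in FASM-style syntax (a sketch only; the buffer label is assumed to be defined elsewhere):

mov byte [buffer], 42 ; write the 8-bit value 42 to the memory location labeled buffer
imul ax, bx ; treat ax and bx as 16-bit signed integers, multiply, and store the result in ax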
Below is a simple mental model of the architecture that we’ll start with.
This skips a ton of things (there can be more than one core executing instructions and reading/writing memory, there’s different levels of cache, etc. etc.), but should serve as a good starting point.
To be effective at low-level programming or debugging you need to understand that every high-level concept eventually maps to this low-level model, and learning how the mapping works will help you.
You can think of registers as a special kind of memory built right into the CPU that is very small, but extremely fast to access. There are many different kinds of registers in x86-64, and for now we’ll concern ourselves only with the so-called general-purpose registers, of which there are sixteen. Each of them is 64 bits wide, and for each of them the lower byte, word and double-word can be addressed individually (incidentally, 1 “word” = 2 bytes, 1 “double-word” = 4 bytes, in case you haven’t heard this terminology before).
Additionally, bits 8-15 of rax, rbx, rcx and rdx (the high byte of their lower word) can be referred to as ah, bh, ch and dh.
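For example, here is the same register written at each of its addressable widths (a small sketch using rax; the same pattern applies to the other general-purpose registers):

mov rax, 1122334455667788h ; write the full 64 bits
mov eax, 0AABBCCDDh ; write the lower double-word (note: this also zeroes the upper 32 bits)
mov ax, 1234h ; write only the lower word; the rest of rax is untouched
mov al, 56h ; write only the lowest byte
mov ah, 78h ; write bits 8-15, the “high byte” of the lower word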
Note that even though I said those were “general-purpose” registers, some instructions can only be used with certain registers, and some registers have special meaning for certain instructions. In particular, rsp holds the stack pointer (which is used by instructions like push, pop, call and ret), and rsi and rdi serve as source and destination index for “string manipulation” instructions. Another example where certain registers get “special treatment” are the multiplication instructions, which require one of the multiplier values to be in the register rax, and write the result into the pair of registers rax and rdx.
In addition to these registers, we will also consider the special registers rip and rflags. rip holds the address of the next instruction to execute. It is modified by control flow instructions like call or jmp. rflags holds a bunch of binary flags indicating various aspects of the program’s state, such as whether the result of the last arithmetic operation was less, equal or greater than zero. The behavior of many instructions depends on those flags, and many instructions update certain flags as part of their execution. The flags register can also be read and written “wholesale” using special instructions.
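As a short sketch of how these special registers come into play (the label name is mine): cmp writes rflags and a conditional jump reads them, while the one-operand mul shows rax and rdx receiving their special treatment:

cmp rax, rbx ; computes rax - rbx, discards the result, updates rflags
jl is_less ; jumps to is_less if rax < rbx as signed integers, based on those flags
mul rcx ; unsigned multiply: rdx:rax = rax * rcx (rax and rdx are implicit operands)
is_less: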
There are a lot more registers on x86-64. Most of them are used for SIMD or floating-point instructions, and we’ll not be considering them in this series.
You can think of memory as a large array of byte-sized “cells”, numbered starting at 0. We’ll call these numbers “memory addresses”. Simple, right?
Well… addressing memory used to be rather annoying back in the old days. You see, registers in old x86 processors used to be only 16 bits wide. Sixteen bits is enough to address 64 kilobytes worth of memory, but not more. The hardware was actually capable of using addresses as wide as 20 bits, but you had to put a “base” address into a special segment register, and instructions that read or wrote memory would use a 16-bit offset into that segment to obtain the final 20-bit “linear” address. There were separate segment registers for code, data and stack portions (and a few more “extra” ones), and segments could overlap.
In x86-64 these concerns are non-existent. The segment registers for code, data and stack are still present, and they’re loaded with some special values, but as a user-space programmer you needn’t concern yourself with them. For all intents and purposes you can assume that all segments start at 0 and extend for the entire addressable length of memory. So, as far as we’re concerned, on x86-64 our programs see memory as a “flat” contiguous array of bytes, with sequential addresses, starting at 0, just like we said in the beginning of this section…
Okay, I may have distorted the truth a little bit. Things aren’t quite as simple. While it is true that on 64-bit Windows your programs see memory as a flat contiguous array of bytes with addresses starting at 0, it is actually an elaborate illusion maintained by the OS and CPU working together.
The truth is, if you were really able to read and write any byte in memory willy-nilly, you’d stomp all over other programs’ code and data (something that indeed could happen in the Bad Old Days). To prevent that, special protection mechanisms exist. I won’t get too deep into their inner workings here because this stuff matters mostly for OS developers. Nevertheless, here’s a very short overview:
Each process gets a “flat” address space as described above (we’ll call it the “virtual address space”). For each process, the OS sets up a mapping between its virtual addresses and actual physical addresses in memory. This mapping is respected by the hardware: the “virtual” addresses get translated to physical addresses dynamically at runtime. Thus, the same address (e.g. 0x410F119C) can map to two different locations in physical memory for two different processes. This, in a nutshell, is how the separation between processes is enforced.
The final thing I want to draw your attention to here is that the instructions and the data they operate on are held in the same memory. While it may seem an obvious choice, it’s not how computers necessarily have to work. This is a property characteristic of the von Neumann model - as opposed to the Harvard model, where instructions and data are held in separate memories. A real-world example of a Harvard computer is the AVR microcontroller on your Arduino.
Hopefully by this point you have downloaded FASM and are ready to write some code. Our first program will be really simple: it will load and then immediately exit. We mostly want it just to get acquainted with the tools.
Here’s the code for our first program in x86-64 assembly:
format PE64 NX GUI 6.0
entry start
section '.text' code readable executable
start:
int3
ret
We’ll go through this line-by-line.
format PE64 NX GUI 6.0 - this is a directive telling FASM the format of the binary we expect it to produce - in our case, Portable Executable Format (which is what most Windows programs use). We’ll talk about it in a bit more detail later.
entry start - this defines the entry point into our program. The entry directive requires a label, which in this case is “start”. A label can be thought of as a name for an address within our program, so in this case we’re saying “the entry point to the program is at whatever address the ‘start’ label is”. Note that you’re allowed to refer to labels even if they’re defined later in the program code (as is the case here).
section ‘.text’ code readable executable - this directive indicates the beginning of a new section in a Portable Executable file, in this case a section containing executable code. More on this later.
start: - this is the label that denotes the entry point to our program. We referred to it earlier in the “entry” directive. Note that labels themselves don’t produce any executable machine code: they’re just a way for the programmer to mark locations within the executable’s address space.
int3 - this is a special instruction that causes the program to call the debug exception handler - when running under a debugger, this will pause the program and allow us to examine its state or proceed with the execution step-by-step. This is how breakpoints are actually implemented - the debugger replaces a single byte in the executable with the opcode corresponding to int3, and when the program hits it, the debugger takes over (obviously, the original content of the memory at breakpoint address has to be remembered and restored before proceeding with execution or single-stepping). In our case, we are hard-coding a breakpoint immediately at the entry point for convenience, so that we don’t have to set it manually via the debugger every time.
ret - this instruction pops off an address from the top of the stack, and transfers execution to that address. In our case, we’ll return into the OS code that initially invoked our entry point.
Fire up FASMW.EXE, paste the code above into the editor, save the file and press Ctrl+F9. Your first assembly program is now complete! Let’s now load it up in a debugger and single-step through it to see it actually working.
Open up WinDbg. Go to the View tab and make sure the following windows are visible: Disassembly, Registers, Stack, Memory and Command. Go to File > Launch Executable and select the executable you just built with FASM. At this point your workspace should resemble something like this:
In the disassembly window you can see the code that is currently being executed. Right now it’s not our program’s code, but some OS loader code - this stuff will load our program into memory and eventually transfer execution to our entry point. WinDbg ensures a breakpoint is triggered before any of that happens.
In the registers window, you can see the contents of x86-64 registers that we discussed earlier.
The memory window shows the raw content of the program’s memory around a given virtual address. We’ll use it later.
The stack window shows the current call stack (as you can see, it’s all inside ntdll.dll right now).
Finally, the command window allows entering text commands and shows log messages.
If you press F5 at this time, it will cause the program to continue running until it hits another breakpoint. The next breakpoint it will hit is the one we hardcoded. Try pressing F5, and you’ll see something like this:
You should be able to recognize the two instructions we wrote - int3 and ret. To advance to the next instruction, press F8. When you do that, pay attention to the registers window - you should see the rip register being updated as you advance (WinDbg highlights the registers that change in red).
Right after the ret instruction is executed, you will return to the code that invoked our program’s entry point.
As you can see from the image above, the next thing that will happen is a call to RtlExitUserThread (a pretty self-explanatory name). If you press F5 now, your program’s main thread will clean up and end, and so will the program. Or will it?…
The truth is, by using ret, I took a bit of a shortcut. On Windows a process will terminate if any of the following conditions are met:
Any thread of the process calls ExitProcess
The last thread of the process terminates
Any thread calls TerminateProcess with a handle to the process
But, we’re exiting the main thread here so we should be good, right? Well, sort of. There’s no guarantee that Windows hasn’t started any other background threads (for example, to load DLLs or something like that) within our process. It seems that at least in this example, the main thread is the only one (I’ve checked and the process doesn’t stick around), but this may change. A well-behaved Windows program should always call ExitProcess at the appropriate time.
In order to be able to call WinAPI functions, we need to learn a few things about the Portable Executable file format, how DLLs are loaded and calling conventions.
The ExitProcess function lives in KERNEL32.DLL (yes, that’s not a typo: KERNEL32 is the name of the 64-bit library. The 32-bit versions of those libraries, provided for backwards-compatibility purposes, live in a folder named SysWOW64. I’m not joking.). In order to be able to call it, we first need to import it.
We won’t cover the Portable Executable format in its entirety here. It is documented extensively on the Microsoft docs website. Here are a couple of basic facts we’ll need to know:
PE files are composed of sections. We have already seen a section containing executable code in our program, but sections may contain other types of data.
Information about what symbols are imported from what DLLs is stored in a special section called ‘.idata’.
Let’s have a look at the .idata section.
As per the docs, the .idata section begins with an import directory table (IDT). Each entry in the IDT corresponds to one DLL, is 20 bytes in length and consists of the following fields:
A 4-byte relative virtual address (RVA) of the Import Lookup Table (ILT), which contains the names of functions to import. More on that later
A 4-byte RVA of a null-terminated string containing the name of the DLL
A 4-byte RVA of the Import Address Table (IAT). The structure of the IAT is the same as ILT, the only difference is that the content of IAT is modified at runtime by the loader - it overwrites each entry with the address of the corresponding imported function. So theoretically, you can have both ILT and IAT fields point to the same exact piece of memory. Moreover, I’ve found that setting the ILT pointer to zero also works, although I am not sure if this behavior is officially supported.
The Import Directory Table is terminated by an entry whose fields are all equal to zero.
The ILT/IAT is an array of 64-bit values terminated by a null value. The bottom 31 bits of each entry contain the RVA of an entry in a hint/name table (containing the name of the imported function). During runtime, the entries of the IAT are replaced with the actual addresses of the imported functions.
The hint/name table mentioned above consists of entries, each of which needs to be aligned on an even boundary. Each entry begins with a 2-byte hint (which we’ll ignore for now), followed by a null-terminated string containing the imported function’s name, and a padding null byte (if necessary) to align the next entry on an even boundary.
With that out of the way, let’s see how we would define our executable’s .idata section in FASM:
section '.idata' import readable writeable
idt: ; import directory table starts here
; entry for KERNEL32.DLL
dd rva kernel32_iat
dd 0
dd 0
dd rva kernel32_name
dd rva kernel32_iat
; NULL entry - end of IDT
dd 5 dup(0)
name_table: ; hint/name table
_ExitProcess_Name dw 0
db "ExitProcess", 0, 0
kernel32_name: db “KERNEL32.DLL”, 0
kernel32_iat: ; import address table for KERNEL32.DLL
ExitProcess dq rva _ExitProcess_Name
dq 0 ; end of KERNEL32's IAT
The directive for a new PE section is already familiar to us. In this case, we’re communicating that the section we’re about to introduce contains the imports data and needs to be made writeable when loaded into memory (since addresses of the imported functions will be written in there).
The directives db, dw, dd and dq all cause FASM to emit a raw byte/word/double-word/quad-word value respectively. The rva operator, unsurprisingly, yields the relative virtual address of its argument. So, dd rva kernel32_iat will cause FASM to emit a 4-byte binary value equal to the RVA of kernel32_iat label.
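As a quick illustration of what these directives emit (the labels on the left are hypothetical, and note that x86-64 is little-endian, so the least significant byte comes first):
example_byte db 0x12             ; emits the single byte 12
example_word dw 0x1234           ; emits the two bytes 34 12
example_rva  dd rva kernel32_iat ; emits the 4-byte RVA of the kernel32_iat label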
Here we’ve just made use of FASM’s db/dw/etc. directives to precisely describe the contents of our .idata section.
We’re now almost ready to finally call ExitProcess. One thing we have to answer though, is - how does a function call work? Think about it. There is a call instruction, which pushes the current value of rip onto the stack, and transfers execution to the address specified by its parameter. There is also the ret instruction, which pops off an address from the stack and transfers execution there. Nowhere is it specified how arguments should be passed to a function, or how to handle the return values. The hardware simply doesn’t care about that. It is the job of the caller and the callee to establish a contract between themselves. These rules might look along the lines of:
The caller shall push the arguments onto the stack (starting from the last one)
The callee shall remove the parameters from the stack before returning.
The callee shall place return values in the register eax
A set of rules like that is referred to as the calling convention, and there are many different calling conventions in use. When you try to call a function from assembly, you must know what type of calling convention it expects.
The good news is that on 64-bit Windows there’s pretty much only one calling convention that you need to be aware of - the Microsoft x64 calling convention. The bad news is that it’s a tricky one - unlike many of the older conventions, it requires the first few parameters to be passed via registers (as opposed to being passed on the stack), which can be good for performance.
You may read the full docs if you’re interested in the details; here I will cover only the parts of the calling convention relevant to us:
The stack pointer has to be aligned to a 16-byte boundary
The first four integer or pointer arguments are passed in the registers rcx, rdx, r8 and r9; the first four floating point arguments are passed in registers xmm0 to xmm3. Any additional args are passed on the stack.
Even though the first 4 arguments aren’t passed on the stack, the caller is still required to allocate 32 bytes of space for them on the stack (the so-called “shadow space”). This has to be done even if the function has fewer than 4 arguments.
The caller is responsible for cleaning up the stack.
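Putting these rules together with our import table, here is a minimal sketch of what a well-behaved exit might look like (the sub rsp, 40 both re-aligns the stack to a 16-byte boundary after the call that entered our code and reserves the 32 bytes of shadow space):
start:
sub rsp, 40        ; align the stack and reserve 32 bytes of shadow space
xor ecx, ecx       ; exit code 0 goes in rcx, the first integer argument
call [ExitProcess] ; indirect call through the IAT entry we defined earlier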
...
Read the original on gpfault.net »
Your writing assistant that never leaves your Mac. Powered by local AI models with zero data collection.
Your writing stays on your Mac. Your documents never leave your Mac. No servers, no tracking, no privacy concerns. Works offline, on flights, in coffee shops, anywhere you write.
Seamlessly integrates with all your favorite Mac applications, and many more across macOS. No setup required: just start writing and get grammar suggestions.
Own it forever. No subscriptions, no hidden fees. Pay once, own it forever. Try it before you buy: download the free trial version to see if it fits your needs.
Can’t find the answer you’re looking for? Feel free to send us an email at support@refine.sh.
Is my data truly private? Yes, absolutely. Your documents, text, and writing never leave your Mac. We don’t collect, store, or transmit any of your personal content. All processing happens locally using offline large language models (LLMs) that run directly on your machine.
What apps does it work with? Works with most macOS apps including Mail, Messages, Safari, Chrome, Pages, Word, Slack, Notion, and many more.
What are the system requirements? Requires macOS 14.0 or later. Works with both Apple Silicon (M1, M2, etc.) and Intel-based Macs.
Is there a free trial? We offer a 7-day free trial with full access to all features. No credit card required. Just download the app and start using it.
Is there an educational discount? Yes! We offer a 50% discount for students and educators. Just write us an email with your current student/teacher/institution email.
Ready to protect your privacy while improving your writing?
...
Read the original on refine.sh »
On July 13th 2005, Jacob Kaplan-Moss made the first commit to the public repository that would become Django. Twenty years and 400+ releases later, here we are – Happy 20th birthday Django! 🎉
We want to share this special occasion with you all! Our new 20-years of Django website showcases all online and local events happening around the world through all of 2025, as well as other opportunities to celebrate!
* A special quiz or two? See who knows all about Django trivia
As a birthday gift of sorts, consider whether you or your employer can support the project via donations to our non-profit Django Software Foundation. For this special event, we want to set a special goal!
Over the next 20 days, we want to see 200 new donors supporting Django with $20 or more, with at least 20 monthly donors. Help us make this happen:
Once you’ve done it, post with #DjangoBirthday and tag us on Mastodon / on Bluesky / on X / on LinkedIn so we can say thank you!
20 years is a long time in open source — and we want to keep Django thriving for many more, so it keeps on being the web framework for perfectionists with deadlines as the industry evolves. We don’t know how the web will change in that time, but from Django, you can expect:
* Many new releases, each with years of support
* Thousands more packages in our thriving ecosystem
* An inclusive and supportive community with hundreds of thousands of developers
...
Read the original on www.djangoproject.com »
For many years, data brokers have existed in the shadows, exploiting gaps in privacy laws to harvest our information—all for their own profit. They sell our precise movements without our knowledge or meaningful consent to a variety of private and state actors, including law enforcement agencies. And they show no sign of stopping.
This incentivizes other bad actors. If companies collect any kind of personal data and want to make a quick buck, there’s a data broker willing to buy it and sell it to the highest bidder–often law enforcement and intelligence agencies.
One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers’ domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers’ names, full flight itineraries, and financial details, the data broker prevented U.S. border forces from revealing it as the origin of the information. So, not only is the government doing an end run around the Fourth Amendment to get information for which it would otherwise need a warrant; it has also been trying to hide how it knows these things about us.
ARC’s Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers.
More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC’s board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada.
In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers’ privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives.
Movement unrestricted by governments is a hallmark of a free society. In our current moment, when the federal government is threatening legal consequences based on people’s national, religious, and political affiliations, having air travel in and out of the United States tracked by any ARC customer is a recipe for state retribution.
Sadly, data brokers are doing even broader harm to our privacy. Sensitive location data is harvested from smartphones and sold to cops, internet backbone data is sold to federal counterintelligence agencies, and utility databases containing phone, water, and electricity records are shared with ICE officers.
At a time when immigration authorities are eroding fundamental freedoms through increased—and arbitrary—actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data—all without our knowledge or consent.
The new revelations about ARC’s data sales to CBP and ICE are a fresh reminder of the need for “privacy first” legislation that imposes consent and minimization limits on corporate processing of our data. We also need to pass the “Fourth Amendment Is Not For Sale” Act to stop police from bypassing judicial review of their data seizures by purchasing data from brokers. And let’s enforce data broker registration laws.
...
Read the original on www.eff.org »
Software is built under time and quality constraints. We want to write good code and have it done quickly.
If you go too fast, your work is buggy and hard to maintain. If you go too slowly, nothing gets shipped. I have not mastered this tension, but I’ll share a few lessons I’ve learned.
This post focuses on being a developer on a small team, maintaining software over multiple years. It doesn’t focus on creating quick prototypes. And this is only based on my own experience!
Early in my career, I wanted all my code to be perfect: every function well-tested, every identifier elegantly named, every abstraction easily understood. And absolutely no bugs!
But I learned a lesson that now seems obvious in hindsight: there isn’t one “right way” to build software.
For example, if you’re making a game for a 24-hour game jam, you probably don’t want to prioritize clean code. That would be a waste of time! Who really cares if your code is elegant and bug-free?
On the other hand, if you’re building a pacemaker device, a mistake could really hurt someone. Your work should be much better! I wouldn’t want to risk my life with someone’s spaghetti code!
Most of my work has been somewhere in the middle. Some employers have aggressive deadlines where some bugs are acceptable, while other projects demand a higher quality bar with more relaxed schedules. Sussing this out has helped me determine where to invest my time. What is my team’s idea of “good enough”? What bugs are acceptable, if any? Where can I do a less-than-perfect job if it means getting things done sooner?
In general, my personal rule of thumb is to aim for an 8 out of 10 score, delivered on time. The code is good and does its job. It has minor issues but nothing major. And it’s done on time! (To be clear, I aim for this. I don’t always hit it!) But again, it depends on the project—sometimes I want a perfect score even if it’s delayed, and other times I write buggy code that’s finished hastily.
Software, like writing, can benefit from a rough draft. This is sometimes called a “spike” or a “walking skeleton”.
I like implementing a rough draft as quickly as I can. Later, I shape it into the final solution.
My rough draft code is embarrassing. Here are some qualities of my typical spikes:
* Error cases are not handled. (I recently had a branch where an error message was logged 20 times per second.)
* Commit messages are just three letters: “WIP”, short for “work in progress”.
* 3 packages were added and none of them are used anymore.
This sounds pretty bad, but it has one important quality: it vaguely resembles a good solution.
As you might imagine, I fix these mistakes before the final patch! (Some teams might pressure me to ship this messy code, which I try to resist. I don’t want the rough draft to be treated like a final draft!)
This “rough draft” approach has a few advantages:
* It can reveal “unknown unknowns”. Often, prototypes uncover things I couldn’t have anticipated. It’s generally good to discover those ASAP, not after I’ve perfected some code that ultimately gets discarded.
* Lots of these problems disappear over the course of the rough draft and I never have to fix them. For example, I write a function that’s too slow but works well enough for a prototype. Later, I realize I didn’t need that function at all. Good thing I didn’t waste time speeding it up! (I can’t tell you how many functions I’ve fully unit tested and then deleted. What a waste of time!)
* It helps me focus. I’m not fixing a problem in another part of the codebase or worrying about the perfect function name. I’m speedrunning this rough draft to understand the problem better.
* It helps me avoid premature abstractions. If I’m rushing to get something ugly working, I’m less likely to try to build some byzantine abstraction. I build what I need for the specific problem, not what I think I might need for future problems that may never come.
* It becomes easier to communicate progress to others in two ways: first, I can usually give a more accurate estimate of when I’ll be done because I know approximately what’s left. Second, I can demo something, which helps stakeholders understand what I’m building and provide better feedback. This feedback might change the direction of the work, which is better to know sooner.
Here are some concrete things I do when building rough drafts:
* Focus on binding decisions. Some choices, like the selection of programming language or database schema design, can be hard to change later. A rough draft is a good time for me to explore these, and make sure I’m not boxing myself into a choice that I’ll regret in a year.
* Keep track of hacks. Every time I cut a corner, I add a TODO comment or equivalent. Later, when it’s time for polish, I run git grep TODO to see everything that needs attention.
* Build “top to bottom”. For example, in an application, I prefer to scaffold the UI before the business logic, even if lots of stuff is hard-coded. I’ve sometimes written business logic first, which I later discarded once the UI came into play, because I miscalculated how it would be used. Build the top layer first—the “dream code” I want to write or the API I wish existed—rather than trying to build the “bottom” layer first. It’s easier to make the right API decisions when I start with how it will be used. It can also be easier to gather feedback on.
* Extract smaller changes while working. Sometimes, during a rough draft, I realize that some improvement needs to be made elsewhere in the code. Maybe there’s a dependency that needs updating. Before finishing the final draft, make a separate patch to just update that dependency. This is useful on its own and will benefit the upcoming change. I can push it for code review separately, and hopefully, it’ll be merged by the time I finish my final draft.
See also: “Throw away your first draft of your code” and “Best Simple System for Now”. “YAGNI” is also somewhat related to this topic.
Generally, doing less is faster and easier! Depending on the task, you may be able to soften the requirements.
Some example questions to ask:
* Could I combine multiple screens into one?
* Is it okay if we don’t handle a particularly tricky edge case?
* Instead of an API supporting 1000 inputs, what if it just supported 10?
* Is it okay to build a prototype instead of a full version?
* What if we didn’t do this at all?
More generally, I sometimes try to nudge the culture of the organization towards a slower pace. This is a big topic, and I’m no expert on organizational change! But I’ve found that making big demands rarely works; I’ve had better luck with small, gradual suggestions that slowly shift discussions. I don’t know much about unionizing, but I wonder if it could help here too.
The modern world is full of distractions: notifications from your phone, messages from colleagues, and dreaded meetings. I don’t have smart answers for handling these.
But there’s another kind of distraction: I start wandering through the code. I begin working on one thing, and two hours later, I’m changing something completely unrelated. Maybe I’m theoretically being productive and improving the codebase, but that bug I was assigned isn’t getting fixed! I’m “lost in the sauce”!
I’ve found two concrete ways to manage this:
* Set a timer. When I start working on a discrete task, I often set a timer. Maybe I think this function is going to take me 15 minutes to write. Maybe I think it’ll take me 1 hour to understand the source of this bug. My estimates are frequently wrong, but when the timer goes off, I’m often jolted out of some silly distraction. And there’s nothing as satisfying as running git commit right as my timer goes off—a perfect estimation. (This also helps me practice the impossible art of time estimation, though I’m still not great at it.)
* Pair programming helps keep me focused. Another soul is less likely to let me waste their time with some rabbit hole.
Some programmers naturally avoid this kind of distraction, but not me! Discipline and deliberate action help me focus.
The worst boss I ever had encouraged us to make large patches. These changes were wide in scope, usually touching multiple parts of the code at once. From my experience, this was terrible advice.
Small, focused diffs have almost always served me better. They have several advantages:
* They are usually easier to write, because there’s less to keep in your head.
* They are usually easier to review. This lightens teammates’ cognitive load, makes my mistakes easier to spot, and usually means my code is merged sooner.
* They are usually easier to revert if something goes wrong.
* They reduce the risk of introducing new bugs since you’re changing less at once.
I also like to make smaller changes that build up to a larger one. For example, if I’m adding a screen that requires fixing a bug and upgrading a dependency, that could be three separate patches: one to fix the bug, one to upgrade the dependency, and one to add the screen.
Small changes usually help me build software more quickly and with higher quality.
Most of the above is fairly high-level. Several more specific skills have come in handy, especially when trying to build software quickly:
* Reading code is, by far, the most important skill I’ve acquired as a programmer. I’ve had to work on this a lot! It helps in so many ways: debugging is easier because I can see how some function works, bugs and poor documentation in third-party dependencies are less scary, it’s a huge source of learning, and so much more.
* Data modeling is usually important to get right, even if it takes a little longer. Making invalid states unrepresentable can prevent whole classes of bugs. Getting a database schema wrong can cause all sorts of headaches later. I think it’s worth spending time to design your data models carefully, especially when they’re persisted or exchanged.
* Scripting. Being able to comfortably write quick Bash or Python scripts has sped me up. I write a few scripts a week for various tasks, such as sorting Markdown lists, cleaning up some data, or finding duplicate files. I highly recommend Shellcheck for Bash as it catches many common mistakes. LLMs tend to be good at these scripts, especially if they don’t need to be robust.
* Debuggers have saved me lots of time. There’s no substitute for a proper debugger. It makes it much easier to understand what’s going on (whether there’s a bug or not!), and quickly becomes faster than print()-based debugging.
* Knowing when to take a break. If I’m stuck on a problem without making progress, I should probably take a break. This has happened to me many times: I struggle with a problem for hours, step away for a few minutes, come back, and solve it in 5 minutes.
* Prefer pure functions and immutable data. The functional programming style eliminates many bugs and reduces mental overhead. It’s often easier than designing complex class hierarchies. Not always practical, but it’s my default choice.
* LLMs, despite their issues, can accelerate some parts of the development process. It’s taken me a while to understand their strengths and weaknesses, but I use them in my day-to-day programming. Lots of ink has been spilled on the topic of LLM-assisted software development and I don’t have much to add. I like the “vibecoding” tag on Lobsters, but there are lots of other places to read.
All of these are skills I’ve practiced a bunch, and I feel the investment has made me a faster developer.
* Know how good your code needs to be for the task at hand.
* Try to soften requirements if you can.
Everything in this list seems obvious in hindsight, but these are lessons that took me a long time to learn.
I’m curious to what you’ve discovered on this topic. Are there more tricks to know, or practices of mine you disagree with? Contact me any time. I’d love to learn from you!
Thanks to the anonymous reviewers who provided feedback on drafts of this post, and to tcard on Lobsters for a comment I incorporated.
If this post left you thinking, “I want to work with this person! They build software quickly!”…well, you can. I’m looking for work, ideally at a non-profit, co-op, or social good venture. See my list of projects or LinkedIn. If you like what you see, contact me.
...
Read the original on evanhahn.com »
AI slows down open source developers. Peter Naur can teach us why.
Metr recently published a paper about the impact AI tools have on open-source developer productivity1. They show that when open source developers working in codebases that they are deeply familiar with use AI tools to complete a task, then they take longer to complete that task compared to other tasks where they are barred from using AI tools. Interestingly the developers predict that AI will make them faster, and continue to believe that it did make them faster, even after completing the task slower than they otherwise would!
When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
We can’t generalise these results to all software developers. The developers in this study are a very particular sort of developer, working on very particular projects. They are experienced open source developers, working on their own projects. This study tells us that the current suite of AI tools appears to slow such developers down - but it doesn’t mean that we can assume the same applies to other developers. For example, we might expect that corporate drones working on next.js apps that were mostly built by other people who’ve long since left the company (me) would see huge productivity improvements!
One thing we can also do, is theorise about why these particular open source developers were slowed down by tools that promise to speed them up.
I’m going to focus in particular on why they were slowed down, not on the gap between perceived and real performance. The inability of developers to tell if a tool sped them up or slowed them down is fascinating in itself, probably applies to many other forms of human endeavour, and explains things as varied as why so many people think that AI has made them 10 times more productive, why I continue to use Vim, why people drive in London, etc. I just don’t have any particular thoughts about why this gap arises. I do, however, have an opinion about why they were slowed down.
A while ago I wrote, somewhat tangentially, about an old paper by Peter Naur called programming as theory building. That paper states
programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand
That is to say that the real product when we write software is our mental model of the program we’ve created. This model is what allowed us to build the software, and in future is what allows us to understand the system, diagnose problems within it, and work on it effectively. If you agree with this theory, which I do, then it explains things like why everyone hates legacy code, why small teams can outperform larger ones, why outsourcing generally goes badly, etc.
We know that the programmers in Metr’s study are all people with extremely well developed mental models of the projects they work on. And we also know that the LLMs they used had no real access to those mental models. The developers could provide chunks of that mental model to their AI tools - but doing so is a slow and lossy process that will never truly capture the theory of the program that exists in their minds. By offloading their software development work to an LLM they hampered their unique ability to work on their codebases effectively.
Think of a time that you’ve tried to delegate a simple task to someone else, say putting a baby to bed. You can write down what you think are unambiguous instructions - “give the baby milk, put it to bed, if it cries do not respond” but you will find that nine times out of ten, when you get home the person following those instructions will do the exact opposite of what you intended. Maybe they’ll have gotten the crying baby out of bed and taken it on a walk to see some frogs.
The mental models with which we understand the world are incredibly rich, to the extent that even the simplest of them take an incredible amount of effort to transfer to another person. What’s more that transfer can never be totally successful, and it’s very hard to determine how successful the transfer has been, until we run into problems caused by a lack of shared understanding. These problems are what allow us to notice a mismatch, and mutually adapt our mental models to perform better in future. When you are limited to transferring a mental model through text, to an entity that will never challenge or ask clarifying questions, which can’t really learn, and which cannot treat one statement as more important than any other - well the task becomes essentially impossible.
This is why AI coding tools, as they exist today, will generally slow someone down if they know what they are doing, and are working on a project that they understand.
Well, maybe not. In the previous paragraph I wrote that AI tools will slow down someone who “knows what they are doing, and who is working on a project they understand” - does this describe the average software developer in industry? I doubt it. Does it describe software developers in your workplace?
It’s common for engineers to end up working on projects which they don’t have an accurate mental model of. Projects built by people who have long since left the company for pastures new. It’s equally common for developers to work in environments where little value is placed on understanding systems, but a lot of value is placed on quickly delivering changes that mostly work. In this context, I think that AI tools have more of an advantage. They can ingest the unfamiliar codebase faster than any human can, and can often generate changes that will essentially work.
So if we take this narrow and short-term view of productivity and say that it is simply time to produce business value - then yes, I think that an LLM can make developers more productive. I can’t prove it - not having any data - but I’d love it if someone did do this study. If there are no takers then I might try experimenting on myself.
But there is a problem with using AI tools in this context.
Okay, so if you don’t have a mental model of a program, then maybe an LLM could improve your productivity. However, we agreed earlier that the main purpose of writing software is to build a mental model. If we outsource our work to the LLM are we still able to effectively build the mental model? I doubt it2.
So should you avoid using these tools? Maybe. If you expect to work on a project long term, want to truly understand it, and wish to be empowered to make changes effectively then I think you should just write some code yourself3. If on the other hand you are just slopping out slop at the slop factory, then install cursor4 and crack on - yolo.
1 It’s a really fabulous study, and I strongly suggest reading at least the summary.
2 One of the commonly suggested uses of Claude Code et al. is that you can use them to quickly onboard into new projects by asking questions about that project. Does that help us build a mental model? Maybe yes! Does generating code 10 times faster than a normal developer lead to a strong mental model of the system that is being created? Almost certainly not.
3 None of this is to say that there couldn’t be AI tools which meaningfully speed up developers with a mental model of their projects, or which help them build those mental models. But the current suite of tools that exist don’t seem to be heading in that direction. It’s possible that if models improve then we might get to a point that there’s no need for any human to ever hold a mental model of a software artifact. But we’re certainly not there yet.
4 Don’t install cursor, it sucks. Use Claude Code like an adult.
...
Read the original on johnwhiles.com »