10 interesting stories served every morning and every evening.




1 841 shares, 39 trendiness

no days off

I didn't start running until I was in my late twenties, and even so I would end up in a pattern where I'd get motivated and go on a couple of runs, take a few days off, go on another run the following week, and next thing you know it's been a month since I last ran. Rinse and repeat.

In July 2015, some­thing changed. I headed out on a run on a Tuesday, then did an­other one the next day, and the day af­ter, and… I took the Friday off. When I woke up on July 11, 2015 I re­mem­ber think­ing I could have done 4 days in a row, so I set out to try and do that. 4 days turned into a week, then a month, then two, then six, then a year, and here I am, ten years later.

I’ve had the priv­i­lege to run in some amaz­ing places, from the streets of my home­town to the trails of na­tional parks, on all seven con­ti­nents. I’ve run solo and I’ve run with friends, I’ve run with mu­sic and I’ve run with my own thoughts. I’ve run through stress frac­tures, heart pro­ce­dures, flus and other phys­i­cal ail­ments. I’ve run in frigid sub zero weather and in swel­ter­ing heat. Each run has been a new ad­ven­ture, and I’ve learned some­thing dif­fer­ent from every ex­pe­ri­ence.

Running has changed my life, and I hope I'll still keep this going in another decade. I've been extremely lucky to have had the support of my wonderful wife Molly throughout this journey, and I couldn't have done it without her patience. How many times has she heard me say "I'll be back in a few!" in the mornings?

...

Read the original on nodaysoff.run »

2 552 shares, 81 trendiness

Introducing Kiro

A new agentic IDE that works alongside you from prototype to production.

I'm sure you've been there: prompt, prompt, prompt, and you have a working application. It's fun and feels like magic. But getting it to production requires more. What assumptions did the model make when building it? You guided the agent throughout, but those decisions aren't documented. Requirements are fuzzy and you can't tell if the application meets them. You can't quickly understand how the system is designed and how that design will affect your environment and performance. Sometimes it's better to take a step back, think through decisions, and you'll end up with a better application that you can easily maintain. That's what Kiro helps you do with spec-driven development.

I'm excited to announce Kiro, an AI IDE that helps you deliver from concept to production through a simplified developer experience for working with AI agents. Kiro is great at "vibe coding" but goes way beyond that—Kiro's strength is getting those prototypes into production systems with features such as specs and hooks.

Kiro specs are artifacts that prove useful anytime you need to think through a feature in depth, refactor work that needs upfront planning, or understand the behavior of systems—in short, most things you need to get to production. Requirements are usually uncertain when you start building, which is why developers use specs for planning and clarity. Specs can guide AI agents to a better implementation in the same way.

Kiro hooks act like an experienced developer catching things you miss or completing boilerplate tasks in the background as you work. These event-driven automations trigger an agent to execute a task in the background when you save, create, or delete files, or on a manual trigger.

Kiro accelerates the spec workflow by making it more integrated with development. In our example, we have an e-commerce application for selling crafts, to which we want to add a review system for users to leave feedback on crafts. Let's walk through the three-step process of building with specs.

The ecommerce app that we are working with

Kiro unpacks requirements from a single prompt—type "Add a review system for products" and it generates user stories for viewing, creating, filtering, and rating reviews. Each user story includes acceptance criteria in EARS (Easy Approach to Requirements Syntax) notation covering edge cases developers typically handle when building from basic user stories. This makes your prompt assumptions explicit, so you know Kiro is building what you want.

Kiro then generates a design document by analyzing your codebase and approved spec requirements. It creates data flow diagrams, TypeScript interfaces, database schemas, and API endpoints—like the Review interfaces for our review system. This eliminates the lengthy back-and-forth on requirements clarity that typically slows development.

Kiro generates tasks and sub-tasks, sequences them correctly based on dependencies, and links each to requirements. Each task includes details such as unit tests, integration tests, loading states, mobile responsiveness, and accessibility requirements for implementation. This lets you check work in steps rather than discovering missing pieces after you think you're done. Kiro simplifies this entire process by autogenerating the tasks and sub-tasks, sequencing them in the right order, and linking each task back to requirements so nothing falls through the cracks. As you can see below, Kiro has thought of writing unit tests for each task, added loading states, integration tests for the interaction between products and reviews, and responsive design and accessibility.

The task interface lets you trigger tasks one by one, with a progress indicator showing execution status. Once complete, you can see the completion status inline and audit the work by viewing code diffs and agent execution history.

Kiro's specs stay synced with your evolving codebase. Developers can author code and ask Kiro to update specs, or manually update specs to refresh tasks. This solves the common problem where developers stop updating original artifacts during implementation, causing documentation mismatches that complicate future maintenance.

Catch issues before they ship with hooks

Before submitting code, most developers run through a mental checklist: Did I break anything? Are tests updated? Is documentation current? This caution is healthy but can take a lot of manual work to implement.

Kiro's agent hooks act like an experienced developer catching things you miss. Hooks are event-driven automations that execute when you save or create files—it's like delegating tasks to a collaborator. Set up a hook once, and Kiro handles the rest. Some examples: when you save a React component, hooks update the test file; when you're ready to commit, security hooks scan for leaked credentials.

Hooks enforce consistency across your entire team. Everyone benefits from the same quality checks, code standards, and security validation. For our review feature, I want to ensure any new React component follows the Single Responsibility Principle so developers don't create components that do too many things. Kiro takes my prompt, generates an optimized system prompt, and selects the repository folders to monitor. Once this hook is committed to Git, it enforces the coding standard across my entire team—whenever anyone adds a new component, the agent automatically validates it against the guidelines.

Beyond specs and hooks, Kiro includes all the features you'd expect from an AI code editor: Model Context Protocol (MCP) support for connecting specialized tools, steering rules to guide AI behavior across your project, and agentic chat for ad-hoc coding tasks with file, URL, and docs context providers. Kiro is built on Code OSS, so you can keep your VS Code settings and Open VSX-compatible plugins while working with our IDE. You get the full AI coding experience, plus the fundamentals needed for production.

Our vision is to solve the fundamental challenges that make building software products so difficult—from ensuring design alignment across teams and resolving conflicting requirements, to eliminating tech debt, bringing rigor to code reviews, and preserving institutional knowledge when senior engineers leave. The way humans and machines coordinate to build software is still messy and fragmented, but we're working to change that. Specs is a major step in that direction.

Ready to experience spec-driven development? Kiro is free during preview, with some limits. We're excited to see you try it out to build real apps and would love to hear from you on our Discord server. To get started, download Kiro and sign in with one of our four login methods, including Google and GitHub. We support Mac, Windows, and Linux, and most popular programming languages. Our hands-on tutorial walks you through building a complete feature from spec to deployment. Start the tutorial. Let's connect - tag @kirodotdev on X, LinkedIn, or Instagram, and @kiro.dev on Bluesky, and share what you've built using the hashtag #builtwithkiro.

...

Read the original on kiro.dev »

3 497 shares, 75 trendiness

SF, Oakland cops illegally funneled license plate data to feds

San Francisco and Oakland police appear to have repeatedly broken state law by sharing data from automated license plate cameras with federal law enforcement, according to records obtained by The Standard.

The cameras, made by Flock Safety, capture the license plate of every vehicle that passes them, then store the information in a database for use in police investigations.

Under a decade-old state law, California police are prohibited from sharing data from automated license plate readers with out-of-state and federal agencies. Attorney General Rob Bonta affirmed that fact in a 2023 notice to police.

The logs show that since installing hundreds of plate readers last year, the departments have shared data for investigations by seven federal agencies, including the FBI. In at least one case, the Oakland Police Department fulfilled a request related to an Immigration and Customs Enforcement investigation.

Privacy advocates and elected officials have blasted the mishandling of the data by California law enforcement agencies. But the issue received renewed scrutiny after 404 Media reported in May that ICE was accessing information from the nationwide network of cameras manufactured by Atlanta-based Flock. Numerous Southern California agencies similarly shared Flock data with the feds, CalMatters wrote last month.

The logs, which The Standard ob­tained via a pub­lic records re­quest, show every time the Oakland Police Department searched its net­work of plate read­ers or granted other agen­cies ac­cess to its data. Each en­try in­cludes the re­quest­ing agency, rea­son for the re­quest, time of sub­mis­sion, and the num­ber of Flock net­works in­cluded in the search.

The OPD didn't share information directly with the federal agencies. Rather, other California police departments searched Oakland's system on behalf of federal counterparts more than 200 times — providing reasons such as "FBI investigation" for the searches — which appears to mirror a strategy first reported by 404 Media, in which federal agencies that don't have contracts with Flock turn to local police for backdoor access.

Oakland cops shared data with feds soon after their cameras went live in August 2024. Two of the OPD's earliest logs are Sept. 16 searches in which the San Francisco Police Department pulled data from Oakland's cameras on behalf of investigators at the FBI and the federal Bureau of Alcohol, Tobacco, Firearms, and Explosives.

One search by the California Highway Patrol of the OPD's system on April 22 is listed as an "ICE case," with no clarification. A CHP spokesperson said the agency was "actively investigating" the search after The Standard asked for comment.

"If any CHP personnel requested license plate data on behalf of ICE for purposes of immigration enforcement, that would be a blatant violation of both state law and longstanding department policy," the spokesperson wrote. "If these allegations are confirmed, there will be consequences."

The SFPD has­n’t re­sponded to a re­quest for its own logs filed by The Standard more than a month ago, but OPD records show San Francisco cops ac­cessed Oakland’s data at least 100 times on be­half of fed­eral agen­cies.

"OPD will carefully review this information to determine whether any actions are inconsistent with our policies," an OPD spokesperson wrote. "We will also collaborate with external agencies to identify any potential issues and ensure accountability."

"We take privacy seriously and we have robust policies in place to protect personal information and ensure the responsible, lawful use of surveillance technology," the spokesperson wrote.

Adam Schwartz, pri­vacy lit­i­ga­tion di­rec­tor at the Electronic Frontier Foundation, con­firmed that Senate Bill 34 of 2015 pro­hibits California po­lice from shar­ing data from au­to­mated li­cense plate read­ers with out-of-state and fed­eral agen­cies, re­gard­less of what they plan to do with the data or whether they’re work­ing on a joint task force.

"Just because Oakland has collected ALPR data for purposes of dealing with local crime doesn't mean this is a 'come one, come all' buffet," Schwartz said.

Oakland and San Francisco in March 2024 signed con­tracts to in­stall hun­dreds of Flock’s cam­eras. City of­fi­cials have touted the cam­eras’ abil­ity to tamp down on shoot­ings, rob­beries, and road rage. Indeed, logs show that the vast ma­jor­ity of searches made by the San Francisco and Oakland po­lice de­part­ments are re­lated to lo­cal crime en­force­ment.

"We're proud to be a place that is safe for immigrants, people seeking abortion, and people who are seeking gender-affirming healthcare," Schwartz said. "In order to be fully a sanctuary state, we also have to be a data sanctuary state."

Mike Katz-Lacabe, an ac­tivist at the group Oakland Privacy, said he’d like to see more lit­i­ga­tion or en­force­ment from state of­fi­cials to force po­lice to com­ply with the law ban­ning the shar­ing of data with fed­eral agen­cies.

"Lawsuits that cost money generally get agencies and jurisdictions to change how they do things," Katz-Lacabe said. "Until they actually have accountability, I don't think that much is going to happen."

...

Read the original on sfstandard.com »

4 457 shares, 32 trendiness

Apple’s Browser Engine Ban Persists, Even Under the DMA

TL;DR: Apple’s rules and tech­ni­cal re­stric­tions are block­ing other browser ven­dors from suc­cess­fully of­fer­ing their own en­gines to users in the EU. At the re­cent Digital Markets Act (DMA) work­shop, Apple claimed it did­n’t know why no browser ven­dor has ported their en­gine to iOS over the past 15 months. But the re­al­ity is Apple knows ex­actly what the bar­ri­ers are, and has cho­sen not to re­move them.

Safari is the highest-margin product Apple has ever made; it accounts for 14-16% of Apple's annual operating profit and brings in $20 billion per year in search engine revenue from Google. For each 1% of browser market share that Apple loses for Safari, Apple is set to lose $200 million in revenue per year.

Ensuring other browsers are not able to com­pete fairly is crit­i­cal to Apple’s best and eas­i­est rev­enue stream, and al­lows Apple to re­tain full con­trol over the max­i­mum ca­pa­bil­i­ties of web apps, lim­it­ing their per­for­mance and util­ity to pre­vent them from mean­ing­fully com­pet­ing with na­tive apps dis­trib­uted through their app store. Consumers and de­vel­op­ers (native or web) then suf­fer due to a lack of com­pe­ti­tion.

This browser en­gine ban is unique to Apple and no other gate­keeper im­poses such a re­stric­tion. Until Apple lifts these bar­ri­ers they are not in ef­fec­tive com­pli­ance with the DMA.

We had the op­por­tu­nity to ques­tion Apple di­rectly on this at the 2025 DMA work­shop. Here’s how they re­sponded:

As a quick back­ground to new read­ers, we (Open Web Advocacy) are a non-profit ded­i­cated to im­prov­ing browser and web app com­pe­ti­tion on all op­er­at­ing sys­tems. We re­ceive no fund­ing from any gate­keeper, nor any of the browser ven­dors. We have en­gaged with mul­ti­ple reg­u­la­tors in­clud­ing in the EU, UK, Japan, Australia and the United States.

Our pri­mary con­cern is Apple’s rule ban­ning third-party browser en­gines from iOS and thus set­ting a ceil­ing on browser and web app com­pe­ti­tion.

We engaged extensively with the UK's CMA and the EU on this topic, and to our delight specific text was added to the EU's Digital Markets Act explicitly prohibiting the banning of third-party browser engines, and stating that the purpose was to prevent gatekeepers from determining the performance, stability and functionality of third-party browsers and the web apps they power.

The first batch of designated gatekeepers (Apple, Google, Meta, Amazon, ByteDance, and Microsoft) were required to be in compliance with the DMA by March 7th, 2024.

Apple’s com­pli­ance did not start well. Faced with the gen­uine pos­si­bil­ity of third-party browsers ef­fec­tively pow­er­ing web apps, Apple’s first in­stinct was to re­move web app sup­port en­tirely from iOS with no no­tice to ei­ther busi­nesses or con­sumers. Under sig­nif­i­cant pres­sure from us and the Commission, Apple can­celed their plan to sab­o­tage web apps in the EU.

Both Google and Mozilla began porting their browser engines, Blink and Gecko respectively, to iOS. Other browser vendors depend on these ports to bring their own engines to their browsers on iOS, as their products are typically soft forks (copies with modifications) of Blink or Gecko.

However, there were significant issues with Apple's contractual and technical restrictions that made porting browser engines to iOS "as painful as possible" for browser vendors.

Apple’s pro­pos­als fail to give con­sumers vi­able choices by mak­ing it as painful as pos­si­ble for oth­ers to pro­vide com­pet­i­tive al­ter­na­tives to Safari […] This is an­other ex­am­ple of Apple cre­at­ing bar­ri­ers to pre­vent true browser com­pe­ti­tion on iOS.

Many of Apple's barriers rely on vague security and privacy grounds for which Apple has published no detailed technical justification of either their necessity or proportionality. As the US Department of Justice wrote in its complaint:

In the end, Apple de­ploys pri­vacy and se­cu­rity jus­ti­fi­ca­tions as an elas­tic shield that can stretch or con­tract to serve Apple’s fi­nan­cial and busi­ness in­ter­ests.

In June 2024, we published a paper outlining these barriers.

We rec­og­nize un­der the DMA that we’ve been forced to change. And we have cre­ated a pro­gram that keeps se­cu­rity and pri­vacy in mind, that keeps the in­tegrity of the op­er­at­ing sys­tem in mind, and al­lows third par­ties to bring their browser en­gine, Google, Mozilla, to the plat­form. And for what­ever rea­son, they have cho­sen not to do so.

At the DMA workshop last week, we directly raised with Apple the primary blocker preventing third-party browser engines from shipping on iOS. Apple claimed that vendors like Google and Mozilla "have everything they need" to ship a browser engine in the EU and simply "have chosen not to do so".

Apple has been fully aware of these bar­ri­ers since at least June 2024, when we cov­ered them in ex­haus­tive de­tail. Multiple browser ven­dors have also dis­cussed these same is­sues with Apple di­rectly. The sug­ges­tion that Apple is un­aware of the prob­lems is not just ridicu­lous, it’s demon­stra­bly false. Apple knows ex­actly what the is­sues are. It is sim­ply re­fus­ing to ad­dress them.

The most crit­i­cal bar­ri­ers that con­tinue to block third-party en­gines on iOS in­clude:

Loss of ex­ist­ing EU users: Apple forces browser ven­dors to cre­ate en­tirely new apps to use their own en­gine, mean­ing they must aban­don all cur­rent EU users and start from scratch.

Web de­vel­oper test­ing: Apple al­lows na­tive app de­vel­op­ers out­side the EU to test EU-specific func­tion­al­ity, but of­fers no equiv­a­lent for web de­vel­op­ers to test their soft­ware us­ing third-party browser en­gines on iOS. Apple stated dur­ing the con­fer­ence to ex­pect up­dates here but pro­vided no de­tails.

No up­dates on long trips out­side EU: Apple has not con­firmed they will not dis­able browser up­dates (including se­cu­rity patches) if an EU user trav­els out­side the EU for more than 30 days. This, far from be­ing a se­cu­rity mea­sure, ac­tively low­ers users’ se­cu­rity by de­priv­ing them of se­cu­rity up­dates.

Hostile legal terms: The contractual conditions Apple imposes are harsh, one-sided, and incompatible with the DMA's requirement that rules for API access can only be strictly necessary and proportionate security measures.

Apple has ad­dressed two of the is­sues we raised in our orig­i­nal pa­per:

However, the most crit­i­cal bar­rier re­mains firmly in place: Apple still forces browser ven­dors to aban­don all their ex­ist­ing EU users if they want to ship a non-We­bKit en­gine. This sin­gle re­quire­ment de­stroys the busi­ness case for port­ing an en­gine to iOS. Building and main­tain­ing a full browser en­gine is a ma­jor un­der­tak­ing. Requiring ven­dors to start from scratch in one re­gion (even a re­gion as large as the EU), with zero users, makes the in­vest­ment com­mer­cially non­vi­able.

Instead, transaction and overhead costs for developers will be higher, rather than lower, since they must develop a version of their apps for the EU and another for the rest of the world. On top of that, if and when they exercise the possibility to, for instance, incorporate their own browser engines into their browsers (they formerly worked on Apple's proprietary WebKit), they must submit a separate binary to Apple for its approval. What does that mean exactly? That developers must ship a new version of their app to its customers, and "reacquire" them from zero.

Those are the ma­jor block­ers to browser ven­dors port­ing their own en­gines to iOS. The list of changes that we be­lieve Apple needs to make to be com­pli­ant with the DMA with re­spect to browsers and web apps on iOS is far larger, and we out­line them in de­tail at the end of the ar­ti­cle.

Perhaps the most important of these is the ability for browsers to install and manage web apps with their own engines, something that has been directly recommended by both the UK's MIR investigation and the UK's SMS investigations.

Gatekeepers can ham­per the abil­ity of end users to ac­cess on­line con­tent and ser­vices, in­clud­ing soft­ware ap­pli­ca­tions. Therefore, rules should be es­tab­lished to en­sure that the rights of end users to ac­cess an open in­ter­net are not com­pro­mised by the con­duct of gate­keep­ers.

What sets the web apart is that it was never de­signed to con­fine users within closed ecosys­tems. It is the world’s only truly open and in­ter­op­er­a­ble plat­form, re­quir­ing no con­tracts with OS gate­keep­ers, no rev­enue shar­ing with in­ter­me­di­aries, and no ap­proval from dom­i­nant plat­form own­ers. Anyone can pub­lish a web­site or build a web app with­out per­mis­sion. There are no built-in lock-in mech­a­nisms keep­ing users tied to a sin­gle com­pa­ny’s hard­ware or ser­vices. Users can switch browsers, move be­tween de­vices, and across ecosys­tems, all with­out los­ing ac­cess to their data, tools, or dig­i­tal lives.

This kind of free­dom sim­ply does­n’t ex­ist in app store-con­trolled en­vi­ron­ments, where every app up­date, trans­ac­tion, and user in­ter­ac­tion is sub­ject to cen­tral­ized con­trol, cen­sor­ship, or a manda­tory fi­nan­cial cut. The we­b’s ar­chi­tec­ture pri­or­i­tizes user au­ton­omy, de­vel­oper free­dom, and cross-plat­form com­pat­i­bil­ity.

Apple’s jus­ti­fi­ca­tion for its gate­keep­ing is se­cu­rity. Its po­si­tion is that only Apple can be trusted to de­cide what soft­ware users are al­lowed to in­stall. Every third party must sub­mit to its re­view and ap­proval process, no ex­cep­tions.

But the se­cure, in­ter­op­er­a­ble, and ca­pa­ble al­ter­na­tive al­ready ex­ists, and it’s thriv­ing. That so­lu­tion is the Web, and more specif­i­cally, web apps. On open plat­forms like desk­top, web tech­nolo­gies al­ready ac­count for over 70% of user ac­tiv­ity, and that fig­ure is only grow­ing.

Web apps offer the key properties needed to solve the cross-platform problem. They run inside the browser sandbox, which even Apple admits is "orders of magnitude more stringent than the sandbox for native iOS apps". They are fully interoperable across operating systems. They don't require contracts with OS vendors. And they're highly capable: if there were effective competition, around 90% of the apps on your phone could be delivered as web apps.

However, this promise only holds if browser ven­dors are al­lowed to com­pete, us­ing their own en­gines, on every plat­form. Without that, Apple can uni­lat­er­ally limit what the web is ca­pa­ble of, not just on iOS, but every­where. If a fea­ture can’t be used on a plat­form as crit­i­cal as iOS, then for many de­vel­op­ers, it may as well not ex­ist.

That’s why en­force­ment of the Digital Markets Act on this is­sue is so vi­tal, not just for the EU, but for the world.

The web is the world’s only truly in­ter­op­er­a­ble check against op­er­at­ing sys­tem plat­form mo­nop­o­lies. It must be al­lowed to com­pete fairly.

A key ques­tion is whether Apple is re­quired to fix this un­der the Digital Markets Act. Apple’s rep­re­sen­ta­tives ar­gue that browser ven­dors can port their own en­gines to iOS in the EU and at a highly su­per­fi­cial and tech­ni­cal level this is true. However, what Apple does not ac­knowl­edge is that the con­di­tions it im­poses make do­ing so fi­nan­cially un­vi­able in prac­tice. Does this re­ally count as com­pli­ance?

To an­swer that, we need to ex­am­ine the DMA it­self.

The pri­mary rel­e­vant ar­ti­cle in the Digital Markets Act is Article 5(7):

The gate­keeper shall not re­quire end users to use, or busi­ness users to use, to of­fer, or to in­ter­op­er­ate with, an iden­ti­fi­ca­tion ser­vice, a web browser en­gine or a pay­ment ser­vice, or tech­ni­cal ser­vices that sup­port the pro­vi­sion of pay­ment ser­vices, such as pay­ment sys­tems for in-app pur­chases, of that gate­keeper in the con­text of ser­vices pro­vided by the busi­ness users us­ing that gate­keep­er’s core plat­form ser­vices.

At face value, Apple ap­pears to have com­plied with the let­ter of Article 5(7). It tech­ni­cally al­lows third-party en­gines, but only un­der tech­ni­cal plat­form con­straints and con­trac­tual con­di­tions that ren­der port­ing non-vi­able.

The gate­keeper shall en­sure and demon­strate com­pli­ance with the oblig­a­tions laid down in Articles 5, 6 and 7 of this Regulation. The mea­sures im­ple­mented by the gate­keeper to en­sure com­pli­ance with those Articles shall be ef­fec­tive in achiev­ing the ob­jec­tives of this Regulation and of the rel­e­vant oblig­a­tion.

The gate­keeper shall not en­gage in any be­hav­iour that un­der­mines ef­fec­tive com­pli­ance with the oblig­a­tions of Articles 5, 6 and 7 re­gard­less of whether that be­hav­iour is of a con­trac­tual, com­mer­cial or tech­ni­cal na­ture, or of any other na­ture, or con­sists in the use of be­hav­ioural tech­niques or in­ter­face de­sign.

These two ar­ti­cles clar­ify that it is not enough for Apple to sim­ply al­low al­ter­na­tive en­gines in the­ory. The mea­sures must be ef­fec­tive in achiev­ing the ar­ti­cle’s ob­jec­tives, and Apple must not un­der­mine those ob­jec­tives by tech­ni­cal or con­trac­tual means.

The in­tent of this pro­vi­sion is laid out clearly in the recitals of the DMA:

In par­tic­u­lar, each browser is built on a web browser en­gine, which is re­spon­si­ble for key browser func­tion­al­ity such as speed, re­li­a­bil­ity and web com­pat­i­bil­ity. When gate­keep­ers op­er­ate and im­pose web browser en­gines, they are in a po­si­tion to de­ter­mine the func­tion­al­ity and stan­dards that will ap­ply not only to their own web browsers, but also to com­pet­ing web browsers and, in turn, to web soft­ware ap­pli­ca­tions. Gatekeepers should there­fore not use their po­si­tion to re­quire their de­pen­dent busi­ness users to use any of the ser­vices pro­vided to­gether with, or in sup­port of, core plat­form ser­vices by the gate­keeper it­self as part of the pro­vi­sion of ser­vices or prod­ucts by those busi­ness users.

In other words, Apple should not be in a position to dictate the features, performance, or standards of competing browsers and the web apps they power. That is, the intent is to guarantee that browser vendors have the freedom to implement their own engines, thereby removing Apple's ability to control the performance, features, and standards of competing browsers and the web apps built on them.

Fifteen months since the DMA came into force, no browser ven­dor has suc­cess­fully ported a com­pet­ing en­gine to iOS. The fi­nan­cial, tech­ni­cal, and con­trac­tual bar­ri­ers Apple has put in place re­main in­sur­mount­able. These re­stric­tions are not grounded in strictly nec­es­sary or pro­por­tion­ate se­cu­rity jus­ti­fi­ca­tions.

This is not what ef­fec­tive com­pli­ance looks like. Article 5(7)’s goals, en­abling en­gine-level com­pe­ti­tion and free­ing web apps from Apple’s ceil­ing on func­tion­al­ity and sta­bil­ity, have not been met. Under Article 8(1) and Article 13(4), that makes Apple non-com­pli­ant.

Apple has a clear le­gal oblig­a­tion to fix this. But will it act with­out pres­sure?

Any successful solution to allow browsers to use their own engines in the EU is highly likely to become global. Multiple regulators and government organizations, including in the UK, Japan, the USA, and Australia, have recommended ending Apple's ban on third-party browser engines. Further, multiple new laws have already been passed, including the UK's Digital Markets, Competition and Consumers Act (DMCC) and Japan's Smartphone Act, which directly prohibits it. Australia and the United States are also considering similar legislation. Finally, the U.S. Department of Justice is pursuing an antitrust case against Apple, and its complaint directly cites the issue.

With grow­ing in­ter­na­tional mo­men­tum, and con­tin­ued ad­vo­cacy push­ing for aligned global en­force­ment, Apple’s browser en­gine ban is fac­ing sus­tained and mount­ing pres­sure. If the EU suc­ceeds in forc­ing mean­ing­ful com­pli­ance un­der the DMA, it will set a global prece­dent. What reg­u­la­tor or gov­ern­ment would tol­er­ate such an ob­vi­ous re­stric­tion on com­pe­ti­tion in their own mar­ket once the EU has shown it can be dis­man­tled?

So why is Apple re­sist­ing this change so hard? They’ve al­ready fought, and lost, a high court bat­tle over it. Is this just a mat­ter of be­ing liti­gious? Hardly. Apple is act­ing ra­tio­nally, if un­eth­i­cally. At the end of the day, it’s all about pro­tect­ing rev­enue.

The UK reg­u­la­tor cites two in­cen­tives: pro­tect­ing their app store rev­enue from com­pe­ti­tion from web apps, and pro­tect­ing their Google search deal from com­pe­ti­tion from third-party browsers.

Apple re­ceives sig­nif­i­cant rev­enue from Google by set­ting Google Search as the de­fault search en­gine on Safari, and there­fore ben­e­fits fi­nan­cially from high us­age of Safari. […] The WebKit re­stric­tion may help to en­trench this po­si­tion by lim­it­ing the scope for other browsers on iOS to dif­fer­en­ti­ate them­selves from Safari […] As a re­sult, it is less likely that users will choose other browsers over Safari, which in turn se­cures Apple’s rev­enues from Google. […] Apple gen­er­ates rev­enue through its App Store, both by charg­ing de­vel­op­ers for ac­cess to the App Store and by tak­ing a com­mis­sion for pay­ments made via Apple IAP. Apple there­fore ben­e­fits from higher us­age of na­tive apps on iOS. By re­quir­ing all browsers on iOS to use the WebKit browser en­gine, Apple is able to ex­ert con­trol over the max­i­mum func­tion­al­ity of all browsers on iOS and, as a con­se­quence, hold up the de­vel­op­ment and use of web apps. This lim­its the com­pet­i­tive con­straint that web apps pose on na­tive apps, which in turn pro­tects and ben­e­fits Apple’s App Store rev­enues.

A third and interesting incentive, which the US Department of Justice cites, is that this behavior greatly weakens the interoperability of Apple's devices, making it harder for consumers to switch or multi-home. It also greatly raises the barriers to entry for new mobile operating system entrants by depriving them of a library of interoperable apps.

Apple has long un­der­stood how mid­dle­ware can help pro­mote com­pe­ti­tion and its myr­iad ben­e­fits, in­clud­ing in­creased in­no­va­tion and out­put, by in­creas­ing scale and in­ter­op­er­abil­ity. […] In the con­text of smart­phones, ex­am­ples of mid­dle­ware in­clude in­ter­net browsers, in­ter­net or cloud-based apps, su­per apps, and smart­watches, among other prod­ucts and ser­vices. […] Apple has lim­ited the ca­pa­bil­i­ties of third-party iOS web browsers, in­clud­ing by re­quir­ing that they use Apple’s browser en­gine, WebKit. […] Apple has sole dis­cre­tion to re­view and ap­prove all apps and app up­dates. Apple se­lec­tively ex­er­cises that dis­cre­tion to its own ben­e­fit, de­vi­at­ing from or chang­ing its guide­lines when it suits Apple’s in­ter­ests and al­low­ing Apple ex­ec­u­tives to con­trol app re­views and de­cide whether to ap­prove in­di­vid­ual apps or up­dates. Apple of­ten en­forces its App Store rules ar­bi­trar­ily. And it fre­quently uses App Store rules and re­stric­tions to pe­nal­ize and re­strict de­vel­op­ers that take ad­van­tage of tech­nolo­gies that threaten to dis­rupt, dis­in­ter­me­di­ate, com­pete with, or erode Apple’s mo­nop­oly power.

Interoperability via mid­dle­ware would re­duce lock-in for Apple’s de­vices. Lock-in is a clear rea­son for Apple to block in­ter­op­er­abil­ity, as can be seen in this email ex­change where Apple ex­ec­u­tives dis­miss the idea of bring­ing iMes­sage to Android.

The #1 most dif­fi­cult [reason] to leave the Apple uni­verse app is iMes­sage … iMes­sage amounts to se­ri­ous lock-in

iMes­sage on Android would sim­ply serve to re­move [an] ob­sta­cle to iPhone fam­i­lies giv­ing their kids Android phones … mov­ing iMes­sage to Android will hurt us more than help us, this email il­lus­trates why.

Apple has also long been concerned that the web could be a threat to its app store. In 2011, Philip Schiller internally sent an email to Eddy Cue to discuss the threat of HTML5 to the Apple App Store, titled "HTML5 poses a threat to both Flash and the App Store".

Food for thought: Do we think our 30/70% split will last for­ever? While I am a staunch sup­porter of the 30/70% split and keep­ing it sim­ple and con­sis­tent across our stores, I don’t think 30/70 will last un­changed for­ever. I think some­day we will see a chal­lenge from an­other plat­form or a web based so­lu­tion to want to ad­just our model

It is cru­cial that read­ers and reg­u­la­tors un­der­stand that this is not some triv­ial mat­ter for Apple. Allowing both browsers and the web to com­pete fairly on iOS will se­ri­ously harm Apple’s mar­gins and rev­enue.

Apple gets an as­ton­ish­ing $20 bil­lion a year from Google to set its search en­gine as the de­fault in Safari, ac­count­ing for 14-16 per­cent of Apple’s an­nual op­er­at­ing prof­its. Safari’s bud­get is a mere frac­tion of this, likely in the or­der of $300-400 mil­lion per year. This means that Safari is one of Apple’s most fi­nan­cially suc­cess­ful prod­ucts and the high­est mar­gin prod­uct Apple has ever made. For each 1% browser mar­ket share that Apple loses for Safari, Apple is set to lose $200 mil­lion in rev­enue per year.
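
The arithmetic behind that last figure is straightforward: 1% of the $20 billion annual Google payment comes to $200 million per year.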

In 2024, Apple is es­ti­mated to have col­lected $27.4 bil­lion from $91.3 bil­lion in sales on its app store, un­der­scor­ing its role as a crit­i­cal and ex­pand­ing source of profit. By con­trast, the ma­cOS App Store, where Apple does not ex­er­cise the same gate­keep­ing power over browsers or app dis­tri­b­u­tion, re­mains a much smaller op­er­a­tion, with rev­enue that Apple chooses not to re­port.

Web apps, which al­ready have a dom­i­nant 70% share on desk­top, can re­place most of the apps on your phone. Even a far more mod­est 20% shift to­wards web apps would rep­re­sent a $5.5 bil­lion an­nual loss in rev­enue.
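
For scale, that $5.5 billion figure is roughly 20% of the estimated $27.4 billion in annual App Store commissions cited above.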

This is important because it explains why Apple will not voluntarily make these changes. No rational actor with such a tight monopolistic grip on a market (the market for browsers and the market for apps on iOS) would give that up if they could plausibly hang onto it by subtly or explicitly undermining attempts to open it up. Apple's statements about engaging or making changes are meaningless; only the concrete actions it has taken to date can be measured.

These changes, and the competition and interoperability they bring, will literally cost Apple billions if not tens of billions per year. On the flip side, these are savings that developers and consumers are missing out on, both in terms of quality of apps and services, and direct costs. This is money that Apple is extracting out of the market via its control of iOS on high-cost, high-margin devices sold to consumers at full price.

With a mar­ket value of $3 tril­lion, Apple has a le­gal bud­get of over $1 bil­lion a year, giv­ing it le­gal power that out­strips that of small na­tions. It is also not afraid to step as close to the line of non-com­pli­ance as pos­si­ble, as Apple’s for­mer gen­eral coun­sel ex­plains:

work out how to get closer to a par­tic­u­lar risk but be pre­pared to man­age it if it does go nu­clear, … steer the ship as close as you can to that line be­cause that’s where the com­pet­i­tive ad­van­tage oc­curs. … Apple had to pay a large fine, Tim [Cook]’s re­ac­tion was that’s the right choice, don’t let that scare you, I don’t want you to stop push­ing the en­ve­lope.

This, un­for­tu­nately, means that reg­u­la­tion is the only an­swer. Even Open Web Advocacy was only formed af­ter we had ex­hausted every pos­si­ble av­enue at try­ing to con­vince Apple to de­velop crit­i­cal web func­tion­al­ity.

Many other parties have attempted to negotiate with Apple on these topics over the last 15 years, and all have come to naught; the power imbalance and the incentives for Apple not to act are simply too strong.

Some have tried to frame the DMA as a clash be­tween the EU and the US, with the DMA un­fairly tar­get­ing American tech gi­ants, but that is not the case.

For US negotiators to carve out exemptions for American companies now would defang the DMA and stall its pro-competition benefits just as they begin to be felt. […] The victims of a DMA pause would be America's most innovative upstarts — especially AI start-ups. The DMA's interoperability and fairness rules were designed to pry open closed platforms and give smaller companies a fighting chance. […] Big Tech lobbyists portray the DMA as anti-American. In reality, the DMA's goals align with American ideals of fair competition. This isn't Europe versus America; it's open markets versus closed ones.

The re­al­ity is this: Apple stands alone in en­forc­ing a ban on com­pet­ing browser en­gines and sup­press­ing web app com­pe­ti­tion on iOS. No other gate­keeper im­poses such a re­stric­tion.

In fact, the three major organizations working to port alternative browser engines to iOS (Google, Mozilla, and Microsoft) are themselves American. Smaller browser vendors, many of whom are also based in the US, are depending on these efforts. Apple's restrictions don't serve consumers, startups, web developers, native app creators, or even other American tech companies. They serve only Apple, which makes billions per year from undermining both browser and web app competition on iOS.

Through front groups like ACT, which Apple pri­mar­ily funds, the com­pany may at­tempt to re­frame this is­sue as the EU tar­get­ing suc­cess­ful US firms. But that’s a dis­trac­tion. This is­n’t Europe ver­sus America, it’s Apple ver­sus the World.

At the DMA work­shop last Monday, we had a chance to ask some of these ques­tions, and to chat with Apple’s ever-charm­ing Gary Davis (Senior Director, Apple Legal) on the side­lines. While we are strongly op­posed to Apple’s on­go­ing anti-com­pet­i­tive con­duct, we do deeply ap­pre­ci­ate that Gary and Kyle were will­ing to come over and par­tic­i­pate in per­son.

To kick off the first of OWA's questions on browser engines, Roderick Gadellaa asked the key question: Why has no browser vendor been able to bring their own engine to iOS, even after 15 months of the DMA being in force?

The DMA has been in force now for 15 months. Despite this, not a sin­gle browser ven­dor has been able to port their browser us­ing its own en­gine to iOS. It’s not be­cause they’re in­ca­pable or they don’t want to, it’s be­cause Apple’s strange poli­cies are mak­ing this nearly im­pos­si­ble.

One of the key is­sues slow­ing progress is that Apple is not al­low­ing browser ven­dors to up­date their ex­ist­ing browser app to use their own en­gine in the EU, and Apple’s WebKit en­gine else­where. This means that browser ven­dors have to ship a whole new app just for the EU and tell their ex­ist­ing EU cus­tomers to down­load their new app and start build­ing the user base from scratch.

Now, we would love for Apple to al­low com­pet­ing browsers to ship their own en­gines glob­ally. But if they in­sist on al­low­ing this only in the EU, Apple can eas­ily re­solve this prob­lem. Here’s how:

They can al­low browsers to ship two sep­a­rate ver­sions of their ex­ist­ing browser to the App Store, one ver­sion for the EU and one for the rest of the world. Something which is cur­rently pos­si­ble in other App Stores. This would al­low ex­ist­ing European users to get the European ver­sion of the app with­out hav­ing to down­load a sep­a­rate app sim­ply by re­ceiv­ing a soft­ware up­date. But it seems Apple does­n’t want that, and they make this very clear in their browser en­gine en­ti­tle­ment con­tract.

Given that, Apple can eas­ily re­solve this prob­lem sim­ply by al­low­ing browsers to ship a sep­a­rate ver­sion of the app to the EU un­der the same bun­dle ID. Why is Apple still in­sist­ing that browser ven­dors lose all their ex­ist­ing EU cus­tomers in or­der to take ad­van­tage of the rights granted un­der the DMA? Thank you.

Coalition for Open Digital Ecosystems (a European advocacy group with members including Google, Meta, Qualcomm and Opera) also asked about the difficulty in porting browser engines:

Apple has made some changes to its rule gov­ern­ing third-party browsers and the abil­ity to use other browsers en­gines in the EU. However, as was al­ready men­tioned, they have var­i­ous re­stric­tions, in­clud­ing hav­ing two dif­fer­ent ver­sions of the app, lim­i­ta­tions on test­ing, cum­ber­some con­tract re­quire­ments, still mak­ing it oner­ous to mean­ing­fully take ad­van­tage of the browser en­gine in­ter­op­er­abil­ity. Which is why no one has re­ally suc­cess­fully launched on iOS us­ing an al­ter­na­tive browser en­gine. What is Apple go­ing to do to en­able the third par­ties to launch a browser on iOS via an al­ter­na­tive en­gine?

Gary Davis (Senior Director Apple Legal) and Kyle Andeer (Vice President Apple Legal) were there to an­swer the ques­tions:

Let me take the browser engine first. I know this is all just conversation is supposed to be about browser choice screens and defaults, but I know some of you, many of you with the same group, have traveled very far to have this conversation. And so I'll take a question on that, which is, listen: as everyone knows, when we designed and released iOS and iPadOS over 15 years ago, we were hyper focused on how do we create the most secure computing platform in the world. We built it from the ground up with security and privacy in mind. The browser engine was a critical aspect of that design. WebKit was that aspect of the design. And that has worked for 18 years. We recognize under the DMA that we've been forced to change. And we have created a program that keeps security and privacy in mind, that keeps the integrity of the operating system in mind, and allows third parties to bring their browser engine, Google, Mozilla, to the platform. And for whatever reason, they've chosen not to do so. And so we remain open. We remain open to engagement. We have had conversations, constructive conversations with Mozilla, less constructive engagement from the other party, but we are working to resolve that, those differences, and bring them to iOS in a way that we feel comfortable with in terms of security, privacy, and integrity perspective.

Kyle began by incorrectly asserting that the session was focused solely on browser choice screens and defaults, despite the session being explicitly titled "Browsers". This appeared to suggest that our question on browser engines was somehow out of scope.

He acknowledged that under the DMA, Apple is now required to allow third-party browser engines on iOS. He then reiterated Apple's long-standing talking points: that iOS was built from the ground up with security and privacy in mind, that WebKit is a core part of that design, and that any changes must preserve what Apple deems the "integrity" of the platform.

However, the fact that Safari heav­ily reuses iOS code and com­po­nents is un­likely to be a gen­uine se­cu­rity fea­ture and is al­most cer­tainly a cost-sav­ing mea­sure. By reusing code and li­braries be­tween iOS com­po­nents, Apple can save sig­nif­i­cant amounts on staffing. This comes with two sig­nif­i­cant down­sides: First it wors­ens se­cu­rity by lock­ing Safari up­dates to iOS up­dates, in­creas­ing the time it takes se­cu­rity patches to reach users. Second, this tight cou­pling harms Safari it­self by mak­ing it dif­fi­cult for Apple to port its browser to other op­er­at­ing sys­tems, ul­ti­mately weak­en­ing its com­pet­i­tive­ness and reach. It also means that Apple can’t of­fer beta ver­sions of Safari to iOS users with­out them in­stalling an en­tire beta ver­sion of the op­er­at­ing sys­tem, a lim­i­ta­tion that other browsers do not have.

According to Kyle, Apple has created a program that allows third-party engines "in a way we feel comfortable with in terms of security, privacy, and integrity", but offered no specifics. He then shifted blame onto browser vendors, stating that Mozilla and Google have simply "chosen not to" bring their engines to iOS, omitting the fact that Apple's technical and contractual constraints make doing so unviable.

There’s a lot of OWA peo­ple here in the rooms so well done on that. I also half the ques­tions at least were about browser en­gines, which is ob­vi­ously an Article 5(7) as op­posed to a 6(3) is­sue. More than happy as Kyle al­ready did to ad­dress the ques­tion, but I think it would be a shame that a ses­sion that is about choice screens and unin­stal­la­tion and the de­faults be­come a browser en­gine dis­cus­sion. I was pleased that Kush was nod­ding when Kyle was point­ing out the on­go­ing en­gage­ments with Google and Mozilla, which are con­tin­u­ing right up even to last week, and I think just some more this week. There was a bot­tom line is­sue, how­ever, which is that both Google and Mozilla have every­thing they need to build their en­gines and ship them on iOS to­day. We heard some other is­sues men­tioned. We are happy to en­gage on those is­sues. We are en­gag­ing on those is­sues, but every­thing is in place to ship here in the EU to­day. I think that’s an ex­tremely im­por­tant point to take away from this.

Gary re­it­er­ated Kyle’s sug­ges­tion that the ques­tions on browser en­gines were out of scope and that browser ven­dors have every­thing they need to ship a browser en­gine on iOS to­day.

I think one other point I wanna make sure I ad­dress as I re­flected upon the end, there was a ques­tion about why we don’t do this on a global ba­sis. And I think we’ve al­ways ap­proached the DMA as to the European law that re­lates to Europe. And we are not go­ing to ex­port European law to the United States, and we’re not go­ing to ex­port European law to other ju­ris­dic­tions. Each ju­ris­dic­tion should have the free­dom and de­ci­sion mak­ing to make its own de­ci­sions. And so we’re go­ing to abide by that.

Kyle concluded by asserting that Apple would comply with the DMA only within the EU, stating that it would not "export a European law to the United States". This ignores the reality that Apple has, in fact, already extended several EU-driven changes globally, including USB-C charging for iPhones, support for game emulators, NFC access for third-party payments, the new default apps page, and no longer hiding the option to change the default browser if Safari was the default.

While we would prefer that Apple enable browser competition globally on iOS, we recognize that the DMA does not require it to do so. We highlight these globally adopted changes simply to point out that Apple could choose to take the same pro-competitive approach here. This restriction not only undermines global interoperability, but also weakens the effectiveness of the solution for EU users themselves.

[…] it’s OK to ask ques­tions which are other ques­tions re­lated to browsers. So I think that’s to­tally OK given the name of the ses­sion.

...

Read the original on open-web-advocacy.org »

5 384 shares, 16 trendiness

Let's Learn x86-64 Assembly! Part 0

The way I was taught x86 as­sem­bly at the uni­ver­sity had been com­pletely out­dated for many years by the time I had my first class. It was around 2008 or 2009, and 64-bit proces­sors had al­ready started be­com­ing a thing even in my neck of the woods. Meanwhile, we were do­ing DOS, real-mode, mem­ory seg­men­ta­tion and all the other stuff from the bad old days.

Nevertheless, I picked up enough of it dur­ing the classes (and over the sub­se­quent years) to be able to un­der­stand the stuff com­ing out of the other end of a com­piler, and that has helped me a few times. However, I’ve never man­u­ally writ­ten any sub­stan­tial amount of x86 as­sem­bly for some­thing non-triv­ial. Due to be­ing locked up in­side (on ac­count of a global pan­demic), I de­cided to change that sit­u­a­tion, to pass the time.

I wanted to fo­cus on x86-64 specif­i­cally, and com­pletely for­get/​skip any and all legacy crap that is no longer rel­e­vant for this ar­chi­tec­ture. After get­ting a bit deeper into it, I also de­cided to pub­lish my notes in the form of tu­to­ri­als on this blog since there seems to be a de­sire for this type of con­tent.

Everything I write in these posts will be a normal, 64-bit, Windows program. We'll be using Windows because that is the OS I'm running on all of my non-work machines, and when you drop down to the level of writing assembly it starts becoming increasingly impossible to ignore the operating system you're running on. I will also try to go as "from scratch" as possible - no libraries, we're only allowed to call out to the operating system and that's it.

In this first, in­tro­duc­tory part (yeah, I’m plan­ning a se­ries and I know I will re­gret this later), I will talk about the tools we will need, show how to use them, ex­plain how I gen­er­ally think about pro­gram­ming in as­sem­bly and show how to write what is per­haps the small­est vi­able Windows pro­gram.

There are two main tools that we will use through­out this se­ries.

CPUs ex­e­cute ma­chine code - an ef­fi­cient rep­re­sen­ta­tion of in­struc­tions for the proces­sor that is al­most com­pletely im­pen­e­tra­ble to hu­mans. The as­sem­bly lan­guage is a hu­man-read­able rep­re­sen­ta­tion of it. A pro­gram that con­verts this sym­bolic rep­re­sen­ta­tion into ma­chine code ready to be ex­e­cuted by a CPU is called an as­sem­bler.
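
For example, an assembler turns a line like the one below into the five bytes B8 07 00 00 00, which is the machine code the CPU actually fetches and executes:

    mov eax, 7    ; assembles to the bytes B8 07 00 00 00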

There is no single, agreed-upon standard for x86-64 assembly language. There are many assemblers out there, and even though some of them share a great deal of similarities, each has its own set of features and quirks. It is therefore important which assembler you choose. In this series, we will be using Flat Assembler (or FASM for short). I like it because it's small, easy to obtain and use, has a nice macro system, and comes with a handy little editor.

Another im­por­tant tool is the de­bug­ger. We’ll use it to ex­am­ine the state of our pro­grams. While I’m pretty sure it’s pos­si­ble to use Visual Studio’s in­te­grated de­bug­ger for this, I think a stand­alone de­bug­ger is bet­ter when all you want to do is look at the dis­as­sem­bly, mem­ory and reg­is­ters. I’ve al­ways used OllyDbg for stuff like that, but un­for­tu­nately it does not have a 64-bit ver­sion. Therefore we will be us­ing WinDbg. The ver­sion linked here is a re­vamp of this ven­er­a­ble tool with a slightly nicer in­ter­face. Alternatively, you can get the non-Win­dows-store ver­sion here as part of the Windows 10 SDK. Just make sure you de­s­e­lect every­thing else be­sides WinDbg dur­ing in­stal­la­tion. For our pur­poses, the two ver­sions are mostly in­ter­change­able.

Now that we have our tools, I want to spend a bit of time discussing some basics. For the purpose of these tutorials I'm assuming some knowledge of languages like C or C++, but little or no previous exposure to assembly; therefore, many readers will find this stuff familiar.

CPUs only "know" how to do a fixed number of certain things. When you hear someone talk about an "instruction set", they're referring to the set of things a particular CPU has been designed to do, and the term "instruction" just means "one of the things a CPU can do". Most instructions are parameterized in one way or another, and they're generally really simple. Usually an instruction is something along the lines of "write a given 8-bit value to a given location in memory", or "interpret the values from registers A and B as 16-bit signed integers, multiply them, and record the result into register A".
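
To make that concrete, here is a small illustrative sketch of a few x86-64 instructions in the Intel-style syntax FASM uses (the label buffer and the values are made up for the example):

    mov byte [buffer], 42   ; write the 8-bit value 42 to the memory location labeled buffer
    mov eax, 7              ; put the 32-bit value 7 into the register eax
    imul eax, ebx           ; signed-multiply eax by ebx, keeping the low 32 bits of the result in eax
    add rax, 8              ; add 8 to the 64-bit register rax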

Below is a sim­ple men­tal model of the ar­chi­tec­ture that we’ll start with.

This skips a ton of things (there can be more than one core executing instructions and reading/writing memory, there are different levels of cache, etc. etc.), but should serve as a good starting point.

To be ef­fec­tive at low-level pro­gram­ming or de­bug­ging you need to un­der­stand that every high-level con­cept even­tu­ally maps to this low-level model, and learn­ing how the map­ping works will help you.

You can think of registers as a special kind of memory built right into the CPU that is very small, but extremely fast to access. There are many different kinds of registers in x86-64, and for now we'll concern ourselves only with the so-called general-purpose registers, of which there are sixteen. Each of them is 64 bits wide, and for each of them the lower byte, word and double-word can be addressed individually (incidentally, 1 "word" = 2 bytes, 1 "double-word" = 4 bytes, in case you haven't heard this terminology before).

Additionally, the higher 8 bits of the lowest word of rax, rbx, rcx and rdx (that is, bits 8 through 15) can be referred to as ah, bh, ch and dh.
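As an illustration, here is how the differently-sized pieces of rax can be written to individually:

mov rax, 1122334455667788h ; writes all 64 bits of rax
mov eax, 99AABBCCh         ; writes the low 32 bits (and zeroes the upper 32 bits of rax)
mov ax, 0DDEEh             ; writes only the low 16 bits
mov al, 0FFh               ; writes only the lowest byte
mov ah, 77h                ; writes bits 8 through 15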

Note that even though I said those were "general-purpose" registers, some instructions can only be used with certain registers, and some registers have special meaning for certain instructions. In particular, rsp holds the stack pointer (which is used by instructions like push, pop, call and ret), and rsi and rdi serve as source and destination index for "string manipulation" instructions. The multiplication instructions are another example of certain registers getting "special treatment": they require one of the multiplier values to be in the register rax, and write the result into the pair of registers rax and rdx.
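For example, a 64-bit unsigned multiplication looks like this:

mov rax, 6
mov rcx, 7
mul rcx ; unsigned multiply: rdx:rax = rax * rcx, so here rax = 42 and rdx = 0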

In addition to these registers, we will also consider the special registers rip and rflags. rip holds the address of the next instruction to execute. It is modified by control flow instructions like call or jmp. rflags holds a bunch of binary flags indicating various aspects of the program's state, such as whether the result of the last arithmetic operation was less, equal or greater than zero. The behavior of many instructions depends on those flags, and many instructions update certain flags as part of their execution. The flags register can also be read and written "wholesale" using special instructions.
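As a small illustration, this is how a comparison and a conditional jump communicate through rflags (equal_case is just a placeholder label):

cmp rax, rbx  ; computes rax - rbx, discards the result, and updates the flags
je equal_case ; jumps to equal_case if the zero flag is set, i.e. if rax == rbx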

There are a lot more reg­is­ters on x86-64. Most of them are used for SIMD or float­ing-point in­struc­tions, and we’ll not be con­sid­er­ing them in this se­ries.

You can think of memory as a large array of byte-sized "cells", numbered starting at 0. We'll call these numbers "memory addresses". Simple, right?

Well… addressing memory used to be rather annoying back in the old days. You see, registers in old x86 processors used to be only 16 bits wide. Sixteen bits is enough to address 64 kilobytes worth of memory, but not more. The hardware was actually capable of using addresses as wide as 20 bits, but you had to put a "base" address into a special segment register, and instructions that read or wrote memory would use a 16-bit offset into that segment to obtain the final 20-bit "linear" address. There were separate segment registers for the code, data and stack portions (and a few more "extra" ones), and segments could overlap.

In x86-64 these concerns are non-existent. The segment registers for code, data and stack are still present, and they're loaded with some special values, but as a user-space programmer you needn't concern yourself with them. For all intents and purposes you can assume that all segments start at 0 and extend for the entire addressable length of memory. So, as far as we're concerned, on x86-64 our programs see memory as a "flat" contiguous array of bytes, with sequential addresses, starting at 0, just like we said in the beginning of this section…

Okay, I may have dis­torted the truth a lit­tle bit. Things aren’t quite as sim­ple. While it is true that on 64-bit Windows your pro­grams see mem­ory as a flat con­tigu­ous ar­ray of bytes with ad­dresses start­ing at 0, it is ac­tu­ally an elab­o­rate il­lu­sion main­tained by the OS and CPU work­ing to­gether.

The truth is, if you were re­ally able to read and write any byte in mem­ory willy-nilly, you’d stomp all over other pro­grams’ code and data (something that in­deed could hap­pen in the Bad Old Days). To pre­vent that, spe­cial pro­tec­tion mech­a­nisms ex­ist. I won’t get too deep into their in­ner work­ings here be­cause this stuff mat­ters mostly for OS de­vel­op­ers. Nevertheless, here’s a very short overview:

Each process gets a "flat" address space as described above (we'll call it the "virtual address space"). For each process, the OS sets up a mapping between its virtual addresses and actual physical addresses in memory. This mapping is respected by the hardware: the "virtual" addresses get translated to physical addresses dynamically at runtime. Thus, the same address (e.g. 0x410F119C) can map to two different locations in physical memory for two different processes. This, in a nutshell, is how the separation between processes is enforced.

The fi­nal thing I want to in­vite your at­ten­tion to here is how the in­struc­tions and data which they op­er­ate on are held in the same mem­ory. While it may seem an ob­vi­ous choice, it’s not how com­put­ers nec­es­sar­ily have to work. This is a prop­erty char­ac­ter­is­tic of the von Neumann model - as op­posed to the Harvard model, where in­struc­tions and data are held in sep­a­rate mem­o­ries. A real-world ex­am­ple of a Harvard com­puter is the AVR mi­cro­con­troller on your Arduino.

Hopefully by this point you have down­loaded FASM and are ready to write some code. Our first pro­gram will be re­ally sim­ple: it will load and then im­me­di­ately exit. We mostly want it just to get ac­quainted with the tools.

Here’s the code for our first pro­gram in x86-64 as­sem­bly:

format PE64 NX GUI 6.0
entry start

section '.text' code readable executable

start:
    int3
    ret

We’ll go through this line-by-line.

for­mat PE64 NX GUI 6.0 - this is a di­rec­tive telling FASM the for­mat of the bi­nary we ex­pect it to pro­duce - in our case, Portable Executable Format (which is what most Windows pro­grams use). We’ll talk about it in a bit more de­tail later.

entry start - this defines the entry point into our program. The entry directive requires a label, which in this case is "start". A label can be thought of as a name for an address within our program, so in this case we're saying "the entry point to the program is at whatever address the 'start' label is". Note that you're allowed to refer to labels even if they're defined later in the program code (as is the case here).

section '.text' code readable executable - this directive indicates the beginning of a new section in a Portable Executable file, in this case a section containing executable code. More on this later.

start: - this is the label that denotes the entry point to our program. We referred to it earlier in the "entry" directive. Note that labels themselves don't produce any executable machine code: they're just a way for the programmer to mark locations within the executable's address space.

int3 - this is a spe­cial in­struc­tion that causes the pro­gram to call the de­bug ex­cep­tion han­dler - when run­ning un­der a de­bug­ger, this will pause the pro­gram and al­low us to ex­am­ine its state or pro­ceed with the ex­e­cu­tion step-by-step. This is how break­points are ac­tu­ally im­ple­mented - the de­bug­ger re­places a sin­gle byte in the ex­e­cutable with the op­code cor­re­spond­ing to int3, and when the pro­gram hits it, the de­bug­ger takes over (obviously, the orig­i­nal con­tent of the mem­ory at break­point ad­dress has to be re­mem­bered and re­stored be­fore pro­ceed­ing with ex­e­cu­tion or sin­gle-step­ping). In our case, we are hard-cod­ing a break­point im­me­di­ately at the en­try point for con­ve­nience, so that we don’t have to set it man­u­ally via the de­bug­ger every time.

ret - this in­struc­tion pops off an ad­dress from the top of the stack, and trans­fers ex­e­cu­tion to that ad­dress. In our case, we’ll re­turn into the OS code that ini­tially in­voked our en­try point.

Fire up FASMW.EXE, paste the code above into the editor, save the file and press Ctrl+F9. Your first assembly program is now complete! Let's now load it up in a debugger and single-step through it to see it actually working.

Open up WinDbg. Go to the View tab and make sure the fol­low­ing win­dows are vis­i­ble: Disassembly, Registers, Stack, Memory and Command. Go to File > Launch Executable and se­lect the ex­e­cutable you just built with FASM. At this point your work­space should re­sem­ble some­thing like this:

In the dis­as­sem­bly win­dow you can see the code that is cur­rently be­ing ex­e­cuted. Right now it’s not our pro­gram’s code, but some OS loader code - this stuff will load our pro­gram into mem­ory and even­tu­ally trans­fer ex­e­cu­tion to our en­try point. WinDbg en­sures a break­point is trig­gered be­fore any of that hap­pens.

In the reg­is­ters win­dow, you can see the con­tents of x86-64 reg­is­ters that we dis­cussed ear­lier.

The mem­ory win­dow shows the raw con­tent of the pro­gram’s mem­ory around a given vir­tual ad­dress. We’ll use it later.

The stack win­dow shows the cur­rent call stack (as you can see, it’s all in­side nt­dll.dll right now).

Finally, the com­mand win­dow al­lows en­ter­ing text com­mands and shows log mes­sages.

If you press F5 at this time, it will cause the pro­gram to con­tinue run­ning un­til it hits an­other break­point. The next break­point it will hit is the one we hard­coded. Try press­ing F5, and you’ll see some­thing like this:

You should be able to rec­og­nize the two in­struc­tions we wrote - int3 and ret. To ad­vance to the next in­struc­tion, press F8. When you do that, pay at­ten­tion to the reg­is­ters win­dow - you should see the rip reg­is­ter be­ing up­dated as you ad­vance (WinDbg high­lights the reg­is­ters that change in red).

Right af­ter the ret in­struc­tion is ex­e­cuted, you will re­turn to the code that in­voked our pro­gram’s en­try point.

As you can see from the im­age above, the next thing that will hap­pen is a call to RtlExitUserThread (a pretty self-ex­plana­tory name). If you press F5 now, your pro­gram’s main thread will clean up and end, and so will the pro­gram. Or will it?…

The truth is, by us­ing ret, I took a bit of a short­cut. On Windows a process will ter­mi­nate if any of the fol­low­ing con­di­tions are met:

But, we’re ex­it­ing the main thread here so we should be good, right? Well, sort of. There’s no guar­an­tee that Windows has­n’t started any other back­ground threads (for ex­am­ple, to load DLLs or some­thing like that) within our process. It seems that at least in this ex­am­ple, the main thread is the only one (I’ve checked and the process does­n’t stick around), but this may change. A well-be­haved Windows pro­gram should al­ways call ExitProcess at the ap­pro­pri­ate time.

In or­der to be able to call WinAPI func­tions, we need to learn a few things about the Portable Executable file for­mat, how DLLs are loaded and call­ing con­ven­tions.

The ExitProcess function lives in KERNEL32.DLL (yes, that's not a typo: KERNEL32 is the name of the 64-bit library. The 32-bit versions of those libs, provided for back-compat purposes, live in a folder named SysWOW64. I'm not joking.). In order to be able to call it, we first need to import it.

We won’t cover the Portable Executable for­mat in its en­tirety here. It is doc­u­mented ex­ten­sively on the Microsoft docs web­site. Here are a cou­ple of ba­sic facts we’ll need to know:

PE files are com­prised of sec­tions. We have al­ready seen a sec­tion con­tain­ing ex­e­cutable code in our pro­gram, but sec­tions may con­tain other types of data.

Information about what symbols are imported from which DLLs is stored in a special section called '.idata'.

Let’s have a look at the .idata sec­tion.

As per the docs, the .idata sec­tion be­gins with an im­port di­rec­tory table (IDT). Each en­try in the IDT cor­re­sponds to one DLL, is 20 bytes in length and con­sists of the fol­low­ing fields:

A 4-byte relative virtual address (RVA) of the Import Lookup Table (ILT), which contains the names of functions to import. More on that later.

A 4-byte RVA of a null-ter­mi­nated string con­tain­ing the name of the DLL

A 4-byte RVA of the Import Address Table (IAT). The struc­ture of the IAT is the same as ILT, the only dif­fer­ence is that the con­tent of IAT is mod­i­fied at run­time by the loader - it over­writes each en­try with the ad­dress of the cor­re­spond­ing im­ported func­tion. So the­o­ret­i­cally, you can have both ILT and IAT fields point to the same ex­act piece of mem­ory. Moreover, I’ve found that set­ting the ILT pointer to zero also works, al­though I am not sure if this be­hav­ior is of­fi­cially sup­ported.

The Import Directory Table is terminated by an entry where all fields are equal to zero.

The ILT/IAT is an ar­ray of 64-bit val­ues ter­mi­nated by a null value. The bot­tom 31 bits of each en­try con­tain the RVA of an en­try in a hint/​name table (containing the name of the im­ported func­tion). During run­time, the en­tries of the IAT are re­placed with the ac­tual ad­dresses of the im­ported func­tions.

The hint/name table mentioned above consists of entries, each of which needs to be aligned on an even boundary. Each entry begins with a 2-byte hint (which we'll ignore for now), followed by a null-terminated string containing the imported function name, plus a padding byte (if necessary) to align the next entry on an even boundary.

With that out of the way, let's see how we would define our executable's .idata section in FASM:

section '.idata' import readable writeable

idt: ; import directory table starts here
    ; entry for KERNEL32.DLL
    dd rva kernel32_iat
    dd 0
    dd 0
    dd rva kernel32_name
    dd rva kernel32_iat
    ; NULL entry - end of IDT
    dd 5 dup(0)

name_table: ; hint/name table
    _ExitProcess_Name dw 0
    db "ExitProcess", 0, 0

kernel32_name: db "KERNEL32.DLL", 0

kernel32_iat: ; import address table for KERNEL32.DLL
    ExitProcess dq rva _ExitProcess_Name
    dq 0 ; end of KERNEL32's IAT

The di­rec­tive for a new PE sec­tion is al­ready fa­mil­iar to us. In this case, we’re com­mu­ni­cat­ing that the sec­tion we’re about to in­tro­duce con­tains the im­ports data and needs to be made write­able when loaded into mem­ory (since ad­dresses of the im­ported func­tions will be writ­ten in there).

The directives db, dw, dd and dq all cause FASM to emit a raw byte/word/double-word/quad-word value respectively. The rva operator, unsurprisingly, yields the relative virtual address of its argument. So, dd rva kernel32_iat will cause FASM to emit a 4-byte binary value equal to the RVA of the kernel32_iat label.
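For instance, these directives emit the following raw bytes (note the little-endian byte order for multi-byte values):

db 41h       ; emits the single byte 41
dw 4241h     ; emits two bytes: 41 42
dd 44434241h ; emits four bytes: 41 42 43 44
dq 0         ; emits eight zero bytes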

Here we've just made use of FASM's db/dw/etc. directives to precisely describe the contents of our .idata section.

We’re now al­most ready to fi­nally call ExitProcess. One thing we have to an­swer though, is - how does a func­tion call work? Think about it. There is a call in­struc­tion, which pushes the cur­rent value of rip onto the stack, and trans­fers ex­e­cu­tion to the ad­dress spec­i­fied by its pa­ra­me­ter. There is also the ret in­struc­tion, which pops off an ad­dress from the stack and trans­fers ex­e­cu­tion there. Nowhere is it spec­i­fied how ar­gu­ments should be passed to a func­tion, or how to han­dle the re­turn val­ues. The hard­ware sim­ply does­n’t care about that. It is the job of the caller and the callee to es­tab­lish a con­tract be­tween them­selves. These rules might look along the lines of:

The caller shall push the ar­gu­ments onto the stack (starting from the last one)

The callee shall re­move the pa­ra­me­ters from the stack be­fore re­turn­ing.

The callee shall place re­turn val­ues in the reg­is­ter eax

A set of rules like that is re­ferred to as the call­ing con­ven­tion, and there are many dif­fer­ent call­ing con­ven­tions in use. When you try to call a func­tion from as­sem­bly, you must know what type of call­ing con­ven­tion it ex­pects.

The good news is that on 64-bit Windows there’s pretty much only one call­ing con­ven­tion that you need to be aware of - the Microsoft x64 call­ing con­ven­tion. The bad news is that it’s a tricky one - un­like many of the older con­ven­tions, it re­quires the first few pa­ra­me­ters to be passed via reg­is­ters (as op­posed to be­ing passed on the stack), which can be good for per­for­mance.

You may read the full docs if you're interested in the details; I will cover only the parts of the calling convention relevant to us here:

The stack pointer has to be aligned to a 16-byte bound­ary

The first four in­te­ger or pointer ar­gu­ments are passed in the reg­is­ters rcx, rdx, r8 and r9; the first four float­ing point ar­gu­ments are passed in reg­is­ters xmm0 to xmm3. Any ad­di­tional args are passed on the stack.

Even though the first 4 arguments aren't passed on the stack, the caller is still required to allocate 32 bytes of space for them on the stack. This has to be done even if the function has fewer than 4 arguments.

The caller is re­spon­si­ble for clean­ing up the stack.
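Putting these rules together, a minimal sketch of calling ExitProcess using the import table defined above might look like this (the exact code we end up with may differ slightly):

sub rsp, 40        ; 32 bytes of shadow space, plus 8 so the stack is 16-byte aligned at the call
xor ecx, ecx       ; the first (and only) integer argument, the exit code, goes in rcx
call [ExitProcess] ; indirect call through the IAT entry defined earlier; ExitProcess never returns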

...

Read the original on gpfault.net »

6 358 shares, 23 trendiness

Refine

Your writ­ing as­sis­tant that never leaves your Mac. Powered by lo­cal AI mod­els with zero data col­lec­tion.

Your writing stays on your Mac. Your documents never leave your Mac. No servers, no tracking, no privacy concerns. Works offline, on flights, in coffee shops, anywhere you write.

Seamlessly in­te­grates with all your fa­vorite Mac ap­pli­ca­tions. No setup re­quired — just start writ­ing and get gram­mar sug­ges­tions. And many more ap­pli­ca­tions across ma­cOS

Own it forever. No subscriptions, no hidden fees. Pay once, own it forever. Purchase a License. Try it before you buy! Download the free trial version to see if it fits your needs.

Can't find the answer you're looking for? Feel free to send us an email at support@refine.sh

Is my data truly private? Yes, absolutely. Your documents, text, and writing never leave your Mac. We don't collect, store, or transmit any of your personal content. All processing happens locally using offline large language models (LLMs) that run directly on your machine.

What apps does it work with? Works with most macOS apps including Mail, Messages, Safari, Chrome, Pages, Word, Slack, Notion, and many more. Requires macOS 14.0 or later. Works with both Apple Silicon (M1, M2, etc.) and Intel-based Macs.

We offer a 7-day free trial with full access to all features. No credit card required. Just download the app and start using it.

Is there an educational discount? Yes! We offer a 50% discount for students and educators. Just write us an email with your current student/teacher/institution email.

Ready to pro­tect your pri­vacy while im­prov­ing your writ­ing?

...

Read the original on refine.sh »

7 339 shares, 27 trendiness

Happy 20th birthday Django!

On July 13th 2005, Jacob Kaplan-Moss made the first com­mit to the pub­lic repos­i­tory that would be­come Django. Twenty years and 400+ re­leases later, here we are – Happy 20th birth­day Django! 🎉

We want to share this spe­cial oc­ca­sion with you all! Our new 20-years of Django web­site show­cases all on­line and lo­cal events hap­pen­ing around the world, through all of 2025. As well as other op­por­tu­ni­ties to cel­e­brate!

* A special quiz or two? See who knows all about Django trivia

As a birth­day gift of sorts, con­sider whether you or your em­ployer can sup­port the pro­ject via do­na­tions to our non-profit Django Software Foundation. For this spe­cial event, we want to set a spe­cial goal!

Over the next 20 days, we want to see 200 new donors, sup­port­ing Django with $20 or more, with at least 20 monthly donors. Help us mak­ing this hap­pen:

Once you’ve done it, post with #DjangoBirthday and tag us on Mastodon / on Bluesky / on X / on LinkedIn so we can say thank you!

20 years is a long time in open source — and we want to keep Django thriving for many more, so it keeps on being the web framework for perfectionists with deadlines as the industry evolves. We don't know how the web will change in that time, but from Django, you can expect:

* Many new re­leases, each with years of sup­port

* Thousands more pack­ages in our thriv­ing ecosys­tem

* An in­clu­sive and sup­port­ive com­mu­nity with hun­dreds of thou­sands of de­vel­op­ers

...

Read the original on www.djangoproject.com »

8 335 shares, 60 trendiness

Data Brokers are Selling Your Flight Information to CBP and ICE

For many years, data bro­kers have ex­isted in the shad­ows, ex­ploit­ing gaps in pri­vacy laws to har­vest our in­for­ma­tion—all for their own profit. They sell our pre­cise move­ments with­out our knowl­edge or mean­ing­ful con­sent to a va­ri­ety of pri­vate and state ac­tors, in­clud­ing law en­force­ment agen­cies. And they show no sign of stop­ping.

This in­cen­tivizes other bad ac­tors. If com­pa­nies col­lect any kind of per­sonal data and want to make a quick buck, there’s a data bro­ker will­ing to buy it and sell it to the high­est bid­der–of­ten law en­force­ment and in­tel­li­gence agen­cies.

One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers' domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers' names, full flight itineraries, and financial details, the data broker prevented U.S. border forces from revealing it as the origin of the information. So, not only is the government doing an end run around the Fourth Amendment to get information where they would otherwise need a warrant; they've also been trying to hide how they know these things about us.

ARC's Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers.

More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC's board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada.

In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people's lives.

Movement un­re­stricted by gov­ern­ments is a hall­mark of a free so­ci­ety. In our cur­rent mo­ment, when the fed­eral gov­ern­ment is threat­en­ing le­gal con­se­quences based on peo­ple’s na­tional, re­li­gious, and po­lit­i­cal af­fil­i­a­tions, hav­ing air travel in and out of the United States tracked by any ARC cus­tomer is a recipe for state ret­ri­bu­tion.

Sadly, data bro­kers are do­ing even broader harm to our pri­vacy. Sensitive lo­ca­tion data is har­vested from smart­phones and sold to cops, in­ter­net back­bone data is sold to fed­eral coun­ter­in­tel­li­gence agen­cies, and util­ity data­bases con­tain­ing phone, wa­ter, and elec­tric­ity records are shared with ICE of­fi­cers.

At a time when immigration authorities are eroding fundamental freedoms through increased—and arbitrary—actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data—all without our knowledge or consent.

The new revelations about ARC's data sales to CBP and ICE are a fresh reminder of the need for "privacy first" legislation that imposes consent and minimization limits on corporate processing of our data. We also need to pass the "Fourth Amendment is not for sale" act to stop police from bypassing judicial review of their data seizures by means of purchasing data from brokers. And let's enforce data broker registration laws.

...

Read the original on www.eff.org »

9 334 shares, 25 trendiness

How I build software quickly

Software is built un­der time and qual­ity con­straints. We want to write good code and have it done quickly.

If you go too fast, your work is buggy and hard to main­tain. If you go too slowly, noth­ing gets shipped. I have not mas­tered this ten­sion, but I’ll share a few lessons I’ve learned.

This post fo­cuses on be­ing a de­vel­oper on a small team, main­tain­ing soft­ware over mul­ti­ple years. It does­n’t fo­cus on cre­at­ing quick pro­to­types. And this is only based on my own ex­pe­ri­ence!

Early in my ca­reer, I wanted all my code to be per­fect: every func­tion well-tested, every iden­ti­fier el­e­gantly named, every ab­strac­tion eas­ily un­der­stood. And ab­solutely no bugs!

But I learned a lesson that now seems obvious in hindsight: there isn't one "right way" to build software.

For ex­am­ple, if you’re mak­ing a game for a 24-hour game jam, you prob­a­bly don’t want to pri­or­i­tize clean code. That would be a waste of time! Who re­ally cares if your code is el­e­gant and bug-free?

On the other hand, if you’re build­ing a pace­maker de­vice, a mis­take could re­ally hurt some­one. Your work should be much bet­ter! I would­n’t want to risk my life with some­one’s spaghetti code!

Most of my work has been somewhere in the middle. Some employers have aggressive deadlines where some bugs are acceptable, while other projects demand a higher quality bar with more relaxed schedules. Sussing this out has helped me determine where to invest my time. What is my team's idea of "good enough"? What bugs are acceptable, if any? Where can I do a less-than-perfect job if it means getting things done sooner?

In gen­eral, my per­sonal rule of thumb is to aim for an 8 out of 10 score, de­liv­ered on time. The code is good and does its job. It has mi­nor is­sues but noth­ing ma­jor. And it’s done on time! (To be clear, I aim for this. I don’t al­ways hit it!) But again, it de­pends on the pro­ject—some­times I want a per­fect score even if it’s de­layed, and other times I write buggy code that’s fin­ished hastily.

Software, like writing, can benefit from a rough draft. This is sometimes called a "spike" or a "walking skeleton".

I like im­ple­ment­ing a rough draft as quickly as I can. Later, I shape it into the fi­nal so­lu­tion.

My rough draft code is em­bar­rass­ing. Here are some qual­i­ties of my typ­i­cal spikes:

* Error cases are not han­dled. (I re­cently had a branch where an er­ror mes­sage was logged 20 times per sec­ond.)

* Commit messages are just three letters: WIP, short for "work in progress".

* 3 pack­ages were added and none of them are used any­more.

This sounds pretty bad, but it has one im­por­tant qual­ity: it vaguely re­sem­bles a good so­lu­tion.

As you might imag­ine, I fix these mis­takes be­fore the fi­nal patch! (Some teams might pres­sure me to ship this messy code, which I try to re­sist. I don’t want the rough draft to be treated like a fi­nal draft!)

This "rough draft" approach has a few advantages:

* It can reveal "unknown unknowns". Often, prototypes uncover things I couldn't have anticipated. It's generally good to discover those ASAP, not after I've perfected some code that ultimately gets discarded.

* Lots of these prob­lems dis­ap­pear over the course of the rough draft and I never have to fix them. For ex­am­ple, I write a func­tion that’s too slow but works well enough for a pro­to­type. Later, I re­al­ize I did­n’t need that func­tion at all. Good thing I did­n’t waste time speed­ing it up! (I can’t tell you how many func­tions I’ve fully unit tested and then deleted. What a waste of time!)

* It helps me fo­cus. I’m not fix­ing a prob­lem in an­other part of the code­base or wor­ry­ing about the per­fect func­tion name. I’m speedrun­ning this rough draft to un­der­stand the prob­lem bet­ter.

* It helps me avoid pre­ma­ture ab­strac­tions. If I’m rush­ing to get some­thing ugly work­ing, I’m less likely to try to build some byzan­tine ab­strac­tion. I build what I need for the spe­cific prob­lem, not what I think I might need for fu­ture prob­lems that may never come.

* It be­comes eas­ier to com­mu­ni­cate progress to oth­ers in two ways: first, I can usu­ally give a more ac­cu­rate es­ti­mate of when I’ll be done be­cause I know ap­prox­i­mately what’s left. Second, I can demo some­thing, which helps stake­hold­ers un­der­stand what I’m build­ing and pro­vide bet­ter feed­back. This feed­back might change the di­rec­tion of the work, which is bet­ter to know sooner.

Here are some con­crete things I do when build­ing rough drafts:

* Focus on bind­ing de­ci­sions. Some choices, like the se­lec­tion of pro­gram­ming lan­guage or data­base schema de­sign, can be hard to change later. A rough draft is a good time for me to ex­plore these, and make sure I’m not box­ing my­self into a choice that I’ll re­gret in a year.

* Keep track of hacks. Every time I cut a cor­ner, I add a TODO com­ment or equiv­a­lent. Later, when it’s time for pol­ish, I run git grep TODO to see every­thing that needs at­ten­tion.

* Build "top to bottom". For example, in an application, I prefer to scaffold the UI before the business logic, even if lots of stuff is hard-coded. I've sometimes written business logic first, which I later discarded once the UI came into play, because I miscalculated how it would be used. Build the top layer first—the "dream code" I want to write or the API I wish existed—rather than trying to build the "bottom" layer first. It's easier to make the right API decisions when I start with how it will be used. It can also be easier to gather feedback on.

* Extract smaller changes while work­ing. Sometimes, dur­ing a rough draft, I re­al­ize that some im­prove­ment needs to be made else­where in the code. Maybe there’s a de­pen­dency that needs up­dat­ing. Before fin­ish­ing the fi­nal draft, make a sep­a­rate patch to just up­date that de­pen­dency. This is use­ful on its own and will ben­e­fit the up­com­ing change. I can push it for code re­view sep­a­rately, and hope­fully, it’ll be merged by the time I fin­ish my fi­nal draft.

See also: "Throw away your first draft of your code" and "Best Simple System for Now". YAGNI is also somewhat related to this topic.

Generally, do­ing less is faster and eas­ier! Depending on the task, you may be able to soften the re­quire­ments.

Some ex­am­ple ques­tions to ask:

* Could I com­bine mul­ti­ple screens into one?

* Is it okay if we don’t han­dle a par­tic­u­larly tricky edge case?

* Instead of an API sup­port­ing 1000 in­puts, what if it just sup­ported 10?

* Is it okay to build a pro­to­type in­stead of a full ver­sion?

* What if we did­n’t do this at all?

More gen­er­ally, I some­times try to nudge the cul­ture of the or­ga­ni­za­tion to­wards a slower pace. This is a big topic, and I’m no ex­pert on or­ga­ni­za­tional change! But I’ve found that mak­ing big de­mands rarely works; I’ve had bet­ter luck with small, grad­ual sug­ges­tions that slowly shift dis­cus­sions. I don’t know much about union­iz­ing, but I won­der if it could help here too.

The mod­ern world is full of dis­trac­tions: no­ti­fi­ca­tions from your phone, mes­sages from col­leagues, and dreaded meet­ings. I don’t have smart an­swers for han­dling these.

But there's another kind of distraction: I start wandering through the code. I begin working on one thing, and two hours later, I'm changing something completely unrelated. Maybe I'm theoretically being productive and improving the codebase, but that bug I was assigned isn't getting fixed! I'm "lost in the sauce"!

I’ve found two con­crete ways to man­age this:

* Set a timer. When I start work­ing on a dis­crete task, I of­ten set a timer. Maybe I think this func­tion is go­ing to take me 15 min­utes to write. Maybe I think it’ll take me 1 hour to un­der­stand the source of this bug. My es­ti­mates are fre­quently wrong, but when the timer goes off, I’m of­ten jolted out of some silly dis­trac­tion. And there’s noth­ing as sat­is­fy­ing as run­ning git com­mit right as my timer goes off—a per­fect es­ti­ma­tion. (This also helps me prac­tice the im­pos­si­ble art of time es­ti­ma­tion, though I’m still not great at it.)

* Pair pro­gram­ming helps keep me fo­cused. Another soul is less likely to let me waste their time with some rab­bit hole.

Some pro­gram­mers nat­u­rally avoid this kind of dis­trac­tion, but not me! Discipline and de­lib­er­ate ac­tion help me fo­cus.

The worst boss I ever had en­cour­aged us to make large patches. These changes were wide in scope, usu­ally touch­ing mul­ti­ple parts of the code at once. From my ex­pe­ri­ence, this was ter­ri­ble ad­vice.

Small, fo­cused diffs have al­most al­ways served me bet­ter. They have sev­eral ad­van­tages:

* They are usu­ally eas­ier to write, be­cause there’s less to keep in your head.

* They are usu­ally eas­ier to re­view. This light­ens team­mates’ cog­ni­tive load, makes my mis­takes eas­ier to spot, and usu­ally means my code is merged sooner.

* They are usu­ally eas­ier to re­vert if some­thing goes wrong.

* They re­duce the risk of in­tro­duc­ing new bugs since you’re chang­ing less at once.

I also like to make smaller changes that build up to a larger one. For ex­am­ple, if I’m adding a screen that re­quires fix­ing a bug and up­grad­ing a de­pen­dency, that could be three sep­a­rate patches: one to fix the bug, one to up­grade the de­pen­dency, and one to add the screen.

Small changes usu­ally help me build soft­ware more quickly and with higher qual­ity.

Most of the above is fairly high-level. Several more spe­cific skills have come in handy, es­pe­cially when try­ing to build soft­ware quickly:

* Reading code is, by far, the most im­por­tant skill I’ve ac­quired as a pro­gram­mer. I’ve had to work on this a lot! It helps in so many ways: de­bug­ging is eas­ier be­cause I can see how some func­tion works, bugs and poor doc­u­men­ta­tion in third-party de­pen­den­cies are less scary, it’s a huge source of learn­ing, and so much more.

* Data mod­el­ing is usu­ally im­por­tant to get right, even if it takes a lit­tle longer. Making in­valid states un­rep­re­sentable can pre­vent whole classes of bugs. Getting a data­base schema wrong can cause all sorts of headaches later. I think it’s worth spend­ing time to de­sign your data mod­els care­fully, es­pe­cially when they’re per­sisted or ex­changed.

* Scripting. Being able to com­fort­ably write quick Bash or Python scripts has sped me up. I write a few scripts a week for var­i­ous tasks, such as sort­ing Markdown lists, clean­ing up some data, or find­ing du­pli­cate files. I highly rec­om­mend Shellcheck for Bash as it catches many com­mon mis­takes. LLMs tend to be good at these scripts, es­pe­cially if they don’t need to be ro­bust.

* Debuggers have saved me lots of time. There’s no sub­sti­tute for a proper de­bug­ger. It makes it much eas­ier to un­der­stand what’s go­ing on (whether there’s a bug or not!), and quickly be­comes faster than print()-based de­bug­ging.

* Knowing when to take a break. If I’m stuck on a prob­lem with­out mak­ing progress, I should prob­a­bly take a break. This has hap­pened to me many times: I strug­gle with a prob­lem for hours, step away for a few min­utes, come back, and solve it in 5 min­utes.

* Prefer pure func­tions and im­mutable data. The func­tional pro­gram­ming style elim­i­nates many bugs and re­duces men­tal over­head. It’s of­ten eas­ier than de­sign­ing com­plex class hi­er­ar­chies. Not al­ways prac­ti­cal, but it’s my de­fault choice.

* LLMs, despite their issues, can accelerate some parts of the development process. It's taken me a while to understand their strengths and weaknesses, but I use them in my day-to-day programming. Lots of ink has been spilled on the topic of LLM-assisted software development and I don't have much to add. I like the "vibecoding" tag on Lobsters, but there are lots of other places to read.

All of these are skills I’ve prac­ticed a bunch, and I feel the in­vest­ment has made me a faster de­vel­oper.

* Know how good your code needs to be for the task at hand.

* Try to soften re­quire­ments if you can.

Everything in this list seems ob­vi­ous in hind­sight, but these are lessons that took me a long time to learn.

I’m cu­ri­ous to what you’ve dis­cov­ered on this topic. Are there more tricks to know, or prac­tices of mine you dis­agree with? Contact me any time. I’d love to learn from you!

Thanks to the anony­mous re­view­ers who pro­vided feed­back on drafts of this post, and to tcard on Lobsters for a com­ment I in­cor­po­rated.

If this post left you thinking, "I want to work with this person! They build software quickly!"…well, you can. I'm looking for work, ideally at a non-profit, co-op, or social good venture. See my list of projects or LinkedIn. If you like what you see, contact me.

...

Read the original on evanhahn.com »

10 303 shares, 49 trendiness

AI slows down open source developers. Peter Naur can teach us why.

Metr re­cently pub­lished a pa­per about the im­pact AI tools have on open-source de­vel­oper pro­duc­tiv­i­ty1. They show that when open source de­vel­op­ers work­ing in code­bases that they are deeply fa­mil­iar with use AI tools to com­plete a task, then they take longer to com­plete that task com­pared to other tasks where they are barred from us­ing AI tools. Interestingly the de­vel­op­ers pre­dict that AI will make them faster, and con­tinue to be­lieve that it did make them faster, even af­ter com­plet­ing the task slower than they oth­er­wise would!

When de­vel­op­ers are al­lowed to use AI tools, they take 19% longer to com­plete is­sues—a sig­nif­i­cant slow­down that goes against de­vel­oper be­liefs and ex­pert fore­casts. This gap be­tween per­cep­tion and re­al­ity is strik­ing: de­vel­op­ers ex­pected AI to speed them up by 24%, and even af­ter ex­pe­ri­enc­ing the slow­down, they still be­lieved AI had sped them up by 20%.

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

We can't generalise these results to all software developers. The developers in this study are a very particular sort of developer, working on very particular projects. They are experienced open source developers, working on their own projects. This study tells us that the current suite of AI tools appears to slow such developers down - but it doesn't mean that we can assume the same applies to other developers. For example, we might expect that corporate drones working on next.js apps that were mostly built by other people who've long since left the company (me) would see huge productivity improvements!

One thing we can also do, is the­o­rise about why these par­tic­u­lar open source de­vel­op­ers were slowed down by tools that promise to speed them up.

I'm going to focus in particular on why they were slowed down, not the gap between perceived and real performance. The inability of developers to tell if a tool sped them up or slowed them down is fascinating in itself, probably applies to many other forms of human endeavour, and explains things as varied as why so many people think that AI has made them 10 times more productive, why I continue to use Vim, why people drive in London, etc. I just don't have any particular thoughts about why this gap arises. I do, however, have an opinion about why they were slowed down.

A while ago I wrote, some­what tan­gen­tially, about an old pa­per by Peter Naur called pro­gram­ming as the­ory build­ing. That pa­per states

pro­gram­ming prop­erly should be re­garded as an ac­tiv­ity by which the pro­gram­mers form or achieve a cer­tain kind of in­sight, a the­ory, of the mat­ters at hand

That is to say that the real prod­uct when we write soft­ware is our men­tal model of the pro­gram we’ve cre­ated. This model is what al­lowed us to build the soft­ware, and in fu­ture is what al­lows us to un­der­stand the sys­tem, di­ag­nose prob­lems within it, and work on it ef­fec­tively. If you agree with this the­ory, which I do, then it ex­plains things like why every­one hates legacy code, why small teams can out­per­form larger ones, why out­sourc­ing gen­er­ally goes badly, etc.

We know that the pro­gram­mers in Metr’s study are all peo­ple with ex­tremely well de­vel­oped men­tal mod­els of the pro­jects they work on. And we also know that the LLMs they used had no real ac­cess to those men­tal mod­els. The de­vel­op­ers could pro­vide chunks of that men­tal model to their AI tools - but do­ing so is a slow and lossy process that will never truly cap­ture the the­ory of the pro­gram that ex­ists in their minds. By of­fload­ing their soft­ware de­vel­op­ment work to an LLM they ham­pered their unique abil­ity to work on their code­bases ef­fec­tively.

Think of a time that you've tried to delegate a simple task to someone else, say putting a baby to bed. You can write down what you think are unambiguous instructions - "give the baby milk, put it to bed, if it cries do not respond" - but you will find that nine times out of ten, when you get home the person following those instructions will do the exact opposite of what you intended. Maybe they'll have gotten the crying baby out of bed and taken it on a walk to see some frogs.

The men­tal mod­els with which we un­der­stand the world are in­cred­i­bly rich, to the ex­tent that even the sim­plest of them take an in­cred­i­ble amount of ef­fort to trans­fer to an­other per­son. What’s more that trans­fer can never be to­tally suc­cess­ful, and it’s very hard to de­ter­mine how suc­cess­ful the trans­fer has been, un­til we run into prob­lems caused by a lack of shared un­der­stand­ing. These prob­lems are what al­low us to no­tice a mis­match, and mu­tu­ally adapt our men­tal mod­els to per­form bet­ter in fu­ture. When you are lim­ited to trans­fer­ring a men­tal model through text, to an en­tity that will never chal­lenge or ask clar­i­fy­ing ques­tions, which can’t re­ally learn, and which can­not treat one state­ment as more im­por­tant than any other - well the task be­comes es­sen­tially im­pos­si­ble.

This is why AI cod­ing tools, as they ex­ist to­day, will gen­er­ally slow some­one down if they know what they are do­ing, and are work­ing on a pro­ject that they un­der­stand.

Well, maybe not. In the previous paragraph I wrote that AI tools will slow down "someone who knows what they are doing, and who is working on a project they understand" - does this describe the average software developer in industry? I doubt it. Does it describe software developers in your workplace?

It’s com­mon for en­gi­neers to end up work­ing on pro­jects which they don’t have an ac­cu­rate men­tal model of. Projects built by peo­ple who have long since left the com­pany for pas­tures new. It’s equally com­mon for de­vel­op­ers to work in en­vi­ron­ments where lit­tle value is placed on un­der­stand­ing sys­tems, but a lot of value is placed on quickly de­liv­er­ing changes that mostly work. In this con­text, I think that AI tools have more of an ad­van­tage. They can in­gest the un­fa­mil­iar code­base faster than any hu­man can, and can of­ten gen­er­ate changes that will es­sen­tially work.

So if we take this nar­row and short termed view of pro­duc­tiv­ity and say that it is sim­ply time to pro­duce busi­ness value - then yes I think that an LLM can make de­vel­op­ers more pro­duc­tive. I can’t prove it - not hav­ing any data - but I’d love if some­one did do this study. If there are no tak­ers then I might try ex­per­i­ment­ing on my­self.

But there is a prob­lem with us­ing AI tools in this con­text.

Okay, so if you don’t have a men­tal model of a pro­gram, then maybe an LLM could im­prove your pro­duc­tiv­ity. However, we agreed ear­lier that the main pur­pose of writ­ing soft­ware is to build a men­tal model. If we out­source our work to the LLM are we still able to ef­fec­tively build the men­tal model? I doubt it2.

So should you avoid us­ing these tools? Maybe. If you ex­pect to work on a pro­ject long term, want to truly un­der­stand it, and wish to be em­pow­ered to make changes ef­fec­tively then I think you should just write some code your­self3. If on the other hand you are just slop­ping out slop at the slop fac­tory, then in­stall cur­sor4 and crack on - yolo.

1 It’s a re­ally fab­u­lous study, and I strongly sug­gest read­ing at least the sum­mary.

2 One of the commonly suggested uses of Claude Code et al is that you can use them to quickly onboard into new projects by asking questions about that project. Does that help us build a mental model? Maybe yes! Does generating code 10 times faster than a normal developer lead to a strong mental model of the system that is being created? Almost certainly not.

3 None of this is to say that there could­n’t be AI tools which mean­ing­fully speed up de­vel­op­ers with a men­tal model of their pro­jects, or which help them build those men­tal mod­els. But the cur­rent suite of tools that ex­ist don’t seem to be head­ing in that di­rec­tion. It’s pos­si­ble that if mod­els im­prove then we might get to a point that there’s no need for any hu­man to ever hold a men­tal model of a soft­ware ar­ti­fact. But we’re cer­tainly not there yet.

4 Don’t in­stall cur­sor, it sucks. Use Claude Code like an adult.

...

Read the original on johnwhiles.com »
