10 interesting stories served every morning and every evening.




1 758 shares, 30 trendiness

Border Patrol is monitoring US drivers and detaining those with 'suspicious' travel patterns


The U.S. Border Patrol is monitoring millions of American drivers nationwide in a secretive program to identify and detain people whose travel patterns it deems suspicious, The Associated Press has found.

The predictive intelligence program has resulted in people being stopped, searched and in some cases arrested. A network of cameras scans and records vehicle license plate information, and an algorithm flags vehicles deemed suspicious based on where they came from, where they were going and which route they took. Federal agents may then tip off local law enforcement.

Suddenly, drivers find themselves pulled over — often for cited reasons such as speeding, failure to signal, the wrong window tint or even a dangling air freshener blocking the view. They are then aggressively questioned and searched, with no inkling that the roads they drove put them on law enforcement's radar.

Once lim­ited to polic­ing the na­tion’s bound­aries, the Border Patrol has built a sur­veil­lance sys­tem stretch­ing into the coun­try’s in­te­rior that can mon­i­tor or­di­nary Americans’ daily ac­tions and con­nec­tions for anom­alies in­stead of sim­ply tar­get­ing wanted sus­pects. Started about a decade ago to fight il­le­gal bor­der-re­lated ac­tiv­i­ties and the traf­fick­ing of both drugs and peo­ple, it has ex­panded over the past five years.

The Border Patrol has re­cently grown even more pow­er­ful through col­lab­o­ra­tions with other agen­cies, draw­ing in­for­ma­tion from li­cense plate read­ers na­tion­wide run by the Drug Enforcement Administration, pri­vate com­pa­nies and, in­creas­ingly, lo­cal law en­force­ment pro­grams funded through fed­eral grants. Texas law en­force­ment agen­cies have asked Border Patrol to use fa­cial recog­ni­tion to iden­tify dri­vers, doc­u­ments show.

This active role beyond the borders is part of the quiet transformation of its parent agency, U.S. Customs and Border Protection, into something more akin to a domestic intelligence operation. Under the Trump administration's heightened immigration enforcement efforts, CBP is now poised to get more than $2.7 billion to build out border surveillance systems such as the license plate reader program by layering in artificial intelligence and other emerging technologies.

The re­sult is a mass sur­veil­lance net­work with a par­tic­u­larly American fo­cus: cars.

This in­ves­ti­ga­tion, the first to re­veal de­tails of how the pro­gram works on America’s roads, is based on in­ter­views with eight for­mer gov­ern­ment of­fi­cials with di­rect knowl­edge of the pro­gram who spoke on the con­di­tion of anonymity be­cause they weren’t au­tho­rized to speak to the me­dia, as well as dozens of fed­eral, state and lo­cal of­fi­cials, at­tor­neys and pri­vacy ex­perts. The AP also re­viewed thou­sands of pages of court and gov­ern­ment doc­u­ments, state grant and law en­force­ment data, and ar­rest re­ports.

The Border Patrol has for years hid­den de­tails of its li­cense plate reader pro­gram, try­ing to keep any men­tion of the pro­gram out of court doc­u­ments and po­lice re­ports, for­mer of­fi­cials say, even go­ing so far as to pro­pose drop­ping charges rather than risk re­veal­ing any de­tails about the place­ment and use of their covert li­cense plate read­ers. Readers are of­ten dis­guised along high­ways in traf­fic safety equip­ment like drums and bar­rels.

The Border Patrol has defined its own criteria for which drivers' behavior should be deemed suspicious or tied to drug or human trafficking, stopping people for anything from driving on backcountry roads to being in a rental car to making short trips to the border region. The agency's network of cameras now extends along the southern border in Texas, Arizona and California, and also monitors drivers traveling near the U.S.-Canada border.

And it reaches far into the in­te­rior, im­pact­ing res­i­dents of big met­ro­pol­i­tan ar­eas and peo­ple dri­ving to and from large cities such as Chicago and Detroit, as well as from Los Angeles, San Antonio, and Houston to and from the Mexican bor­der re­gion. In one ex­am­ple, AP found the agency has placed at least four cam­eras in the greater Phoenix area over the years, one of which was more than 120 miles (193 kilo­me­ters) from the Mexican fron­tier, be­yond the agen­cy’s usual ju­ris­dic­tion of 100 miles (161 kilo­me­ters) from a land or sea bor­der. The AP also iden­ti­fied sev­eral cam­era lo­ca­tions in met­ro­pol­i­tan Detroit, as well as one placed near the Michigan-Indiana bor­der to cap­ture traf­fic headed to­wards Chicago or Gary, Indiana, or other nearby des­ti­na­tions.

Border Patrol's parent agency, U.S. Customs and Border Protection, said it uses license plate readers to help identify threats and disrupt criminal networks and is governed by "a stringent, multi-layered policy framework, as well as federal law and constitutional protections, to ensure the technology is applied responsibly and for clearly defined security purposes."

"For national security reasons, we do not detail the specific operational applications," the agency said. "While the U.S. Border Patrol primarily operates within 100 miles of the border, it is legally allowed to operate anywhere in the United States," the agency added.

While collecting license plates from cars on public roads has generally been upheld by courts, some legal scholars see the growth of large digital surveillance networks such as Border Patrol's as raising constitutional questions. Courts have started to recognize that large-scale surveillance technology that's "capturing everyone and everywhere at every time" might be unconstitutional under the Fourth Amendment, which protects people from unreasonable searches, said Andrew Ferguson, a law professor at George Washington University.

Today, predictive surveillance is embedded into America's roadways. Mass surveillance techniques are also used in a range of other countries, from authoritarian governments such as China to, increasingly, democracies in the U.K. and Europe, in the name of national security and public safety.

"They are collecting mass amounts of information about who people are, where they go, what they do, and who they know … engaging in dragnet surveillance of Americans on the streets, on the highways, in their cities, in their communities," Nicole Ozer, the executive director of the Center for Constitutional Democracy at UC Law San Francisco, said in response to the AP's findings. "These surveillance systems do not make communities safer."

In February, Lorenzo Gutierrez Lugo, a dri­ver for a small truck­ing com­pany that spe­cial­izes in trans­port­ing fur­ni­ture, cloth­ing and other be­long­ings to fam­i­lies in Mexico, was dri­ving south to the bor­der city of Brownsville, Texas, car­ry­ing pack­ages from im­mi­grant com­mu­ni­ties in South Carolina’s low coun­try.

Gutierrez Lugo was pulled over by a lo­cal po­lice of­fi­cer in Kingsville, a small Texas city near Corpus Christi that lies about 100 miles from the Mexican bor­der. The of­fi­cer, Richard Beltran, cited the truck’s speed of 50 mph (80 kph) in a 45 mph (72 kph) zone as the rea­son for the stop.

But speeding was a pretext: Border Patrol had requested the stop and said the black Dodge pickup with a white trailer could contain contraband, according to police and court records. U.S. Route 77 passes through Kingsville, a route that state and federal authorities scrutinize for trafficking of drugs, money and people.

Gutierrez Lugo, who through a lawyer de­clined to com­ment, was in­ter­ro­gated about the route he drove, based on li­cense plate reader data, per the po­lice re­port and court records. He con­sented to a search of his car by Beltran and Border Patrol agents, who even­tu­ally ar­rived to as­sist.

They un­earthed no con­tra­band. But Beltran ar­rested Gutierrez Lugo on sus­pi­cion of money laun­der­ing and en­gag­ing in or­ga­nized crim­i­nal ac­tiv­ity be­cause he was car­ry­ing thou­sands of dol­lars in cash — money his su­per­vi­sor said came di­rectly from cus­tomers in lo­cal Latino com­mu­ni­ties, who are ac­cus­tomed to pay­ing in cash. No crim­i­nal charges were ul­ti­mately brought against Gutierrez Lugo and an ef­fort by pros­e­cu­tors to seize the cash, ve­hi­cle and trailer as con­tra­band was even­tu­ally dropped.

Luis Barrios owns the truck­ing com­pany, Paquetería El Guero, that em­ployed the dri­ver. He told AP he hires peo­ple with work au­tho­riza­tion in the United States and was taken aback by the treat­ment of his em­ployee and his trailer.

"We did everything right and had nothing to hide, and that was ultimately what they found," said Barrios, who estimates he spent $20,000 in legal fees to clear his driver's name and get the trailer out of impound.

Border Patrol agents and local police have many names for these kinds of stops: "whisper," "intel" or "wall" stops. Those stops are meant to conceal — or wall off — the true reason for the stop: a tip from federal agents sitting miles away, watching data feeds showing who's traveling on America's roads and predicting who is "suspicious," according to documents and people interviewed by the AP.

In 2022, a man from Houston had his car searched from top to bot­tom by Texas sher­if­f’s deputies out­side San Antonio af­ter they got a sim­i­lar tipoff from Border Patrol agents about the dri­ver, Alek Schott.

Federal agents observed that Schott had made an overnight trip from Houston to Carrizo Springs, Texas, and back, court records show. They knew he stayed overnight in a hotel about 80 miles (129 kilometers) from the U.S.-Mexico border. They knew that in the morning Schott met a female colleague there before they drove together to a business meeting.

At Border Patrol’s re­quest, Schott was pulled over by Bexar County sher­if­f’s deputies. The deputies held Schott by the side of the road for more than an hour, searched his car and found noth­ing.

"The beautiful thing about the Texas Traffic Code is there's thousands of things you can stop a vehicle for," said Joel Babb, the sheriff's deputy who stopped Schott's car, in a deposition in a lawsuit Schott filed alleging violations of his constitutional rights.

According to tes­ti­mony and doc­u­ments re­leased as part of Schott’s law­suit, Babb was on a group chat with fed­eral agents called Northwest Highway. Babb deleted the WhatsApp chat off his phone but Schott’s lawyers were able to re­cover some of the text mes­sages.

Through a pub­lic records act re­quest, the AP also ob­tained more than 70 pages of the Northwest Highway group chats from June and July of this year from a Texas county that had at least one sher­if­f’s deputy ac­tive in the chat. The AP was able to as­so­ci­ate nu­mer­ous phone num­bers in both sets of doc­u­ments with Border Patrol agents and Texas law en­force­ment of­fi­cials.

The chat logs show Border Patrol agents and Texas sheriff's deputies trading tips about vehicles' travel patterns — based on suspicions about little more than someone taking a quick trip to the border region and back. The chats show how thoroughly Texas highways are surveilled by this federal-local partnership and how much detailed information is informally shared.

In one exchange a law enforcement official included a photo of someone's driver's license and told the group the person, who they identified using an abbreviation for someone in the country illegally, was headed westbound. "Need BP?," responded a group member whose number was labeled "bp Intel." "Yes sir," the official answered, and a Border Patrol agent was en route.

Border Patrol agents and local law enforcement shared information about U.S. citizens' social media profiles and home addresses with each other after stopping them on the road. Chats show Border Patrol was also able to determine whether vehicles were rentals and whether drivers worked for rideshare services.

In Schott's case, Babb testified that federal agents "actually watch travel patterns on the highway" through license plate scans and other surveillance technologies. He added: "I just know that they have a lot of toys over there on the federal side."

After finding nothing in Schott's car, Babb said "nine times out of 10, this is what happens," a phrase Schott's lawyers claimed in court filings shows the sheriff's department finds nothing suspicious in most of its searches. Babb did not respond to multiple requests for comment from AP.

The Bexar County sher­if­f’s of­fice de­clined to com­ment due to pend­ing lit­i­ga­tion and re­ferred all ques­tions about the Schott case to the coun­ty’s dis­trict at­tor­ney. The dis­trict at­tor­ney did not re­spond to a re­quest for com­ment.

The case is pending in federal court in Texas. Schott said in an interview with the AP: "I didn't know it was illegal to drive in Texas."

Today, the deserts, forests and mountains of the nation's land borders are dotted with checkpoints and, increasingly, surveillance towers, Predator drones, thermal cameras and license plate readers, both covert and overt.

Border Patrol's parent agency got authorization to run a domestic license plate reader program in 2017, according to a Department of Homeland Security policy document. At the time, the agency said that it might use hidden license plate readers "for a set period of time while CBP is conducting an investigation of an area of interest or smuggling route. Once the investigation is complete, or the illicit activity has stopped in that area, the covert cameras are removed," the document states.

But that’s not how the pro­gram has op­er­ated in prac­tice, ac­cord­ing to in­ter­views, po­lice re­ports and court doc­u­ments. License plate read­ers have be­come a ma­jor — and in some places per­ma­nent — fix­ture of the bor­der re­gion.

In a budget request to Congress in fiscal year 2024, CBP said that its Conveyance Monitoring and Predictive Recognition System, or CMPRS, "collects license plate images and matches the processed images against established hot lists to assist … in identifying travel patterns indicative of illegal border related activities." Several new developer jobs have been posted in recent months seeking applicants to help modernize its license plate surveillance system. Numerous Border Patrol sectors now have special intelligence units that can analyze license plate reader data and tie commercial license plate readers into its national network, according to documents and interviews.

Border Patrol worked with other law enforcement agencies in Southern California about a decade ago to develop pattern recognition, said a former CBP official who spoke on the condition of anonymity for fear of reprisal. Over time, the agency learned to develop what it calls "patterns of life" of vehicle movements by sifting through the license plate data and determining "abnormal" routes, evaluating if drivers were purposely avoiding official checkpoints. Some cameras can take photos of a vehicle's plates as well as its driver's face, the official said.

Another for­mer Border Patrol of­fi­cial com­pared it to a more tech­no­log­i­cally so­phis­ti­cated ver­sion of what agents used to do in the field — de­velop hunches based on ex­pe­ri­ence about which ve­hi­cles or routes smug­glers might use, find a le­gal ba­sis for the stop like speed­ing and pull dri­vers over for ques­tion­ing.

The cameras take pictures of vehicle license plates. Then, the photos are "read" by the system, which automatically detects and distills the images into numbers and letters, tied to a geographic location, former CBP officials said. The AP could not determine how precisely the system's algorithm defines a quick turnaround or an odd route. Over time, the agency has amassed databases replete with images of license plates, and the system's algorithm can flag an unusual "pattern of life" for human inspection.
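Because the agency's actual thresholds are not public, the general technique officials describe — hot-list matching plus a "quick turnaround" heuristic over geotagged plate reads — can only be sketched in toy form. In this purely illustrative sketch, every field name, camera label, plate and threshold is invented:

```python
from datetime import datetime, timedelta

# Purely illustrative: the real system's rules are secret, so the 24-hour
# window, camera labels and hot list below are invented for this sketch.

HOT_LIST = {"ABC1234"}  # plates already marked for human review

def flag_reads(reads, border_cameras, max_turnaround=timedelta(hours=24)):
    """Flag plates on the hot list, or plates seen at a border-area camera
    and then back at an interior camera within the turnaround window."""
    flagged = set()
    last_seen = {}  # plate -> (camera, timestamp) of most recent read
    for plate, camera, ts in sorted(reads, key=lambda r: r[2]):
        if plate in HOT_LIST:
            flagged.add(plate)
        prev = last_seen.get(plate)
        if prev is not None:
            prev_camera, prev_ts = prev
            # crude "quick turnaround": a border-area read followed quickly
            # by an interior read suggests a short trip to the border region
            if (prev_camera in border_cameras
                    and camera not in border_cameras
                    and ts - prev_ts <= max_turnaround):
                flagged.add(plate)
        last_seen[plate] = (camera, ts)
    return flagged

t0 = datetime(2025, 1, 1, 6, 0)
reads = [
    ("XYZ9876", "interior-1", t0),                       # heads toward the border
    ("XYZ9876", "border-7", t0 + timedelta(hours=2)),    # read near the border
    ("XYZ9876", "interior-1", t0 + timedelta(hours=8)),  # quick return trip
    ("DEF0000", "interior-1", t0 + timedelta(hours=3)),  # ordinary traffic
]
print(flag_reads(reads, border_cameras={"border-7"}))  # {'XYZ9876'}
```

A real deployment would of course involve OCR of camera images and far richer route models; the point of the sketch is only how a short border round trip, by itself, can trip such a filter.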

The Border Patrol also has access to a nationwide network of plate readers run by the Drug Enforcement Administration, documents show, and was authorized in 2020 to access license plate reader systems sold by private companies. In documents obtained by the AP, a Border Patrol official boasted about being able to see that a vehicle "had traveled to Dallas, Little Rock, Arkansas and Atlanta" before ending up south of San Antonio.

Documents show that Border Patrol or CBP has in the past had ac­cess to data from at least three pri­vate sec­tor ven­dors: Rekor, Vigilant Solutions and Flock Safety.

Through Flock alone, Border Patrol for a time had access to at least 1,600 license plate readers across 22 states, and some counties have reported looking up license plates on behalf of CBP even in states like California and Illinois that ban sharing data with federal immigration authorities, according to an AP analysis of police disclosures. A Flock spokesperson told AP the company "for now" had paused its pilot programs with CBP and a separate DHS agency, Homeland Security Investigations, and declined to discuss the type or volume of data shared with either federal agency, other than to say agencies could search for vehicles wanted in conjunction with a crime. No agencies currently list Border Patrol as receiving Flock data. Vigilant and Rekor did not respond to requests for comment.

Where Border Patrol places its cam­eras is a closely guarded se­cret. However, through pub­lic records re­quests, the AP ob­tained dozens of per­mits the agency filed with Arizona and Michigan for per­mis­sion to place cam­eras on state-owned land. The per­mits show the agency fre­quently dis­guises its cam­eras by con­ceal­ing them in traf­fic equip­ment like the yel­low and or­ange bar­rels that dot American road­ways, or by la­bel­ing them as job­site equip­ment. An AP pho­tog­ra­pher in October vis­ited the lo­ca­tions iden­ti­fied in more than two dozen per­mit ap­pli­ca­tions in Arizona, find­ing that most of the Border Patrol’s hid­den equip­ment re­mains in place to­day. Spokespeople for the Arizona and Michigan de­part­ments of trans­porta­tion said they ap­prove per­mits based on whether they fol­low state and fed­eral rules and are not privy to de­tails on how li­cense plate read­ers are used.

Texas, California, and other border states did not provide documents in response to the AP's public records requests.

CBP's attorneys and personnel instructed local cities and counties in both Arizona and Texas to withhold records from the AP that might have revealed details about the program's operations, even though they were requested under state open records laws, according to emails and legal briefs filed with state governments. For example, CBP claimed records requested by the AP in Texas would permit private citizens to "anticipate weaknesses in a police department, avoid detection, jeopardize officer safety, and generally undermine police efforts." Michigan redacted the exact locations of Border Patrol equipment, but the AP was able to determine general locations from the name of the county.

One page of the group chats ob­tained by the AP shows that a par­tic­i­pant en­abled WhatsApp’s dis­ap­pear­ing mes­sages fea­ture to en­sure com­mu­ni­ca­tions were deleted au­to­mat­i­cally.

The Border Patrol’s li­cense plate reader pro­gram is just one part of a steady trans­for­ma­tion of its par­ent agency, CBP, in the years since 9/11 into an in­tel­li­gence op­er­a­tion whose reach ex­tends far be­yond bor­ders, ac­cord­ing to in­ter­views with for­mer of­fi­cials.

CBP has qui­etly amassed ac­cess to far more in­for­ma­tion from ports of en­try, air­ports and in­tel­li­gence cen­ters than other lo­cal, state and fed­eral law en­force­ment agen­cies. And like a do­mes­tic spy agency, CBP has mostly hid­den its role in the dis­sem­i­na­tion of in­tel­li­gence on purely do­mes­tic travel through its use of whis­per stops.

Border Patrol has also ex­tended the reach of its li­cense plate sur­veil­lance pro­gram by pay­ing for lo­cal law en­force­ment to run plate read­ers on their be­half.

A federal grant program called Operation Stonegarden, which has existed in some form for nearly two decades, has handed out hundreds of millions of dollars to buy automated license plate readers, camera-equipped drones and other surveillance gear for local police and sheriff's agencies. Stonegarden grant funds also pay for local law enforcement overtime, which deputizes local officers to work on Border Patrol enforcement priorities. Under President Donald Trump, the Republican-led Congress this year allocated $450 million for Stonegarden to be handed out over the next four fiscal years. In the previous four fiscal years, the program gave out $342 million.

In Cochise County, Arizona, Sheriff Mark Dannels said Stonegarden grants, which have been used to buy plate read­ers and pay for over­time, have let his deputies merge their mis­sion with Border Patrol’s to pri­or­i­tize bor­der se­cu­rity.

"If we're sharing our authorities, we can put some consequences behind, or deterrence behind, 'Don't come here,'" he said.

In 2021, the Ward County, Texas, sheriff sought grant funding from DHS to buy a "covert, mobile, License Plate Reader" to pipe data to Border Patrol's Big Bend Sector Intelligence Unit. The sheriff's department did not respond to a request for comment.

Other doc­u­ments AP ob­tained show that Border Patrol con­nects lo­cally owned and op­er­ated li­cense plate read­ers bought through Stonegarden grants to its com­puter sys­tems, vastly in­creas­ing the fed­eral agen­cy’s sur­veil­lance net­work.

How many peo­ple have been caught up in the Border Patrol’s drag­net is un­known. One for­mer Border Patrol agent who worked on the li­cense plate reader pat­tern de­tec­tion pro­gram in California said the pro­gram had an 85% suc­cess rate of dis­cov­er­ing con­tra­band once he learned to iden­tify pat­terns that looked sus­pi­cious. But an­other for­mer of­fi­cial in a dif­fer­ent Border Patrol sec­tor said he was un­aware of suc­cess­ful in­ter­dic­tions based solely on li­cense plate pat­terns.

In Trump’s sec­ond term, Border Patrol has ex­tended its reach and power as bor­der cross­ings have slowed to his­toric lows and freed up agents for op­er­a­tions in the heart­land. Border Patrol Sector Chief Gregory Bovino, for ex­am­ple, was tapped to di­rect hun­dreds of agents from mul­ti­ple DHS agen­cies in the ad­min­is­tra­tion’s im­mi­gra­tion sweeps across Los Angeles, more than 150 miles (241 kilo­me­ters) from his of­fice in El Centro, California. Bovino later was el­e­vated to lead the ag­gres­sive im­mi­gra­tion crack­down in Chicago. Numerous Border Patrol of­fi­cials have also been tapped to re­place ICE lead­er­ship.

The re­sult has been more en­coun­ters be­tween the agency and the gen­eral pub­lic than ever be­fore.

"We took Alek's case because it was a clear-cut example of an unconstitutional traffic stop," said Christie Hebert, who works at the nonprofit public interest law firm Institute for Justice and represents Schott. "What we found was something much larger — a system of mass surveillance that threatens people's freedom of movement."

AP found numerous other examples similar to what Schott and the delivery driver experienced in reviewing court records in border communities and along known smuggling routes in Texas and California. Several police reports and court records the AP examined cite "suspicious" travel patterns or vague tipoffs from the Border Patrol or other unnamed law enforcement agencies. In another federal court document filed in California, a Border Patrol agent acknowledged "conducting targeted analysis on vehicles exhibiting suspicious travel patterns" as the reason he singled out a Nissan Altima traveling near San Diego.

In cases reviewed by the AP, local law enforcement sometimes tried to conceal the role the Border Patrol plays in passing along intelligence. Babb, the deputy who stopped Schott, testified he typically uses the phrase "subsequent to prior knowledge" in his police reports to acknowledge that a tip came from another law enforcement agency without revealing too much in the written documents he writes memorializing motorist encounters.

Once they pull over a vehicle deemed suspicious, officers often aggressively question drivers about their travels, their belongings, their jobs, how they know the passengers in the car, and much more, police records and body-worn camera footage obtained by the AP show. One Texas officer demanded details from a man about where he met his current sexual partner. Often drivers, such as the one working for the South Carolina moving company, were arrested on suspicion of money laundering merely for carrying a few thousand dollars' worth of cash, with no apparent connection to illegal activity. Prosecutors filed lawsuits to try to seize money or vehicles on the suspicion they were linked to trafficking.

Schott warns that for every suc­cess story touted by Border Patrol, there are far more in­no­cent peo­ple who don’t re­al­ize they’ve be­come en­snared in a tech­nol­ogy-dri­ven en­force­ment op­er­a­tion.

"I assume for every one person like me, who's actually standing up, there's a thousand people who just don't have the means or the time or, you know, they just leave frustrated and angry. They don't have the ability to move forward and hold anyone accountable," Schott said. "I think there's thousands of people getting treated this way."

...

Read the original on apnews.com »

2 747 shares, 30 trendiness

Android and iPhone users can now share files, starting with the Pixel 10 family.

When it comes to shar­ing mo­ments be­tween fam­ily and friends, what de­vice you have should­n’t mat­ter — shar­ing should just work. But we’ve heard from many peo­ple that they want a sim­pler way to share files be­tween de­vices.

Today, we’re in­tro­duc­ing a way for Quick Share to work with AirDrop. This makes file trans­fer eas­ier be­tween iPhones and Android de­vices, and starts rolling out to­day to the Pixel 10 fam­ily.

We built this with se­cu­rity at its core, pro­tect­ing your data with strong safe­guards that were tested by in­de­pen­dent se­cu­rity ex­perts. It’s just one more way we’re bring­ing bet­ter com­pat­i­bil­ity that peo­ple are ask­ing for be­tween op­er­at­ing sys­tems, fol­low­ing our work on RCS and un­known tracker alerts.

We’re look­ing for­ward to im­prov­ing the ex­pe­ri­ence and ex­pand­ing it to more Android de­vices. See it in ac­tion on the Pixel 10 Pro in this video, and try it out for your­self!

...

Read the original on blog.google »

3 595 shares, 27 trendiness

Preserving code that shaped generations

Preserving code that shaped gen­er­a­tions: Zork I, II, and III go Open Source


Today, we’re pre­serv­ing a cor­ner­stone of gam­ing his­tory that is near and dear to our hearts. Together, Microsoft’s Open Source Programs Office (OSPO), Team Xbox, and Activision are mak­ing Zork I, Zork II, and Zork III avail­able un­der the MIT License. Our goal is sim­ple: to place his­tor­i­cally im­por­tant code in the hands of stu­dents, teach­ers, and de­vel­op­ers so they can study it, learn from it, and, per­haps most im­por­tantly, play it.

A game that changed how we think about play

When Zork ar­rived, it did­n’t just ask play­ers to win; it asked them to imag­ine. There were no graph­ics, no joy­stick, and no sound­track, only words on a screen and the play­er’s cu­rios­ity. Yet those words built worlds more vivid than most games of their time. What made that pos­si­ble was­n’t just clever writ­ing, it was clever en­gi­neer­ing.

Beneath that world of words was something quietly revolutionary: the Z-Machine, a custom-built engine. The Z-Machine is a specification of a virtual machine, and many Z-Machine interpreters in use today are software implementations of that VM. The original mainframe version of Zork was too large for early home computers to handle, so the team at Infocom made a practical choice. They split it into three games titled Zork I, Zork II, and Zork III, all powered by the same underlying system. This also meant that instead of rebuilding the game for each platform, they could use the Z-Machine to interpret the same story files on any computer. That design made Zork one of the first games to be truly cross-platform, appearing on Apple IIs, IBM PCs, and more.

Game preser­va­tion takes many forms, and it’s im­por­tant to con­sider re­search as well as play. The Zork source code de­serves to be pre­served and stud­ied. Rather than cre­at­ing new repos­i­to­ries, we’re con­tribut­ing di­rectly to his­tory. In col­lab­o­ra­tion with Jason Scott, the well-known dig­i­tal archivist of Internet Archive fame, we have of­fi­cially sub­mit­ted up­stream pull re­quests to the his­tor­i­cal source repos­i­to­ries of Zork I, Zork II, and Zork III. Those pull re­quests add a clear MIT LICENSE and for­mally doc­u­ment the open-source grant.

The pull requests also include accompanying documentation where available, such as build notes, comments, and historically relevant files, along with clear licensing and attribution via MIT LICENSE.txt and repository-level metadata.

This re­lease fo­cuses purely on the code it­self. It does not in­clude com­mer­cial pack­ag­ing or mar­ket­ing ma­te­ri­als, and it does not grant rights to any trade­marks or brands, which re­main with their re­spec­tive own­ers. All as­sets out­side the scope of these ti­tles’ source code are in­ten­tion­ally ex­cluded to pre­serve his­tor­i­cal ac­cu­racy.

More than forty years later, Zork is still alive and easier than ever to play. The games remain commercially available via The Zork Anthology on Good Old Games. For those who enjoy a more hands-on approach, the games can be compiled and run locally using ZILF, the modern ZIL compiler created by Tara McGrew. ZILF compiles ZIL files into Z3s that can be run with Tara's own ZLR — which is a sentence I never thought I'd write, much less say out loud! There are a huge number of wonderful Z-machine runners across all platforms for you to explore.

Here's how to get started running Zork locally with ZILF. From the command line, compile and assemble zork1.zil into a runnable z3 file.
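A minimal sketch of that step, assuming the ZILF toolchain (the zilf compiler and zapf assembler) is installed and on your PATH, and that the repository's zork1.zil sits in the current directory:

```shell
# Compile the ZIL source into Z-machine assembly (produces zork1.zap),
# then assemble that into a playable story file (produces zork1.z3).
zilf zork1.zil
zapf zork1.zap
```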

Then run your z3 file in a Z-machine runner. I'm using Windows Frotz from David Kinder, based on Stefan Jokisch's Frotz core:
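On any platform without the Windows GUI build, a command-line Frotz such as the dumb-terminal build (dfrotz) works the same way — assuming it is installed via your package manager:

```shell
# Play the assembled story file directly in the terminal.
dfrotz zork1.z3
```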

Or, if you're of a certain age as I am, you can apply a CRT filter to your terminal and use a CLI implementation of a Z-machine like Matthew Darby's "Fic," written in Python:

We will use the ex­ist­ing his­tor­i­cal repos­i­to­ries as the canon­i­cal home for Zork’s source. Once the ini­tial pull re­quests land un­der the MIT License, con­tri­bu­tions are wel­come. We chose MIT for its sim­plic­ity and open­ness be­cause it makes the code easy to study, teach, and build upon. File is­sues, share in­sights, or sub­mit small, well-doc­u­mented im­prove­ments that help oth­ers learn from the orig­i­nal de­sign. The goal is not to mod­ern­ize Zork but to pre­serve it as a space for ex­plo­ration and ed­u­ca­tion.

Zork has al­ways been more than a game. It is a re­minder that imag­i­na­tion and en­gi­neer­ing can out­last gen­er­a­tions of hard­ware and play­ers. Bringing this code into the open is both a cel­e­bra­tion and a thank you to the orig­i­nal Infocom cre­ators for in­vent­ing a uni­verse we are still ex­plor­ing, to Jason Scott and the Internet Archive for decades of stew­ard­ship and part­ner­ship, and to col­leagues across Microsoft OSPO, Xbox, and Activision who helped make open source pos­si­ble.

...

Read the original on www.microsoft.com »

4 338 shares, 14 trendiness

Okta’s nextjs-0auth troubles

In October, I reported two security issues to Okta's auth0/nextjs-auth0 project, here and here. The latter bug, an OAuth parameter injection, allows for a range of abuse: scoping tokens for unintended services, setting redirect_uri and scope to arbitrary values to leak tokens, and so on.

The patch was sim­ple enough, so I opened a PR:
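The patch itself isn't reproduced in this capture. As a hypothetical sketch of the general fix pattern for OAuth parameter injection (an assumption for illustration, not the actual auth0 patch): only forward an allowlist of caller-supplied authorization parameters, so security-critical values such as redirect_uri and scope can never be injected by the caller.

```python
# Hypothetical illustration, not auth0's code: security-critical values
# come only from server-side configuration; caller-supplied parameters
# are merged only if they appear in a small allowlist.
SAFE_PARAMS = {"login_hint", "ui_locales", "prompt", "max_age"}

def build_authorization_params(config_params: dict, user_params: dict) -> dict:
    merged = dict(config_params)  # redirect_uri, scope, etc. from config only
    merged.update({k: v for k, v in user_params.items() if k in SAFE_PARAMS})
    return merged
```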

All’s well that ends well, right? Obviously, no.

The PR, 3 weeks later, was closed by the main­tainer, an au­th0 (an Okta com­pany) em­ployee, with the fol­low­ing com­ment:

This change is su­per­seded by #2413. This was done to en­sure that com­mits are signed. Orignal con­tri­bu­tion his­tory has been pre­served. Hence clos­ing this PR now.

Hmm, let’s take a look at that PR:

Hmm. That patch looks fa­mil­iar. And who is Simen Olsen?

no it has­n’t. I don’t know who Simen A. W. Olsen my@simen.io is but it is­n’t me and my com­mit here does­n’t ref­er­ence that name or email ad­dress at all. Was it ai gen­er­ated or some­thing?

Of course, the answer was: yes. It was AI slop. Just like my previous post about gixy-ng (a fun read for anybody dealing with nginx), the developer had used Copilot to somehow generate their patches:

Hi @MegaManSec I sin­cerely apol­o­gize for this at­tri­bu­tion er­ror.

Can con­firm that an AI work­flow was used to cre­ated the re­based com­mit, which got con­fused with OP de­tails. I’ve added a cor­rec­tion to #2413, and will en­sure the changelog is up­dated.

Thank you for call­ing this out, we’ll make sure this does­n’t hap­pen again.

Not only did the main­tainer state the above, they also used AI to gen­er­ate the re­sponse! In a now-deleted com­ment, they clearly used some AI to re­spond to my com­plaint:

With the classic ChatGPT "you are absolutely correct," it's pretty frustrating that this developer:

Took my report/PR and committed it themselves.

Used AI to commit it, removing my attribution.

Used AI to "apologise" for using AI, then stated that "it won't happen again" (yeah right; please provide a detailed explanation of how you're going to ensure that, when clearly a one-line code change is too much for your AI to handle without breaking).

Refused to fix the commit to remove the invalid, AI-generated-slop details and add back mine.

I would ap­pre­ci­ate force-push­ing a fix for the com­mit to prop­erly in­clude my in­for­ma­tion in the com­mit.

I was told that they can­not change it. That seems like a copy­right in­fringe­ment to me: tak­ing some­body else’s code, then chang­ing the au­thor’s name?

What I really find most interesting is how this AI slop even came to be. I cannot find any reference to the email address my@simen.io anywhere online. On GitHub, the only reference to this email address is from the nextjs-auth0 PR. Simen Olsen has never contributed to any of the nextjs-auth0 repositories as far as I can tell (searching org:auth0 author:simenandre on GitHub), and that doesn't even seem to be their real email address. So was this some type of AI hallucination? And why? The code change was tiny. I just totally don't get it: I have literally never had any AI tooling fail like this and come up with some other person's (fake) contact details. It's simply absurd; are auth0's engineers using some extremely (extremely) low-quality local model or something? If ChatGPT failed like this for me even once every thousand times, I would simply never use it again.

In the end, at the time of writing, the auth0/nextjs-auth0 maintainer, Tushar Pandey, who made all of these mistakes, has not fixed the attribution mistake in the commit history. In addition, that first bug, which allows for arbitrary account hijacking in this software, has been fixed after 3 weeks, with new versions of the nextjs-auth0 software released, but with Okta's security people stating that "unless you create a video abusing this vulnerability, we aren't going to accept this as a security issue." LMAO; "yeah, it's a vulnerability, we fixed it in the code, it can be used to take over accounts, but you need to create a video." Hilarious. That's just another case to add to my list of hilarious problems related to reporting security issues, which my next post will document.

...

Read the original on joshua.hu »

5 274 shares, 10 trendiness

PegorK/f32

The f32 is an ultra-compact ESP32 development board designed to mount directly behind a USB-C receptacle. The PCB measures just 9.85 mm x 8.45 mm. It's powered by the ESP32-C3FH4 microcontroller and was created primarily for research and as a bit of a stress test for the ESP32, since it intentionally ignores many standard design guidelines. There's only one exposed GPIO, and it is connected to an onboard LED, so development on the board is best suited to Wi-Fi/web applications.

To test the f32 an ex­am­ple ap­pli­ca­tion was cre­ated that users can in­ter­act with. The ap­pli­ca­tion turns the f32 into a cap­tive por­tal so when it’s pow­ered on it will show up as an open ac­cess point that the user can se­lect from avail­able WiFi net­works. The user is then au­to­mat­i­cally sent to the f32′s con­trol page where they can in­ter­act with some of its ba­sic func­tion­al­ity such as turn­ing on an LED or scan­ning for sur­round­ing WiFi net­works. There’s also an About” page that pro­vides a small overview of the de­vice. Below are some screen­shots and a gif of in­ter­act­ing with the de­vice.
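The captive-portal mechanism described above boils down to two tricks, sketched here in plain Python as a toy illustration (not the f32's actual firmware; 192.168.4.1 is the usual ESP32 soft-AP address): answer every DNS query with the portal's own address, and redirect every HTTP request to the control page.

```python
# Toy illustration of a captive portal, not the f32's firmware.
PORTAL_IP = "192.168.4.1"  # typical ESP32 soft-AP address

def dns_answer(hostname: str) -> str:
    # Catch-all resolver: every name resolves to the portal itself,
    # which is what triggers the "sign in to network" page on clients.
    return PORTAL_IP

def http_response(path: str) -> str:
    # Serve the control page at "/", redirect everything else to it.
    if path == "/":
        return "200 OK: control page (LED toggle, Wi-Fi scan, About)"
    return "302 Found: Location http://" + PORTAL_IP + "/"
```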

Initially the f32 didn't seem to want to work. I couldn't get it to connect to any networks or broadcast its own network. I'm 100% sure this is due to the poor antenna circuitry, or lack thereof, but I did manage to get it functional after adding an additional tiny antenna onto the chip antenna, as seen in the picture below. This was just a piece of bent wire soldered to the end lead and floating above the first lead.

Since I don’t have fancy sig­nal test­ing equip­ment I re­lied on some man­ual test­ing such as see­ing if I can still con­nect to the de­vice and con­trol the LED. In a clear line of sight test with the f32 placed about 3ft off the ground I was able to con­nect and per­form scans/​con­trol the LED at roughly 120ft! This can be seen in my highly nec­es­sary de­pic­tion be­low.

The PCB was de­signed us­ing DipTrace and man­u­fac­tured by PCBWay with a board thick­ness of 0.6mm, min hole size of 0.2mm, and min track/​spac­ing of 4/4mil. At the time of mak­ing this it only cost $10.75 for 5 boards shipped! That still blows my mind. PCBWay does also of­fer as­sem­bly ser­vices, but I chose to as­sem­ble this at home and suf­fer a bit. This took a bit of trial and er­ror with such small parts, but I de­cided the best way for me was to ditch the sten­cil and make flux my best friend.

* Send the Gerber file f32_gerber.zip found in the hardware folder to PCBWay with the specs mentioned above.

* Order the com­po­nents noted in f32_bom.pdf. These parts can be found on both DigiKey and Mouser ex­cept the an­tenna. I don’t re­mem­ber where I had orig­i­nally or­dered them, but I be­lieve they are CrossAir CA-C03.

* Tip: Always order more than you need, especially with components as small as these.

* Clean the pcb re­ally well with 99% Alcohol.

* Starting with the top side (Antenna side) ap­ply a thin layer of sol­der­ing flux across the en­tire board us­ing a tooth pick.

* Using a sol­der­ing iron with a fine tip ap­ply some sol­der to the tip and then go across all the ex­posed pads.

* Clean the board again with 99% al­co­hol and ver­ify all the pads on this side have some sol­der on them.

* Apply an­other thin layer of flux to the same side.

* Using tweez­ers and a mi­cro­scope/​loupe start plac­ing the top com­po­nents fol­low­ing the ref­er­ence guide f32_ref­er­ence.pdf.

* Gently move the board onto the sol­der­ing hot­plate or use the re­work sta­tion to heat the sol­der back up and watch the com­po­nents wig­gle into place.

* Repeat with Bottom side.

* Bottom side must be done using a rework hot air gun; not possible with a hotplate.

After as­sem­bly you can use ESP-IDF VSCode ex­ten­sion or Arduino and up­load what­ever you’d like to the board or you can up­load my ex­am­ple ap­pli­ca­tion us­ing the steps be­low.

* Make sure you are in the base di­rec­tory of this repo and have ac­cess to es­p­tool.py.

* Make sure your esptool version is v4 or newer.

* Run the following command, replacing the port with whichever port the device is connected to, i.e. on Windows typically something like COM5, or on Linux /dev/ttyACM0:
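The command itself isn't shown in this capture; here is a hypothetical sketch, where the chip, port, and baud flags are standard esptool options but the firmware filename and flash offset are assumptions — substitute the binary and offset from this repo's actual build output:

```shell
# Hypothetical invocation; f32_app.bin and the 0x0 offset are placeholders
# for this repo's real firmware image and its documented flash offset.
esptool.py --chip esp32c3 --port /dev/ttyACM0 --baud 460800 write_flash 0x0 f32_app.bin
```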

Well that’s up to you to de­cide. I started this pro­ject for some per­sonal re­search and also a fun learn­ing ex­pe­ri­ence. I had al­ways wanted a pro­ject that used 01005 com­po­nents ever since I had ac­ci­den­tally or­dered some years ago. Whatever you choose to use it for, please note that this de­sign in­ten­tion­ally ne­glects sev­eral fun­da­men­tal com­po­nents such as proper de­cou­pling ca­pac­i­tors, an an­tenna match­ing cir­cuit, USB ter­mi­na­tion re­sis­tors, and likely more. It does func­tion, but it’s in­ten­tion­ally bare.

* Expose more GPIOs on the sides of the PCB to make it a mount­able PCB.

Lastly, fun co­in­ci­dence, the ESP32 chip, the an­tenna, and the LDO all are C3 mod­els!

...

Read the original on github.com »

6 273 shares, 14 trendiness

ravynsoft/ravynos: A BSD-based OS project that aims to provide source and binary compatibility with macOS® and a similar user experience.

ravynOS is a new open source OS pro­ject that aims to pro­vide a sim­i­lar ex­pe­ri­ence and some com­pat­i­bil­ity with ma­cOS on x86-64 (and even­tu­ally ARM) sys­tems. It builds on the solid foun­da­tions of FreeBSD, ex­ist­ing open source pack­ages in the same space, and new code to fill the gaps.

* Source com­pat­i­bil­ity with ma­cOS ap­pli­ca­tions (i.e. you could com­pile a Mac ap­pli­ca­tion on ravynOS and run it)

* Similar GUI metaphors and fa­mil­iar UX (file man­ager, ap­pli­ca­tion launcher, top menu bar that re­flects the open ap­pli­ca­tion, etc)

* Compatible with ma­cOS folder lay­outs (/Library, /System, /Users, /Volumes, etc) and per­haps filesys­tems (HFS+, APFS) as well as fully sup­port­ing ZFS

* Self-contained ap­pli­ca­tions in App Bundles, AppDirs, and AppImage files - an in­staller-less ex­pe­ri­ence for /Applications

* Mostly main­tain com­pat­i­bil­ity with the FreeBSD base sys­tem and X11 - a stan­dard Unix en­vi­ron­ment un­der the hood

* Pleasant to use, se­cure, sta­ble, and per­for­mant

Please visit ravynos.com for more info: Release Notes | Screenshots | FAQ

* Can you help build the dream? See the cur­rent pro­jects/​needs in CONTRIBUTING.md!

This is the top level of the FreeBSD source di­rec­tory.

FreeBSD is an op­er­at­ing sys­tem used to power mod­ern servers, desk­tops, and em­bed­ded plat­forms. A large com­mu­nity has con­tin­u­ally de­vel­oped it for more than thirty years. Its ad­vanced net­work­ing, se­cu­rity, and stor­age fea­tures have made FreeBSD the plat­form of choice for many of the busiest web sites and most per­va­sive em­bed­ded net­work­ing and stor­age de­vices.

For copy­right in­for­ma­tion, please see the file COPYRIGHT in this di­rec­tory. Additional copy­right in­for­ma­tion also ex­ists for some sources in this tree - please see the spe­cific source di­rec­to­ries for more in­for­ma­tion.

The Makefile in this di­rec­tory sup­ports a num­ber of tar­gets for build­ing com­po­nents (or all) of the FreeBSD source tree. See build(7), con­fig(8), FreeBSD hand­book on build­ing user­land, and Handbook for ker­nels for more in­for­ma­tion, in­clud­ing set­ting make(1) vari­ables.

For information on the CPU architectures and platforms supported by FreeBSD, see the FreeBSD website's Platforms page.

For of­fi­cial FreeBSD bootable im­ages, see the re­lease page.

For in­for­ma­tion on syn­chro­niz­ing your source tree with one or more of the FreeBSD Project’s de­vel­op­ment branches, please see FreeBSD Handbook.

...

Read the original on github.com »

7 270 shares, 17 trendiness

Over-Regulation is Doubling the Cost

After build­ing a soft­ware com­pany to a multi-bil­lion dol­lar exit, I made the jump to hard­ware. Now I’m work­ing on car­bon re­moval + steel at Charm Industrial, and elec­tric long-haul truck­ing with Revoy. It’s epi­cally fun to be build­ing in the real world, but lit­tle did I ex­pect that more than half the cost of build­ing a hard­ware com­pany would come from reg­u­la­tory bot­tle­necks. Despite a huge push for cli­mate fixes and the bi­par­ti­san geopo­lit­i­cal de­sire to bring in­dus­try back to the USA, I’ve been shocked to find that the sin­gle biggest bar­rier—by far—is over-reg­u­la­tion from the mas­sive depth of bu­reau­cracy.

Hardtech com­pa­nies of all fla­vors are be­ing forced to burn through lim­ited cap­i­tal while they wait for reg­u­la­tory clar­ity and/​or per­mits. This cre­ates a con­stant cy­cle of cost in­creases that ul­ti­mately flows to con­sumers, it low­ers in­vest­ment in the US man­u­fac­tur­ing and in­dus­trial base, it de­lays in­no­v­a­tive new hard­ware get­ting into the hands of con­sumers and busi­nesses, and at the end of the day, it leaves us all worse off, stuck with a qual­ity of life pegged to tech­nol­ogy de­vel­oped decades ago.

Regulatory delays and bottlenecks have added millions of pounds of pollutants like PM2.5, NOₓ and CO₂ to our air from the continuation of business as usual, instead of the deployment of clean technologies from my two hardtech efforts alone. While CO₂ is a long-term climate issue, PM2.5 and NOₓ are immediate major drivers of asthma and excess morbidity. Both operations have high bipartisan appeal — and we've never been denied a permit — because we're fundamentally cleaning up things that matter to everyone: dirty air, wildfires, orphaned oil wells. Revoy is also helping deflate the cost of long-haul freight. But none of that has made getting freedom to operate easy. For creative new technologies the default answer is "no" because there isn't a clear path to permitting at all, and figuring out that path itself takes years — time that startups can't afford to wait.

Regulation ob­vi­ously has a crit­i­cal role in pro­tect­ing peo­ple and the en­vi­ron­ment, but the sheer vol­ume, over-speci­ficity and some­times am­bi­gu­ity of those same reg­u­la­tions is now ac­tively work­ing against those goals! We’re un­in­ten­tion­ally block­ing the very things that would im­prove our en­vi­ron­ment. We’ve be­come a so­ci­ety that blocks all things, and we need to be a so­ci­ety that builds great things every day. The rest of this ar­ti­cle gets very spe­cific about the as­tro­nom­i­cal costs reg­u­la­tions are im­pos­ing on us as a so­ci­ety, and the mas­sive pos­i­tive im­pact that could be un­leashed by cut­ting back reg­u­la­tion that is work­ing against new, cost-sav­ing, cre­ative tech­nol­ogy that could also be mak­ing peo­ple and the en­vi­ron­ment healthy again.

To make it con­crete: both Charm and Revoy are cap­i­tal-ef­fi­cient hardtech com­pa­nies, but Charm will spend low hun­dreds of mil­lions to get to breakeven, and Revoy will spend tens of mil­lions. In both cases, more than half of the to­tal cost of build­ing each com­pany has gone to coun­ter­pro­duc­tive reg­u­la­tory bur­den. I’m hell­bent on push­ing through these bar­ri­ers, but the un­spo­ken re­al­ity is that our reg­u­la­tory morass is the deathbed of thou­sands of hardtech com­pa­nies that could be dras­ti­cally im­prov­ing our lives. We must un­leash them.

$300M in Societal Cost & $125M in Burden for Charm

Charm pro­duces and de­liv­ers ver­i­fied car­bon re­moval to com­pa­nies like Google, Microsoft and JPMorgan. Charm’s break­through was re­al­iz­ing that you could take CO₂ cap­tured in farm & forestry plant residues, con­vert it into a car­bon-rich, BBQ sauce-like liq­uid (it’s lit­er­ally the smoke fla­vor in BBQ sauce), and in­ject it into old oil wells to per­ma­nently re­move car­bon from the at­mos­phere. This has all kinds of co-ben­e­fits like re­duc­ing the mas­sive over­bur­den of wild­fire fu­els, clean­ing up & plug­ging nasty or­phaned oil wells, and im­prov­ing PM2.5 and NOₓ air qual­ity by avoid­ing that bio­mass be­ing burned in­stead.

And yet… there was a hangup: what kind of in­jec­tion well is this? Should it be per­mit­ted as a Class I dis­posal, Class II oil­field dis­posal, or Class V ex­per­i­men­tal? This ques­tion on per­mit­ting path took four years to an­swer. Four years to de­cide which path to use, not even the ac­tual per­mit! It took this long be­cause reg­u­la­tors are struc­turally faced with no up­side, only down­side le­gal risk in tak­ing a for­mal po­si­tion on some­thing new. Even when we’d done an enor­mous amount of lab and field work with bio-oil to un­der­stand its safety and be­hav­ior at sur­face and sub­sur­face con­di­tions. A reg­u­la­tor faces lit­tle cost to mov­ing in­cred­i­bly cau­tiously, but a ma­jor cost if they ap­prove some­thing that trig­gers ac­tivist push­back.

In the end, we’re grate­ful that—even­tu­ally—a state reg­u­la­tor took the reins and re­viewed, man­aged, and is­sued the first-ever Class V bio-oil se­ques­tra­tion per­mit, through what was still an in­cred­i­bly com­plex and de­tailed 14-month re­view process.

Now imag­ine that, in­stead of the 5.5 years from first con­tact to is­sued per­mit, it had only taken the 6 months it ac­tu­ally re­quired to get every­one across the reg­u­la­tory es­tab­lish­ment to agree on a Class V path­way, we would have had 5 ad­di­tional years op­er­at­ing the well. That’s the equiv­a­lent, from our real sup­ply chain, of sink­ing at least 30,000 tonnes of car­bon per year at $600/tonne. Looking only at this one as­pect, this de­lay came with a $90M price tag for Charm. We’ve also spent un­told mil­lions on reg­u­la­tory af­fairs at all lev­els of gov­ern­ment, not to men­tion the missed ac­cel­er­a­tion in sales, and other di­rect hard costs spent in R&D and pro­cess­ing bio-oil for in­ef­fi­cient and ex­pen­sive in­jec­tion into salt cav­erns in­stead.
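As a quick check of that $90M figure:

```python
# Five lost years of operation, each sinking 30,000 tonnes at $600/tonne.
tonnes_per_year = 30_000
price_per_tonne = 600  # dollars
lost_years = 5

lost_value = tonnes_per_year * price_per_tonne * lost_years
print(f"${lost_value:,}")  # $90,000,000
```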

But the public health burden created by this regulatory slowness is where it gets really crazy. This one regulatory delay meant we all got subjected to decreased air quality from an additional 30,000 tonnes per year of pile burning. The resulting particulate emissions alone are estimated to have caused a mind-blowing $40M/year in healthcare costs. This is $200M in additional healthcare burden over those five years, mostly borne by Medicare and Medicaid. There are additional costs from NOₓ emissions and more that take it to $300M.

In total, the cost to society of this single regulatory delay will be about $400M: $120-150M of unnecessary cost to Charm, and the bulk of it — $300M or so — borne by the public in healthcare costs. I'm not sharing these numbers to complain or make excuses; Charm is still on the path to having a huge impact, and we're among the lucky few that can survive these delays. What pains me most is the 5 years of lost carbon removal and pollutant reduction, and the compounding effect that has on all our health and healthcare costs. Over-regulation is now working against the very things it's intended to protect.

Regulators do their ab­solute best with the sys­tem they have, but the com­bined ef­fects of: (1) ex­tremely de­tailed and com­plex reg­u­la­tion, (2) chaotic bud­gets and un­der­staffing that dis­rupt an ef­fi­cient process, and (3) end­less law­suits against reg­u­la­tors since 1970s-era Naderism have cre­ated an at­mos­phere of fear. If we want to solve the cli­mate cri­sis, build abun­dance, lower costs, and gen­er­ate wealth for all, this has to change. We need to delete and sim­plify reams of reg­u­la­tions. We need to pay reg­u­la­tors well, and we need to trust our reg­u­la­tors to op­er­ate quickly and de­ci­sively by putting rea­son­able lim­its on end­less ac­tivist le­gal chal­lenges.

Revoy’s break­through was re­al­iz­ing that you could lower long-haul freight costs and elec­trify long-haul semi trucks by leav­ing the diesel trac­tor in place and drop­ping an elec­tric pow­er­train onto the back of the semi. Today, we boost semis from 7 mpg to 120 mpg, dri­ving a 94% re­duc­tion in fuel con­sump­tion. This slashes emis­sions that neg­a­tively im­pact both air qual­ity and cli­mate.
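The 94% figure follows from fuel use scaling as the inverse of fuel economy; a quick check:

```python
# Fuel consumed per mile is proportional to 1/mpg, so the fractional
# reduction is 1 - baseline_mpg / boosted_mpg.
baseline_mpg = 7
boosted_mpg = 120

reduction = 1 - baseline_mpg / boosted_mpg
print(f"{reduction:.0%}")  # 94%
```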

And yet again… a hangup: what exactly is this electric doohickey? Is it a truck? A trailer? Something else? It was clear from the regulations that it was a "converter dolly". But getting complete alignment on that simple fact across an alphabet soup of government agencies spanning both federal and state — NHTSA, FMCSA, FHWA, state transit authorities, air quality management districts, state DMVs, highway patrols and more — took years.

A "powered converter dolly" isn't even a new thing! Here's one from the sixties that ran on diesel to help trucks get over mountain passes.

There were some bright spots. The Federal Motor Carrier Safety Administration (FMCSA) and the National Highway Transportation Safety Administration (NHTSA) quickly con­verged on in­for­mal de­f­i­n­i­tional clar­ity, and then even­tu­ally a Highway Patrol Captain who was ea­ger to get in­no­v­a­tive elec­tric ve­hi­cles on the road pushed it through with a state DMV to reg­is­ter the first four Revoys. But bring­ing along the rest of the agen­cies, and the rest of the states, was not fast. It de­layed de­ploy­ments, soaked up hun­dreds of thou­sands of dol­lars of le­gal and lob­by­ist time (not to men­tion all the cor­re­spond­ing time on the gov­ern­ment side that all of us tax­pay­ers have to bear), and maybe most im­por­tantly… even with a for­mal memo from the Federal DOT, it is still not 100% re­solved in some states.

As one ex­am­ple, one state agency has asked Revoy to do cer­ti­fied en­gine test­ing to prove that the Revoy does­n’t in­crease emis­sions of semi trucks. And that Revoy must do this cer­ti­fi­ca­tion across every sin­gle truck en­gine fam­ily. It costs $100,000 per cer­ti­fi­ca­tion and there are more than 270 en­gine fam­i­lies for the 9 en­gines that our ini­tial part­ners use. That’s $27,000,000 for this one reg­u­la­tory item. And keep in mind that this is to cer­tify that a de­vice—whose sole rea­son for ex­is­tence is to cut pol­lu­tion by >90%, and which has demon­stra­bly done so across nearly 100,000 miles of test­ing and op­er­a­tions—is not in­creas­ing the emis­sions of the truck. It’s a com­plete waste of money for every­one.

And that $27M dol­lar cost does­n’t in­clude the cost to so­ci­ety. This over-reg­u­la­tion will de­lay de­ploy­ment of EV trucks by years, in­creas­ing NOₓ and PM 2.5 air pol­lu­tion ex­po­sure for many of so­ci­ety’s least well-off who live near free­ways. The de­layed de­ploy­ment will also in­crease CO₂ emis­sions that threaten the cli­mate and en­vi­ron­ment. Revoy’s Founder (Ian Rust) and I ac­tu­ally dis­agree on what ex­actly it is about the reg­u­la­tory en­vi­ron­ment that needs to change, but we agree it’s com­pletely bro­ken and hurt­ing both peo­ple and the planet.

In every in­ter­ac­tion I have with reg­u­la­tors, I’m re­minded that they’re good peo­ple do­ing god’s work op­er­at­ing in a fun­da­men­tally bro­ken sys­tem. A reg­u­la­tory sys­tem that struc­turally in­sists on le­gal­is­tic, ul­tra-ex­treme cau­tion is bound to gen­er­ate a mas­sive neg­a­tive re­turn for so­ci­ety.

If we had a reg­u­la­tory sys­tem that could move fast to ex­per­i­ment with cre­ative new tech­nolo­gies, we’d live in a world where our en­vi­ron­ment gets cleaned up faster, where awe­some new hard­ware was con­stantly im­prov­ing our lives by mak­ing things bet­ter and cheaper, and where large-scale hardtech in­no­va­tion hap­pened here at home in the USA, not in China.

As we col­lec­tively work to build more man­u­fac­tur­ing ca­pac­ity at home and build the next wave of tech­nolo­gies to power the econ­omy, we need to grap­ple with the real bot­tle­necks hold­ing us back. I hope other hardtech founders will pub­licly share more of their sto­ries as well (the sto­ries I’ve heard in pri­vate would shock you). Props to Blake Scholl for do­ing so.

We need a come-to-Jesus about regulatory limits, timelines, and scope. Yes, we need basic and strong protections for clear harms, but we need to unleash every hardworking American, not just a few companies with massive funding, to invent and build hardware again. We need to combine many approaches to get there: expedited reviews for new technology, freedom to operate by default, permits by right, not process, deleting as many regulatory steps as possible, and more. CA YIMBY's successful push to pass a deluge of housing acceleration laws in the past two years could serve as a model. America building things again is the foundation of a prosperous, powerful, and clean America.

...

Read the original on rein.pk »

8 241 shares, 7 trendiness

Motherboard PCIe Lanes


...

Read the original on mobomaps.com »

9 239 shares, 15 trendiness

FEX-Emu – A fast linux usermode x86 and x86-64 emulator

FEX al­lows you to run x86 ap­pli­ca­tions on ARM64 Linux de­vices, sim­i­lar to qemu-user and box64. It of­fers broad com­pat­i­bil­ity with both 32-bit and 64-bit bi­na­ries, and it can be used along­side Wine/Proton to play Windows games.

It sup­ports for­ward­ing API calls to host sys­tem li­braries like OpenGL or Vulkan to re­duce em­u­la­tion over­head. An ex­per­i­men­tal code cache helps min­i­mize in-game stut­ter­ing as much as pos­si­ble. Furthermore, a per-app con­fig­u­ra­tion sys­tem al­lows tweak­ing per­for­mance per game, e.g. by skip­ping costly mem­ory model em­u­la­tion. We also pro­vide a user-friendly FEXConfig GUI to ex­plore and change these set­tings.

On the tech­ni­cal side, FEX fea­tures an ad­vanced bi­nary re­com­piler that sup­ports all mod­ern ex­ten­sions of the x86(-64) in­struc­tion set, in­clud­ing AVX/AVX2. The heart of this re­com­piler is a cus­tom IR that al­lows us to gen­er­ate more op­ti­mized code than a tra­di­tional splat­ter JIT. A com­pre­hen­sive sys­tem call trans­la­tion layer takes care of dif­fer­ences be­tween the em­u­lated and host op­er­at­ing sys­tems and im­ple­ments even niche fea­tures like sec­comp. A mod­u­lar core en­ables FEX to be used as a WoW64/ARM64EC back­end in Wine.


...

Read the original on fex-emu.com »

10 236 shares, 25 trendiness

Olmo 3: Charting a path through the model flow to lead open-source AI

Language mod­els are of­ten treated as snap­shots—brief cap­tures of a long and care­fully cu­rated de­vel­op­ment process. But shar­ing only the end re­sult ob­scures the rich con­text needed to mod­ify, adapt, and ex­tend a mod­el’s ca­pa­bil­i­ties. Many mean­ing­ful ad­just­ments re­quire in­te­grat­ing do­main-spe­cific knowl­edge deep within the de­vel­op­ment pipeline, not merely at the fi­nal stage. To truly ad­vance open AI de­vel­op­ment and re­search, the en­tire model flow — not just its end­point — should be ac­ces­si­ble and cus­tomiz­able. The model flow is the full life­cy­cle of an LM: every stage, check­point, dataset, and de­pen­dency re­quired to cre­ate and mod­ify it. By ex­pos­ing this com­plete process, the goal is to en­gen­der greater trust and en­able more ef­fec­tive adap­ta­tion, col­lab­o­ra­tion, and in­no­va­tion.

With to­day’s re­lease of Olmo 3, we’re em­pow­er­ing the open source com­mu­nity with not only state-of-the-art open mod­els, but the en­tire model flow and full trace­abil­ity back to train­ing data.

At its cen­ter is Olmo 3-Think (32B), the best fully open 32B-scale think­ing model that for the first time lets you in­spect in­ter­me­di­ate rea­son­ing traces and trace those be­hav­iors back to the data and train­ing de­ci­sions that pro­duced them. Olmo 3 is a fam­ily of com­pact, dense mod­els at 7 bil­lion and 32 bil­lion pa­ra­me­ters that can run on every­thing from lap­tops to re­search clus­ters.

Olmo 3-Base (7B, 32B) is our most powerful base model yet. When evaluated on our expanded, diverse evaluation suite, Olmo 3-Base delivers the strongest performance among fully open base models — those whose training data, code, and weights are all publicly available, like Stanford's Marin and the Swiss AI initiative's Apertus — and achieves competitive performance with some of the best open-weights base models of comparable size and architecture, including Qwen 2.5 and Gemma 3. Achieving strong results in programming, reading comprehension, and math problem solving, Olmo 3-Base maintains performance at extended context lengths (up to ~65K tokens), providing a versatile foundation for continued pretraining, targeted fine-tuning, and reinforcement learning, and making it easy to build in specialized capabilities like reasoning, tool use (function calling), and instruction following through post-training.

Olmo 3-Think (7B, 32B) is our flagship post-trained reasoning set built on Olmo 3-Base. At a time when few organizations are releasing truly open models at this scale, Olmo 3-Think (32B) serves as a workhorse for RL research, long-horizon reasoning, and other advanced experiments that require substantial compute. On our suite of reasoning benchmarks (discussed below), it's the strongest fully open thinking model we're aware of, narrowing the gap to the best open-weight models of similar scale — such as Qwen 3 32B — while training on roughly 6x fewer tokens. Olmo 3-Think (7B) brings the same design and training approach to an even more efficient form factor, surfacing intermediate thinking steps for complex prompts while making open, inspectable reasoning accessible on more modest hardware.

Olmo 3-Instruct (7B) is a chat- and quick-response-focused post-train of Olmo 3-Base that handles multi-turn conversation, instruction following, tool use, and more. In our evaluations, it matches or outperforms open-weight models including Qwen 2.5, Gemma 3, and Llama 3.1, and narrows the gap with the Qwen 3 model family at a similar scale — delivering a strong, fully open alternative for high-quality conversational and tool-using agents.

Olmo 3-RL Zero (7B) is a fully open reinforcement learning pathway built on Olmo 3-Base, designed to bootstrap complex reasoning behaviors and enable clear benchmarking of RL algorithms. We release four series of checkpoints from domain-focused training on math, code, instruction following, and general chat, enabling careful study of reinforcement learning with verifiable rewards (RLVR).
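As a concrete illustration of the "verifiable rewards" idea behind RLVR, a reward can be as simple as a programmatic check of a model's final answer. The sketch below is a toy math verifier; the function name and matching rule are our own illustration, not Olmo's actual verifier code.

```python
import re

def math_reward(completion: str, gold_answer: str) -> float:
    """Toy verifiable reward: 1.0 if the last number in the completion
    matches the gold answer, else 0.0. A real RLVR setup uses much more
    careful answer extraction and per-domain verifiers."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == gold_answer else 0.0

# The RL loop would then reinforce completions that earn reward 1.0.
print(math_reward("... so the answer is 42", "42"))  # 1.0
print(math_reward("I think it's 41", "42"))          # 0.0
```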

Instead of a single set of frozen weights, Olmo 3 offers multiple, fully documented paths through development: the Instruct path for everyday chat and tool use, the RL Zero path for RL experimentation from base models, and the Think/reasoning path for models that leverage inference-time scaling to unlock complex reasoning and agentic behaviors. Each path is a concrete example of how to shape behavior from the same base model, and you're free to fork or remix them — start with Olmo 3-Base, explore your own supervised fine-tuning (SFT) or direct preference optimization (DPO) recipe for instruct-style use cases, or plug in a new RL objective to probe different tradeoffs. The flow itself becomes a rich, reusable object — not just a record of how we built Olmo 3, but a scaffold for how you can build your own systems.


The Olmo 3 checkpoints we're releasing represent our initial paths targeting our goals around reasoning, tool use, and general capabilities — we have exciting plans for other ways to leverage Olmo 3-Base 32B. But because we're releasing the entire flow, you can intervene at any point: swap in domain-specific data during mid-training, adjust post-training for your use case, or build on an earlier checkpoint that better suits your needs.

As with Olmo and Olmo 2, we're releasing all components of the Olmo 3 flow — data, code, model weights, and checkpoints — under permissive open source licenses.

Try Olmo 3 | Download the models & data | Read the report

We run the Olmo 3 checkpoints through a broad, updated benchmark suite, grouping dozens of industry-standard tasks (plus a few new ones we introduce) into several capability clusters. Together, the clustered suite and these held-out tasks give us a capability profile of Olmo 3 — a clear picture of how well it solves math problems, codes, uses tools, answers general-knowledge questions, and more.

At a high level, the Olmo 3 family delivers the strongest fully open base and thinking models we're aware of. Olmo 3-Base 32B outperforms other fully open base models, and Olmo 3-Think 32B emerges as the strongest fully open thinking model.

Our results were made possible by rigorous data curation at every stage of training, a carefully designed training recipe for each model, and a set of new algorithmic and infrastructure advances across data processing, training, and reinforcement learning. We also introduce an enhanced reinforcement learning framework that guides the development of our models and is particularly essential for our thinking models. Our development framework balances distributed innovation with centralized evaluation, letting us design the training recipe and coordinate targeted improvements across a wide range of capabilities at each stage of the model training pipeline.

Olmo 3-Base follows a training pipeline that first focuses on broad coverage over diverse text, code, and math, then concentrates on harder distributions to sharpen programming, quantitative reasoning, and reading comprehension. The result is clearly the strongest set of fully open base models in our evaluations. It's also arguably the best 32B model in the entire ecosystem of open-weights models, performing impressively in programming, reading comprehension, math problem solving, and long-context benchmarks like RULER, which tests information retrieval from lengthy texts. Olmo 3-Base (7B) and Olmo 3-Base (32B) maintain quality at extended context lengths and integrate cleanly with RL workflows, providing a robust foundation for continued pretraining and post-training.

Olmo 3-Think turns the Base into a reasoning model by training on multi-step problems spanning math, code, and general problem solving, then running the thinking SFT → thinking DPO → RLVR model flow to elicit high-quality reasoning traces. It competes with or exceeds several open-weight reasoning models of similar size. On math benchmarks, Olmo 3-Think (7B) matches Qwen 3 8B on MATH and comes within a few points on AIME 2024 and 2025. It also leads all comparison models on HumanEvalPlus for coding, and performs strongly on MBPP and LiveCodeBench, demonstrating particular strength in code-intensive reasoning. On broader reasoning tasks like BigBench Hard and AGI Eval English, Olmo 3-Think (7B) remains competitive with Qwen 3 8B reasoning and Qwen 3 VL 8B Thinker while staying fully open and slightly smaller.

For the 32B model, Olmo 3-Think scales these trends up and becomes one of the strongest fully open reasoning models in its class. Olmo 3-Think (32B) either wins or sits within roughly two points of the best open-weight model on MATH, OMEGA, BigBench Hard, HumanEvalPlus, PopQA, and IFEval. It ties Qwen 3 VL 32B Thinking for the top score on the OMEGA suite while staying clearly ahead of Gemma 3 27B Instruct and competitive with DeepSeek R1 Distill 32B on math and reasoning. On broader knowledge and QA, Olmo 3-Think (32B) is effectively neck-and-neck with the Qwen 3 models on PopQA. And in instruction following, Olmo 3-Think (32B) tops this subset on IFEval and remains solid on IFBench and AlpacaEval 2 LC — offering a strong default for reasoning workloads at the 32B scale.

Olmo 3-Instruct, which produces shorter sequences than the corresponding Olmo 3-Think models to improve inference efficiency and is designed to focus on general chat, tool use, and synthetic data generation, outperforms comparably sized open-weight models. Olmo 3-Instruct ties or surpasses Qwen 2.5, Gemma 3, and Llama 3.1 in our evaluations, and competes with the Qwen 3 family at similar scale, delivering strong function calling performance and instruction-following capabilities in a fully open 7B model.

Olmo 3 uses a decoder-only transformer architecture and a multi-stage training pipeline. Pretraining runs in three stages: an initial large-scale training run that builds broad capabilities; a mid-training phase that focuses on harder material like math, code, and reading comprehension; and a final long-context extension stage that trains the model on very long documents. Together with architectural enhancements, this yields a more capable, efficient base for the Olmo 3 family.

Post-training then specializes the pretrained model for different use cases. Building on Olmo 2, each pathway follows a three-stage recipe — SFT, preference tuning with DPO, and RLVR — but in Olmo 3, we expose this as a fully documented model flow with complete customization over each training stage and dataset mix.
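The DPO stage of that recipe optimizes a simple contrastive objective over preference pairs. Below is a minimal sketch of the standard per-pair DPO loss (the textbook formulation, not Ai2's exact implementation), where each input is the summed log-probability of a response under the policy or the frozen reference model:

```python
import math

def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one preference pair: -log sigmoid of the
    beta-scaled margin between policy and reference log-prob ratios."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# If the policy prefers the chosen response more than the reference does,
# the margin is positive and the loss drops below log(2) ≈ 0.693.
print(dpo_loss(-10.0, -14.0, -11.0, -13.0))
```

Minimizing this loss pushes the policy to rank chosen responses above rejected ones without an explicit reward model, which is why it slots cleanly between SFT and RLVR in the recipe.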

Instead of releasing only the final weights, we provide checkpoints from each major training milestone: the base pretrained model, the mid-trained model after targeted skill enhancement, the long-context-extended version, plus post-training checkpoints for the Olmo 3-Think, Olmo 3-Instruct, and Olmo 3-RL Zero flows. You can study how capabilities emerge over time, run ablations on specific stages, and fork the model at whatever point best fits your data, compute, and goals.

Compared to Olmo 2, we scaled data collection and significantly strengthened our dataset curation methods. Continuing our commitment to full transparency, we're releasing several new, higher-quality datasets that cover every stage of base model training and post-training — from initial learning to specialized skills like complex reasoning and long-context understanding. This means anyone can see exactly what data shaped the model's capabilities, reproduce our results, and reuse these datasets to train their own AI systems.

Olmo 3 is pretrained on Dolma 3, a new ~9.3-trillion-token corpus drawn from web pages, science PDFs processed with olmOCR, codebases, math problems and solutions, and encyclopedic text. From this pool, we construct Dolma 3 Mix, a 5.9-trillion-token (~6T) pretraining mix with a higher proportion of coding and mathematical data than earlier Dolma releases, plus much stronger decontamination via extensive deduplication, quality filtering, and careful control over data mixing. We follow established web standards in collecting training data and don't collect from sites that explicitly disallow it, including paywalled content.
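Decontamination of this kind is commonly implemented as n-gram overlap against held-out benchmark text. The sketch below is our own illustration of the general technique; the n-gram size, threshold, and matching scheme are assumptions, not the actual Dolma 3 pipeline.

```python
def ngrams(text: str, n: int = 8) -> set:
    """All word-level n-grams in a text, as a set of tuples."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(doc: str, benchmark_texts, n: int = 8,
                    threshold: float = 0.5) -> bool:
    """Flag a training document if a large fraction of its n-grams also
    appear in any benchmark text (toy sketch of overlap-based filtering)."""
    doc_grams = ngrams(doc, n)
    if not doc_grams:
        return False
    bench = set()
    for t in benchmark_texts:
        bench |= ngrams(t, n)
    overlap = len(doc_grams & bench) / len(doc_grams)
    return overlap >= threshold
```

Production systems add normalization, hashing, and approximate matching (e.g. MinHash) to make this tractable at trillion-token scale, but the core test is the same.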

On top of this, we introduce two Dolma 3-based mixes for later stages of base model training. Dolma 3 Dolmino is our mid-training mix: 100B training tokens sampled from a ~2.2T-token pool of high-quality math, science, code, instruction-following, and reading-comprehension data, including reasoning traces that also enable RL directly on the base model. Dolma 3 Longmino is our long-context mix: ~50B training tokens drawn from a 639B-token pool of long documents combined with mid-training data to teach Olmo 3 to track information over very long inputs (like reports, logs, and multi-chapter documents).
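The quoted pool and mix sizes imply rough effective sampling rates; a quick back-of-the-envelope check:

```python
# Sampling rates implied by the mix sizes quoted above:
# Dolmino draws 100B tokens from a ~2.2T-token pool,
# Longmino draws ~50B tokens from a 639B-token pool.
dolmino_rate = 100e9 / 2.2e12
longmino_rate = 50e9 / 639e9

print(f"Dolmino samples ~{dolmino_rate:.1%} of its pool")   # ~4.5%
print(f"Longmino samples ~{longmino_rate:.1%} of its pool")  # ~7.8%
```

In other words, both mixes are aggressively subsampled from much larger curated pools, which is what leaves room for quality filtering and mixing control.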

We also introduce Dolci, a new post-training data suite tailored specifically for reasoning, tool use, and instruction following. Dolci provides separate mixes for each stage of post-training: SFT, DPO, and RLVR. For SFT, Dolci aggregates state-of-the-art datasets that advance step-by-step reasoning, tool use, and high-quality conversational behavior; for DPO, it supplies high-quality contrastive preference data; and for RL, it includes hard, diverse prompts across math, coding, instruction following, and general chat.

Together, Dolma 3 and Dolci give Olmo 3 a fully open data curriculum from first token to final post-trained checkpoint.

We pretrained Olmo 3 on a cluster of up to 1,024 H100 GPUs, achieving training throughput of 7.7K tokens per device per second for Olmo 3-Base (7B). We mid-trained on 128 H100 GPUs, and post-trained on a set of 256 H100s.
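Those figures imply a rough idealized training time for the ~6T-token mix. This back-of-the-envelope estimate assumes the 7B-model throughput figure and ignores restarts, evaluation, and other overhead:

```python
gpus = 1024               # peak cluster size quoted above
tok_per_gpu_s = 7.7e3     # tokens/device/second for Olmo 3-Base (7B)
tokens = 5.9e12           # Dolma 3 Mix size

agg = gpus * tok_per_gpu_s        # aggregate cluster throughput, tokens/s
days = tokens / agg / 86400       # idealized wall-clock time in days
print(f"{agg/1e6:.1f}M tok/s -> ~{days:.0f} days for {tokens/1e12:.1f}T tokens")
```

The aggregate rate works out to roughly 7.9M tokens/second, so a single pass over the mix is on the order of nine GPU-cluster days under these idealized assumptions.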

For Olmo 3, building on the work we did for Olmo 2, we were able to significantly improve the efficiency of our post-training code. By moving SFT from Open Instruct (our post-training codebase, prioritizing flexibility) to Olmo Core (our pretraining codebase, designed to maximize efficiency), we increased throughput (tokens/second) by 8x. Similarly, by incorporating in-flight weight updates, continuous batching, and many threading improvements, we made our RL training 4x more efficient — resulting in training runs that are significantly cheaper and faster.

A note on our 32B models: we believe 32B sits in a sweet spot for research and tinkering. 32B models are big enough to support strong, competitive performance, but still small enough that a wide audience can fine-tune and deploy them on accessible hardware.

For more details, including ablations, please read our technical report.

A core goal of Olmo 3 is not just to open the model flow, but to make it actionable for people who want to understand and improve model behavior. Olmo 3 integrates with OlmoTrace, our tool for tracing model outputs back to training data in real time.

For example, in the Ai2 Playground, you can ask Olmo 3-Think (32B) to answer a general-knowledge question, then use OlmoTrace to inspect where and how the model may have learned to generate parts of its response. This closes the gap between training data and model behavior: you can see not only what the model is doing, but why — and adjust data or training decisions accordingly.
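The core idea behind this kind of tracing can be sketched as an exact n-gram index over training documents. This is a toy stand-in for illustration only; OlmoTrace's actual infrastructure does exact matching at full corpus scale.

```python
from collections import defaultdict

def build_index(corpus, n: int = 3):
    """Map every word n-gram to the set of training docs containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(corpus):
        toks = text.lower().split()
        for i in range(len(toks) - n + 1):
            index[tuple(toks[i:i + n])].add(doc_id)
    return index

def trace(output: str, index, n: int = 3):
    """Return (span, doc_ids) pairs for model-output spans that appear
    verbatim in the training data."""
    toks = output.lower().split()
    hits = []
    for i in range(len(toks) - n + 1):
        gram = tuple(toks[i:i + n])
        if gram in index:
            hits.append((" ".join(gram), sorted(index[gram])))
    return hits
```

Given an index built over the training corpus, tracing a model response surfaces exactly which documents could have contributed each verbatim span, which is the behavior-to-data link described above.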

To further promote transparency and explainability, we're making every training and fine-tuning dataset available for download, all under a permissive license that allows for custom deployment and reuse. The datasets come in a range of mixes to accommodate different storage and hardware constraints, from several billion tokens all the way up to 6 trillion.

Our new tooling for data processing allows you to decontaminate, tokenize, and deduplicate data the same way we did for Olmo 3's corpora. All the tooling is open source, enabling you to replicate our training curves or run controlled ablations across data mixes and objectives.

Our Olmo utilities and software cover the whole development cycle:

Our evaluation toolkit supports reproducible evals. It includes our brand-new eval collection, OlmoBaseEval, which we used for Olmo 3 base model development.

Importantly, our tooling allows you to instrument complex tasks and analyze intermediate traces to understand where the models succeed — or struggle. Because the Olmo 3 data recipes, training pipeline, and checkpoints are open, independent teams can connect model behavior back to measurable properties.


Together, the Olmo 3 family makes it easier to build trustworthy features quickly, whether for research, education, or applications. By making every development step available and inspectable, we're enabling entirely new categories of research. You can run experiments on any training phase, understand exactly how different techniques contribute to model capabilities, and build on our work at whatever stage makes sense for your project.

For scientists, the fully open flow exposes the model's inner workings, so you can instrument experiments across coding, reasoning, RL, and tool use.

If you care about AI you can study, audit, and improve, Olmo 3 is for you. Try the demos in the Ai2 Playground, explore the documentation, and build on the released weights and checkpoints. Then tell us what you discover — we invite the community to validate, critique, and extend our findings.

True openness in AI isn't just about access — it's about trust, accountability, and shared progress. We believe the models shaping our future should be fully inspectable, not black boxes. Olmo 3 represents a different path: one where anyone can understand, verify, and build upon the AI systems that increasingly influence our world. This is what open-first means — not just releasing weights, but sharing the complete knowledge needed to advance AI responsibly: the flow.

Try Olmo 3 | Download the models & data | Read the report


Read the original on allenai.org »
