10 interesting stories served every morning and every evening.




1 466 shares, 54 trendiness

Helping Valve to Power Up Steam Devices

Last week, Valve stunned the com­puter gam­ing world by un­veil­ing three new gam­ing de­vices at once: the Steam Frame, a wire­less VR head­set; the Steam Machine, a gam­ing con­sole in the vein of a PlayStation or Xbox; and the Steam Controller, a hand­held game con­troller. Successors to the highly suc­cess­ful Valve Index and Steam Deck, these de­vices are set to be re­leased in the com­ing year.

Igalia has long worked with Valve on SteamOS, which will power the Machine and Frame, and is ex­cited to be con­tribut­ing to these new de­vices, par­tic­u­larly the Frame. The Frame, un­like the Machine or Deck which have x86 CPUs, runs on an ARM-based CPU.

Under nor­mal cir­cum­stances, this would mean that only games com­piled to run on ARM chips could be played on the Frame. In or­der to get around this bar­rier, a trans­la­tion layer called FEX is used to run ap­pli­ca­tions com­piled for x86 chips (which are used in nearly all gam­ing PCs) on ARM chips by trans­lat­ing the x86 ma­chine code into ARM64 ma­chine code.

“If you love video games, like I do, working on FEX with Valve is a dream come true,” said Paulo Matos, an engineer with Igalia’s Compilers Team. Even so, the challenges can be daunting, because making sure the translation is working often requires manual QA rather than automated testing. “You have to start a game; sometimes the error shows up in the colors or sound, or how the game behaves when you break down the door in the second level. Just debugging this can take a while,” said Matos. “For optimization work I did early last year, I used a game called Psychonauts to test it. I must have played the first 3 to 4 minutes of the game many, many times for debugging. Looking at my history, Steam tells me I played it for 29 hours, but it was always the first few minutes, nothing else.”

Beyond the CPU, the Qualcomm Adreno 750 GPU used in the Steam Frame in­tro­duced its own set of chal­lenges when it came to run­ning desk­top games, and other com­plex work­loads, on these de­vices. Doing so re­quires a rock-solid Vulkan dri­ver that can en­sure cor­rect­ness, elim­i­nat­ing ma­jor ren­der­ing bugs, while main­tain­ing high per­for­mance. This is a very dif­fi­cult com­bi­na­tion to achieve, and yet that’s ex­actly what we’ve done for Valve with Mesa3D Turnip, a FOSS Vulkan dri­ver for Qualcomm Adreno GPUs.

Before we started our work, crit­i­cal op­ti­miza­tions such as LRZ (which you can learn more about from our blog post here) or the au­to­tuner (and its sub­se­quent over­haul) weren’t in place. Even worse, there was­n’t sup­port for the Adreno 700-series GPUs at all, which we even­tu­ally added along with sup­port for tiled ren­der­ing.

“We implemented many Vulkan extensions and reviewed numerous others,” said Danylo Piliaiev, an engineer on the Graphics Team. “Over the years, we ensured that D3D11, D3D12, and OpenGL games rendered correctly through DXVK, vkd3d-proton, and Zink, investigating many rendering issues along the way. We achieved higher correctness than the proprietary driver and, in many cases, Mesa3D Turnip is faster as well.”

We’ve worked with many wonderful people from Valve, Google, and other companies to iterate on the Vulkan driver over the years in order to introduce new features, bug fixes, and performance improvements, as well as debugging workflows. Some of those people decided to join Igalia later on, such as our colleague and Graphics Team developer Emma Anholt: “I’ve been working on Mesa for 22 years, and it’s great to have a home now where I can keep doing that work, across hardware projects, where the organization prioritizes the work experience of its developers and empowers them within the organization.”

Valve’s support in all this cannot be overstated, either. Their choice to build their devices using open software like Mesa3D Turnip and FEX means they’re committed to working on and supporting improvements and optimizations that become available to anyone who uses the same open-source projects.

“We’ve received a lot of positive feedback about significantly improved performance and fewer rendering glitches from hobbyists who use these projects to run PC games on Android phones as a result of our work,” said Dhruv Mark Collins, another Graphics Team engineer working on Turnip. “And it goes both ways! We’ve caught a couple of nasty bugs because of that widespread testing, which really emphasizes why the FOSS model is beneficial for everyone involved.”

An in­ter­est­ing area of graph­ics dri­ver de­vel­op­ment is all the com­piler work that is in­volved. Vulkan dri­vers such as Mesa3D Turnip need to process shader pro­grams sent by the ap­pli­ca­tion to the GPU, and these pro­grams gov­ern how pix­els in our screens are shaded or col­ored with geom­e­try, tex­tures, and lights while play­ing games. Job Noorman, an en­gi­neer from our Compilers Team, made sig­nif­i­cant con­tri­bu­tions to the com­piler used by Mesa3D Turnip. He also con­tributed to the Mesa3D NIR shader com­piler, a com­mon part that all Mesa dri­vers use, in­clud­ing RADV (most pop­u­larly used on the Steam Deck) or V3DV (used on Raspberry Pi boards).

As is normal for Igalia, while we focused on delivering results for our customer, we also made our work as widely useful as possible. For example: “While our target throughout our work has been the Snapdragon 8 Gen 3 that’s in the Frame, much of our work extends back through years of Snapdragon hardware, and we regression test it to make sure it stays Vulkan conformant,” said Anholt. This means that Igalia’s work for the Frame has consistently passed Vulkan’s Conformance Test Suite (CTS) of over 2.8 million tests, some of which Igalia is involved in creating.

Our very own Vulkan CTS ex­pert Ricardo García says:

Igalia and other Valve con­trac­tors ac­tively par­tic­i­pate in sev­eral ar­eas in­side the Khronos Group, the or­ga­ni­za­tion main­tain­ing and de­vel­op­ing graph­ics API stan­dards like Vulkan. We con­tribute spec­i­fi­ca­tion fixes and feed­back, and we are reg­u­larly in­volved in the de­vel­op­ment of many new Vulkan ex­ten­sions. Some of these end up be­ing crit­i­cal for game de­vel­op­ers, like mesh shad­ing. Others en­sure a smooth and ef­fi­cient trans­la­tion of other APIs like DirectX to Vulkan, or help take ad­van­tage of hard­ware fea­tures to en­sure ap­pli­ca­tions per­form great across mul­ti­ple plat­forms, both mo­bile like the Steam Frame or desk­top like the Steam Machine. Having Vulkan CTS cov­er­age for these new ex­ten­sions is a crit­i­cal step in the re­lease process, help­ing make sure the spec­i­fi­ca­tion is clear and dri­vers im­ple­ment it cor­rectly, and Igalia en­gi­neers have con­tributed mil­lions of source code lines and tests since our col­lab­o­ra­tion with Valve started.

A huge challenge we faced in moving forward with development was ensuring that we didn’t introduce regressions: small, innocent-seeming changes can completely break rendering on games in a way that even CTS might not catch. What automated testing could be done was often quite constrained, but Igalians found ways to push through the barriers. “I made a continuous integration test to automatically run single-frame captures of a wide range of games spanning D3D11, D3D9, D3D8, Vulkan, and OpenGL APIs,” said Piliaiev, about the development covered in his recent XDC 2025 talk, “ensuring that we don’t have rendering or performance regressions.”

Looking ahead, Igalia’s work for Valve will continue to deliver benefits to the wider Linux gaming ecosystem. For example, the Steam Frame, as a battery-powered VR headset, needs to deliver high performance within a limited power budget. One way to address this is to create a more efficient task scheduler, which is something Changwoo Min of Igalia’s Kernel Team has been working on. As he says, “I have been developing a customized CPU scheduler for gaming, named LAVD: Latency-criticality Aware Virtual Deadline scheduler.”

In general terms, the scheduler automatically identifies critical tasks and dynamically boosts their deadlines to improve responsiveness. Most task schedulers don’t take energy consumption into account, but the Rust-based LAVD is different. “LAVD makes scheduling decisions considering each chip’s performance versus energy trade-offs. It measures and predicts the required computing power on the fly, then selects the best set of CPUs to meet that demand with minimal energy consumption,” said Min.
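As a rough sketch of that selection idea (a toy JavaScript illustration only, not LAVD’s actual code or heuristics), the decision amounts to: given a predicted demand, pick the most energy-efficient set of CPUs that can cover it.

// Toy sketch: cpus is a list of { id, capacity, watts }; demand is the predicted load.
function pickCpus(cpus, demand) {
  // Consider the most efficient cores (performance per watt) first.
  const byEfficiency = [...cpus].sort(
    (a, b) => (b.capacity / b.watts) - (a.capacity / a.watts)
  );
  const selected = [];
  let covered = 0;
  for (const cpu of byEfficiency) {
    if (covered >= demand) break;   // enough capacity: stop adding (and powering) cores
    selected.push(cpu);
    covered += cpu.capacity;
  }
  return selected;                  // critical tasks are then scheduled onto these cores
}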

One of our other kernel engineers, Melissa Wen, has been working on AMD kernel display drivers to maintain good color management and HDR support for SteamOS across AMD hardware families, both for the Steam Deck and the Steam Machine. This is especially important for the newer display hardware in the Steam Machine, which features some notable differences in color capabilities aimed at more powerful and efficient color management, and which necessitated driver work.

…and that’s a wrap! We will con­tinue our ef­forts to­ward im­prov­ing fu­ture ver­sions of SteamOS, and with a part­ner as strongly sup­port­ive as Valve, we ex­pect to do more work to make Linux gam­ing even bet­ter. If any of that sounded in­ter­est­ing and you’d like to work with us to tackle tricky prob­lems of your own, please get in touch!

...

Read the original on www.igalia.com »

2 448 shares, 39 trendiness

Open-Source, Offline & Private Portfolio Tracker

A beautiful, private investment tracker that runs on all your devices.

A beau­ti­ful port­fo­lio tracker that re­spects your pri­vacy and your data

Your data never leaves your de­vice. As an open-source pro­ject, we pri­or­i­tize se­cu­rity and trans­parency. Free to use with op­tional one-time pay­ment. No sub­scrip­tions or re­cur­ring fees.

THE ESSENTIALS YOU NEED TO TRACK YOUR WEALTH

No More Messy Spreadsheets or Privacy Concerns - Just You and Your Secure, Personal Wealth Companion Application

* Gather all your investment and savings accounts in one place. See everything at a glance, from stocks to savings! Import your CSV statements from your broker or bank. See all your accounts in one place.

* Get a clear picture of what’s in your portfolio. Stocks, ETFs, or Cryptocurrencies - know what you have and how it’s performing. Monitor how your investments are doing.

* See how your investments stack up, all in one place. Compare your accounts side by side, check if you are beating the S&P 500, and track your favorite ETFs without the hassle. No fancy jargon - just clear, useful charts that help you understand how your money is really doing. See which accounts are doing best. Check how you stack up against some popular indexes and ETFs.

* Monitor dividends and interest income across your entire portfolio. Get a clear view of your passive income streams, helping you make informed decisions about your investments. Keep an eye on interest earnings.

* Track your accounts’ holdings and performance over time. See how a particular account is performing, and how it’s changing over time.

* Set your savings targets clearly. Distribute your funds across these objectives, assigning a specific percentage to each. Keep an eye on your progress.

* Stay on top of your contribution limits for tax-advantaged accounts like IRAs, 401(k)s, or TFSAs. Track your available contribution room and avoid over-contributing.

Track and an­a­lyze in­vest­ment fees across your port­fo­lio with de­tailed an­a­lyt­ics and in­sights

Track your in­vest­ment progress to­wards tar­get amounts with a vi­sual rep­re­sen­ta­tion

...

Read the original on wealthfolio.app »

3 399 shares, 26 trendiness

How a French judge was digitally cut off by the USA

Digital sovereignty has been much discussed in Europe in recent weeks, most recently during a German-French summit in Berlin. Just how deep the dependence on the USA in the digital sector runs is currently being experienced first-hand by a French judge. Nicolas Guillou, one of the six judges and three prosecutors of the International Criminal Court (ICC) sanctioned by the USA in August, described his current situation in a recent interview as digital time travel back to the 1990s, before the internet age.

The US sanctions were prompted by the arrest warrants against Israeli Prime Minister Benjamin Netanyahu and Defense Minister Yoav Gallant, who were indicted for war crimes and crimes against humanity in the context of the destruction of the Gaza Strip. The USA condemned this decision by the court, whereupon the US Treasury Department sanctioned six judges and three prosecutors.

In Guillou’s daily life, this means that he is ex­cluded from dig­i­tal life and much of what is con­sid­ered stan­dard to­day, he told the French news­pa­per Le Monde. All his ac­counts with US com­pa­nies such as Amazon, Airbnb, or PayPal were im­me­di­ately closed by the providers. Online book­ings, such as through Expedia, are im­me­di­ately can­celed, even if they con­cern ho­tels in France. Participation in e-com­merce is also prac­ti­cally no longer pos­si­ble for him, as US com­pa­nies al­ways play a role in one way or an­other, and they are strictly for­bid­den to en­ter into any trade re­la­tion­ship with sanc­tioned in­di­vid­u­als.

He also de­scribes the im­pact on par­tic­i­pat­ing in bank­ing as dras­tic. Payment sys­tems are blocked for him, as US com­pa­nies like American Express, Visa, and Mastercard have a vir­tual mo­nop­oly in Europe. He also de­scribes the rest of bank­ing as se­verely re­stricted. For ex­am­ple, ac­counts with non-US banks have also been par­tially closed. Transactions in US dol­lars or via dol­lar con­ver­sions are for­bid­den to him.

Guillou’s case shows how strong the USA’s influence in the tech sector is and how few options he has to circumvent it. All this at a time when an account with a US tech company is considered a matter of course in more and more places.

The French judge ad­vo­cates for Europe to gain more sov­er­eignty in the dig­i­tal and bank­ing sec­tors. Without this sov­er­eignty, the rule of law can­not be guar­an­teed, he warns. At the same time, he calls on the EU to ac­ti­vate an ex­ist­ing block­ing reg­u­la­tion (Regulation (EC) No 2271/96) for the International Criminal Court, which pre­vents third coun­tries like the USA from en­forc­ing sanc­tions in the EU. EU com­pa­nies would then no longer be al­lowed to com­ply with US sanc­tions if they vi­o­late EU in­ter­ests. Companies that vi­o­late this would then be li­able for dam­ages.

...

Read the original on www.heise.de »

4 364 shares, 15 trendiness

Olmo 3: Charting a path through the model flow to lead open-source AI

Language mod­els are of­ten treated as snap­shots—brief cap­tures of a long and care­fully cu­rated de­vel­op­ment process. But shar­ing only the end re­sult ob­scures the rich con­text needed to mod­ify, adapt, and ex­tend a mod­el’s ca­pa­bil­i­ties. Many mean­ing­ful ad­just­ments re­quire in­te­grat­ing do­main-spe­cific knowl­edge deep within the de­vel­op­ment pipeline, not merely at the fi­nal stage. To truly ad­vance open AI de­vel­op­ment and re­search, the en­tire model flow — not just its end­point — should be ac­ces­si­ble and cus­tomiz­able. The model flow is the full life­cy­cle of an LM: every stage, check­point, dataset, and de­pen­dency re­quired to cre­ate and mod­ify it. By ex­pos­ing this com­plete process, the goal is to en­gen­der greater trust and en­able more ef­fec­tive adap­ta­tion, col­lab­o­ra­tion, and in­no­va­tion.

With to­day’s re­lease of Olmo 3, we’re em­pow­er­ing the open source com­mu­nity with not only state-of-the-art open mod­els, but the en­tire model flow and full trace­abil­ity back to train­ing data.

At its cen­ter is Olmo 3-Think (32B), the best fully open 32B-scale think­ing model that for the first time lets you in­spect in­ter­me­di­ate rea­son­ing traces and trace those be­hav­iors back to the data and train­ing de­ci­sions that pro­duced them. Olmo 3 is a fam­ily of com­pact, dense mod­els at 7 bil­lion and 32 bil­lion pa­ra­me­ters that can run on every­thing from lap­tops to re­search clus­ters.

Olmo 3-Base (7B, 32B) is our most powerful base model yet. When evaluated on our expanded, diverse evaluation suite, Olmo 3-Base delivers the strongest performance among fully open base models — where training data, code, and weights are all publicly available, like Stanford’s Marin and Swiss AI’s Apertus — and achieves competitive performance with some of the best open-weights base models of comparable size and architecture, including Qwen 2.5 and Gemma 3. Achieving strong results in programming, reading comprehension, and math problem solving, Olmo 3-Base maintains performance at extended context lengths (~up to 65K tokens) — providing a versatile foundation for continued pretraining, targeted fine-tuning, and reinforcement learning, and making it easy to build in specialized capabilities like reasoning, tool use (function calling), and instruction following through post-training.

Olmo 3-Think (7B, 32B) is our flagship post-trained reasoning set built on Olmo 3-Base. At a time when few organizations are releasing truly open models at this scale, Olmo 3-Think (32B) serves as a workhorse for RL research, long-horizon reasoning, and other advanced experiments that require substantial compute. On our suite of reasoning benchmarks (discussed below), it’s the strongest fully open thinking model we’re aware of, narrowing the gap to the best open-weight models of similar scale — such as Qwen 3 32B — while training on roughly 6x fewer tokens. Olmo 3-Think (7B) brings the same design and training approach to an even more efficient form factor, surfacing intermediate thinking steps for complex prompts while making open, inspectable reasoning accessible on more modest hardware.

Olmo 3-Instruct (7B) is a chat and quick-response focused post-train of Olmo 3-Base that handles multi-turn, instruction-following, tool use, and more. In our evaluations, it matches or outperforms open-weight models including Qwen 2.5, Gemma 3, and Llama 3.1, and narrows the gap with Qwen 3 model families at a similar scale — delivering a strong, fully open alternative for high-quality conversational and tool-using agents.

Olmo 3-RL Zero (7B) is a fully open reinforcement learning pathway built on Olmo 3-Base, designed to bootstrap complex reasoning behaviors and enable clear benchmarking of RL algorithms. We release four series of checkpoints from domain-focused training on math, code, instruction following, and general chat, enabling careful study of reinforcement learning with verifiable rewards (RLVR).

Instead of a sin­gle set of frozen weights, Olmo 3 of­fers mul­ti­ple, fully doc­u­mented paths through de­vel­op­ment: the Instruct path for every­day chat and tool use, the RL Zero path for RL ex­per­i­men­ta­tion from base mod­els, and the Think/reasoning path for mod­els that lever­age in­fer­ence-time scal­ing to un­lock com­plex rea­son­ing and agen­tic be­hav­iors. Each path is a con­crete ex­am­ple of how to shape be­hav­ior from the same base model, and you’re free to fork or remix them—start with Olmo 3-Base, ex­plore your own su­per­vised fine-tun­ing (SFT) or di­rect pref­er­ence op­ti­miza­tion (DPO) recipe for in­struct-style use cases, or plug in a new RL ob­jec­tive to probe dif­fer­ent trade­offs. The flow it­self be­comes a rich, reusable ob­ject—not just a record of how we built Olmo 3, but a scaf­fold for how you can build your own sys­tems.


The Olmo 3 check­points we’re re­leas­ing rep­re­sent our ini­tial paths tar­get­ing our goals around rea­son­ing, tool use, and gen­eral ca­pa­bil­i­ties — we have ex­cit­ing plans for other ways to lever­age Olmo 3-Base 32B. But be­cause we’re re­leas­ing the en­tire flow, you can in­ter­vene at any point: swap in do­main-spe­cific data dur­ing mid-train­ing, ad­just post-train­ing for your use case, or build on an ear­lier check­point that bet­ter suits your needs.

As with Olmo and Olmo 2, we’re re­leas­ing all com­po­nents of the Olmo 3 flow — data, code, model weights, and check­points — un­der per­mis­sive open source li­censes.

Try Olmo 3 on the Ai2 Playground | Use Olmo 3 via OpenRouter | Download the mod­els & data | Read the re­port

We run the Olmo 3 check­points through a broad, up­dated bench­mark suite, group­ing dozens of in­dus­try-stan­dard tasks (plus a few new ones we in­tro­duce) into sev­eral ca­pa­bil­ity clus­ters. Together, the clus­tered suite and these held-out tasks give us a ca­pa­bil­ity pro­file of Olmo 3—a clear pic­ture of how well it solves math prob­lems, codes, uses tools, an­swers gen­eral-knowl­edge ques­tions, and more.

At a high level, the Olmo 3 fam­ily de­liv­ers the strongest fully open base and think­ing mod­els we’re aware of. Olmo 3-Base 32B out­per­forms other fully open base mod­els, and Olmo 3-Think 32B emerges as the strongest fully open think­ing model.

Our re­sults were made pos­si­ble by rig­or­ous data cu­ra­tion at every stage of train­ing, a care­fully de­signed train­ing recipe for each model, and a set of new al­go­rith­mic and in­fra­struc­ture ad­vances across data pro­cess­ing, train­ing, and re­in­force­ment learn­ing. We also in­tro­duce an en­hanced re­in­force­ment learn­ing frame­work that guides the de­vel­op­ment of our mod­els and is par­tic­u­larly es­sen­tial for our think­ing mod­els. To de­sign the train­ing recipe and co­or­di­nate tar­geted im­prove­ments across a wide range of ca­pa­bil­i­ties at each stage of the model train­ing pipeline, our de­vel­op­ment frame­work bal­ances dis­trib­uted in­no­va­tion with cen­tral­ized eval­u­a­tion.

Olmo 3-Base, with a training pipeline that first focuses on broad coverage over diverse text, code, and math, then concentrates on harder distributions to sharpen programming, quantitative reasoning, and reading comprehension, is clearly the strongest set of fully open base models in our evaluations. It’s also arguably the best 32B model in the entire ecosystem of models with open weights, performing impressively in programming, reading comprehension, math problem solving, and long-context benchmarks like RULER, which tests information retrieval from lengthy texts. Olmo 3-Base (7B) and Olmo 3-Base (32B) maintain quality at extended context lengths and integrate cleanly with RL workflows, providing a robust foundation for continued pretraining and post-training.

Olmo 3-Think, which turns the Base into a reasoning model by training on multi-step problems spanning math, code, and general problem solving, then running the thinking SFT → thinking DPO → RLVR model flow to elicit high-quality reasoning traces, competes with or exceeds several open-weight reasoning models of similar sizes. On math benchmarks, Olmo 3-Think (7B) matches Qwen 3 8B on MATH and comes within a few points on AIME 2024 and 2025, and also leads all comparison models on HumanEvalPlus for coding — performing strongly on MBPP and LiveCodeBench to demonstrate particular strength in code-intensive reasoning. On broader reasoning tasks like BigBench Hard and AGI Eval English, Olmo 3-Think (7B) remains competitive with Qwen 3 8B reasoning and Qwen 3 VL 8B Thinker while staying fully open and slightly smaller.

For the 32B model, Olmo 3-Think scales these trends up and be­comes one of the strongest fully open rea­son­ing mod­els in its class. Olmo 3-Think (32B) ei­ther wins or sits within roughly two points of the best open-weight model on MATH, OMEGA, BigBenchHard, HumanEvalPlus, PopQA, and IFEval. It ties Qwen 3 VL 32B Thinking for the top score on the OMEGA suite while stay­ing clearly ahead of Gemma 3 27B Instruct and com­pet­i­tive with DeepSeek R1 Distill 32B on math and rea­son­ing. On broader knowl­edge and QA, Olmo 3-Think (32B) is ef­fec­tively neck-and-neck with the Qwen 3 mod­els on PopQA. And in in­struc­tion fol­low­ing, Olmo 3-Think (32B) tops this sub­set on IFEval and re­mains solid on IFBench and AlpacaEval 2 LC—offering a strong de­fault for rea­son­ing work­loads at the 32B scale.

Olmo 3-Instruct, which pro­duces shorter se­quences than the cor­re­spond­ing Olmo 3-Think mod­els to im­prove in­fer­ence ef­fi­ciency and is de­signed to fo­cus on gen­eral chat, tool use, and syn­thetic data gen­er­a­tion, out­per­forms com­pa­ra­bly-sized open-weight mod­els. Olmo 3-Instruct ties or sur­passes Qwen 2.5, Gemma 3, and Llama 3.1 in our eval­u­a­tions, and com­petes with the Qwen 3 fam­ily at sim­i­lar scale, de­liv­er­ing strong func­tion call­ing per­for­mance and in­struc­tion-fol­low­ing ca­pa­bil­i­ties in a fully open 7B model.

Olmo 3 uses a de­coder-only trans­former ar­chi­tec­ture and multi-stage train­ing pipeline. Pretraining runs in three stages—an ini­tial large-scale train­ing run that builds broad ca­pa­bil­i­ties; a mid-train­ing phase that fo­cuses on harder ma­te­r­ial like math, code, and read­ing com­pre­hen­sion; and a fi­nal long-con­text ex­ten­sion stage that trains the model on very long doc­u­ments. Together with ar­chi­tec­tural en­hance­ments, this yields a more ca­pa­ble, ef­fi­cient base for the Olmo 3 fam­ily.

Post-training then spe­cial­izes the pre­trained model for dif­fer­ent use cases. Building on Olmo 2, each path­way fol­lows a three-stage recipe — SFT, pref­er­ence tun­ing with DPO, and RLVR — but in Olmo 3, we ex­pose this as a fully doc­u­mented model flow with com­plete cus­tomiza­tion over each train­ing stage and dataset mix.

Instead of re­leas­ing only the fi­nal weights, we pro­vide check­points from each ma­jor train­ing mile­stone: the base pre­trained model, the mid-trained model af­ter tar­geted skill en­hance­ment, the long-con­text-ex­tended ver­sion, plus post-train­ing check­points for the Olmo 3-Think, Olmo 3-Instruct, and Olmo 3-RL Zero flows. You can study how ca­pa­bil­i­ties emerge over time, run ab­la­tions on spe­cific stages, and fork the model at what­ever point best fits your data, com­pute, and goals.

Compared to Olmo 2, we scaled data col­lec­tion and sig­nif­i­cantly strength­ened our dataset cu­ra­tion meth­ods. Continuing our com­mit­ment to full trans­parency, we’re re­leas­ing sev­eral new, higher-qual­ity datasets that cover every stage of base model train­ing and post-train­ing—from ini­tial learn­ing to spe­cial­ized skills like com­plex rea­son­ing and long-con­text un­der­stand­ing. This means any­one can see ex­actly what data shaped the mod­el’s ca­pa­bil­i­ties, re­pro­duce our re­sults, and reuse these datasets to train their own AI sys­tems.

Olmo 3 is pre­trained on Dolma 3, a new ~9.3-trillion-token cor­pus drawn from web pages, sci­ence PDFs processed with olmOCR, code­bases, math prob­lems and so­lu­tions, and en­cy­clo­pe­dic text. From this pool, we con­struct Dolma 3 Mix, a 5.9-trillion-token (~6T) pre­train­ing mix with a higher pro­por­tion of cod­ing and math­e­mat­i­cal data than ear­lier Dolma re­leases, plus much stronger de­con­t­a­m­i­na­tion via ex­ten­sive dedu­pli­ca­tion, qual­ity fil­ter­ing, and care­ful con­trol over data mix­ing. We fol­low es­tab­lished web stan­dards in col­lect­ing train­ing data and don’t col­lect from sites that ex­plic­itly dis­al­low it, in­clud­ing pay­walled con­tent.

On top of this, we in­tro­duce two Dolma 3-based mixes for later stages of base model train­ing. Dolma 3 Dolmino is our mid-train­ing mix: 100B train­ing to­kens sam­pled from a ~2.2T-token pool of high-qual­ity math, sci­ence, code, in­struc­tion-fol­low­ing, and read­ing-com­pre­hen­sion data, in­clud­ing rea­son­ing traces that also en­able RL di­rectly on the base model. Dolma 3 Longmino is our long-con­text mix: ~50B train­ing to­kens drawn from a 639B-token pool of long doc­u­ments com­bined with mid-train­ing data to teach Olmo 3 to track in­for­ma­tion over very long in­puts (like re­ports, logs, and multi-chap­ter doc­u­ments).

We also in­tro­duce Dolci, a new post-train­ing data suite tai­lored specif­i­cally for rea­son­ing, tool use, and in­struc­tion fol­low­ing. Dolci pro­vides sep­a­rate mixes for each stage of post-train­ing: SFT, DPO, and RLVR. For SFT, Dolci ag­gre­gates state-of-the-art datasets that ad­vance step-by-step rea­son­ing, tool use, and high-qual­ity con­ver­sa­tional be­hav­ior; for DPO, it sup­plies high-qual­ity con­trastive pref­er­ence data; and for RL, it in­cludes hard, di­verse prompts across math, cod­ing, in­struc­tion fol­low­ing, and gen­eral chat.

Together, Dolma 3 and Dolci give Olmo 3 a fully open data cur­ricu­lum from first to­ken to fi­nal post-trained check­point.

We pre­trained Olmo 3 on a clus­ter of up to 1,024 H100 GPUs; we achieved train­ing through­put of 7.7K to­kens per de­vice per sec­ond for Olmo 3-Base (7B). We mid-trained on 128 H100 GPUs, and post-trained on a set of 256 H100s.

For Olmo 3, build­ing on the work we did for Olmo 2, we were able to sig­nif­i­cantly im­prove the ef­fi­ciency of our post-train­ing code. By mov­ing SFT from Open Instruct (our post-train­ing code­base, pri­or­i­tiz­ing flex­i­bil­ity) to Olmo Core (our pre­train­ing code­base, de­signed to max­i­mize ef­fi­ciency), we in­creased through­put (tokens/second) by 8x. Similarly, by in­cor­po­rat­ing in-flight weight up­dates, con­tin­u­ous batch­ing, and a lot of thread­ing im­prove­ments, we made our RL train­ing 4x more ef­fi­cient—re­sult­ing in train­ing runs that are sig­nif­i­cantly cheaper and faster.

A note on our 32B mod­els: We be­lieve 32B sits in a sweet spot for re­search and tin­ker­ing. 32B mod­els are big enough to sup­port strong, com­pet­i­tive per­for­mance, but still small enough that a wide au­di­ence can fine-tune and de­ploy them on ac­ces­si­ble hard­ware.

For more de­tails, in­clud­ing ab­la­tions, please read our tech­ni­cal re­port.

A core goal of Olmo 3 is not just to open the model flow, but to make it ac­tion­able for peo­ple who want to un­der­stand and im­prove model be­hav­ior. Olmo 3 in­te­grates with OlmoTrace, our tool for trac­ing model out­puts back to train­ing data in real time.

For ex­am­ple, in the Ai2 Playground, you can ask Olmo 3-Think (32B) to an­swer a gen­eral-knowl­edge ques­tion, then use OlmoTrace to in­spect where and how the model may have learned to gen­er­ate parts of its re­sponse. This closes the gap be­tween train­ing data and model be­hav­ior: you can see not only what the model is do­ing, but why—and ad­just data or train­ing de­ci­sions ac­cord­ingly.

To fur­ther pro­mote trans­parency and ex­plain­abil­ity, we’re mak­ing every train­ing and fine-tun­ing dataset avail­able for down­load, all un­der a per­mis­sive li­cense that al­lows for cus­tom de­ploy­ment and reuse. The datasets come in a range of mixes to ac­com­mo­date dif­fer­ent stor­age and hard­ware con­straints, from sev­eral bil­lion to­kens all the way up to 6 tril­lion.

Our new tool­ing for data pro­cess­ing al­lows you to de-con­t­a­m­i­nate, to­k­enize, and de-du­pli­cate data in the same way we did for Olmo 3’s cor­pora. All the tool­ing is open source, en­abling you to repli­cate our train­ing curves or run con­trolled ab­la­tions across data mixes and ob­jec­tives.

Our Olmo util­i­ties and soft­ware cover the whole de­vel­op­ment cy­cle:

Among them is a toolkit for reproducible evals. It includes our brand-new eval collection OlmoBaseEval, which we used for Olmo 3 base model development.

Importantly, our tool­ing al­lows you to in­stru­ment com­plex tasks and an­a­lyze in­ter­me­di­ate traces to un­der­stand where the mod­els suc­ceed—or strug­gle. Because the Olmo 3 data recipes, train­ing pipeline, and check­points are open, in­de­pen­dent teams can con­nect model be­hav­ior back to mea­sur­able prop­er­ties.

Ready to de­ploy and use

Together, the Olmo 3 fam­ily makes it eas­ier to build trust­wor­thy fea­tures quickly, whether for re­search, ed­u­ca­tion, or ap­pli­ca­tions. By mak­ing every de­vel­op­ment step avail­able and in­spectable, we’re en­abling en­tirely new cat­e­gories of re­search. You can run ex­per­i­ments on any train­ing phase, un­der­stand ex­actly how dif­fer­ent tech­niques con­tribute to model ca­pa­bil­i­ties, and build on our work at what­ever stage makes sense for your pro­ject.

For sci­en­tists, the fully open flow ex­poses the mod­el’s in­ner work­ings, so you can in­stru­ment ex­per­i­ments across cod­ing, rea­son­ing, RL, and tool use.

If you care about AI you can study, au­dit, and im­prove, Olmo 3 is for you. Try the demos in the Ai2 Playground, ex­plore the doc­u­men­ta­tion, and build on the re­leased weights and check­points. Then tell us what you dis­cover—we in­vite the com­mu­nity to val­i­date, cri­tique, and ex­tend our find­ings.

True open­ness in AI is­n’t just about ac­cess—it’s about trust, ac­count­abil­ity, and shared progress. We be­lieve the mod­els shap­ing our fu­ture should be fully in­spectable, not black boxes. Olmo 3 rep­re­sents a dif­fer­ent path: one where any­one can un­der­stand, ver­ify, and build upon the AI sys­tems that in­creas­ingly in­flu­ence our world. This is what open-first means—not just re­leas­ing weights, but shar­ing the com­plete knowl­edge needed to ad­vance AI re­spon­si­bly: the flow.

Try Olmo 3 on the Ai2 Playground | Use Olmo 3 via OpenRouter | Download the mod­els & data | Read the re­port

...

Read the original on allenai.org »

5 331 shares, 32 trendiness

Did Qualcomm kill Arduino for good?

Six weeks ago, Qualcomm ac­quired Arduino. The maker com­mu­nity im­me­di­ately wor­ried that Qualcomm would kill the open-source ethos that made Arduino the lin­gua franca of hobby elec­tron­ics.

This week, Arduino pub­lished up­dated terms and con­di­tions and a new pri­vacy pol­icy, clearly rewrit­ten by Qualcomm’s lawyers. The changes con­firm the com­mu­ni­ty’s worst fears: Arduino is no longer an open com­mons. It’s be­com­ing just an­other cor­po­rate plat­form.

Here’s what’s at stake, what Qualcomm got wrong, and what might still be sal­vaged, draw­ing from com­mu­nity dis­cus­sions across maker fo­rums and sites.

What changed?

The new terms read like stan­dard cor­po­rate boil­er­plate: manda­tory ar­bi­tra­tion, data in­te­gra­tion with Qualcomm’s global ecosys­tem, ex­port con­trols, AI use re­stric­tions. For any other SaaS plat­form, this would be un­re­mark­able.

But Arduino is­n’t SaaS. It’s the foun­da­tion of the maker ecosys­tem.

The most dangerous change is that Arduino now explicitly states that using their platform grants you no patent licenses whatsoever. You can’t even argue one is implied.

This means Qualcomm could po­ten­tially as­sert patents against your pro­jects if you built them us­ing Arduino tools, Arduino ex­am­ples, or Arduino-compatible hard­ware.

And here’s the disconnect that’s baffling makers. Arduino’s IDE is licensed under AGPL. Their CLI is GPL v3. Both licenses explicitly require that you can reverse engineer the software. But the new Qualcomm terms explicitly forbid reverse engineering “the Platform.”

What’s re­ally go­ing on?

The community is trying to figure out Qualcomm’s actual intent. Are these terms just bad lawyering, with SaaS lawyers applying their standard template to cloud services, not realizing Arduino is different? Or is Qualcomm testing how much they can get away with before the community revolts? Or is this a first step toward locking down the ecosystem they just bought?

Some people point out that “the Platform” might only mean Arduino’s cloud services (forums, Arduino Cloud, Project Hub), not the IDE and CLI that everyone actually uses.

If that’s true, Qualcomm needs to say so, explicitly, and in plain language. Because library maintainers are likely wondering whether contributing to Arduino repos puts them at legal risk. And hardware makers are questioning whether “Arduino-compatible” is still safe to advertise.

Why Adafruit’s alarm mat­ters

Adafruit has been vo­cal about the dan­gers of this ac­qui­si­tion. Some dis­miss Adafruit’s crit­i­cism as self-serv­ing. After all, they sell com­pet­ing hard­ware and pro­mote CircuitPython. But that misses who Adafruit is.

Adafruit has been the moral au­thor­ity on open hard­ware for decades. They’ve made their liv­ing prov­ing you can build a suc­cess­ful busi­ness on open prin­ci­ples. When they sound the alarm, it’s not about com­pe­ti­tion, it’s about prin­ci­ple.

What they’re calling out isn’t that Qualcomm bought Arduino. It’s that Qualcomm’s lawyers fundamentally don’t understand what they bought. Arduino wasn’t just a microcontroller company. It was valuable because it was a commons. And you can’t apply enterprise legal frameworks to a commons without destroying it.

Adafruit gets this. They’ve built their en­tire busi­ness on this. That’s why their crit­i­cism car­ries weight.

What Qualcomm does­n’t seem to un­der­stand

Qualcomm prob­a­bly thought they were buy­ing an IoT hard­ware com­pany with a loyal user base.

They weren’t. They bought the IBM PC of the maker world.

Arduino’s value was never just the hard­ware. Their boards have been ob­so­lete for years. Their value is the stan­dard.

The Arduino IDE is the lin­gua franca of hobby elec­tron­ics.

Millions of mak­ers learned on it, even if they moved to other hard­ware. ESP32, STM32, Teensy, Raspberry Pi Pico — none of them are Arduino hard­ware, but they all work with the Arduino IDE.

Thousands of libraries are “Arduino libraries.” Tutorials assume Arduino. University curricula teach Arduino. When you search “how to read a sensor,” the answer comes back in Arduino code.

This is the ecosys­tem Qualcomm’s lawyers just dropped le­gal un­cer­tainty onto.

If Qualcomm’s lawyers start as­sert­ing con­trol over the IDE, CLI, or core li­braries un­der re­stric­tive terms, they will poi­son the en­tire maker ecosys­tem. Even peo­ple who never buy Arduino hard­ware are de­pen­dent on Arduino soft­ware in­fra­struc­ture.

Qualcomm didn’t just buy a company. They bought a commons. And now they are inadvertently taking steps that are destroying what made it valuable.

What are mak­ers sup­posed to do?

There has been some buzz about folks just leaving the Arduino environment behind. But Arduino IDE alternatives such as PlatformIO and VSCode are not in any way beginner friendly. If the Arduino IDE goes, then there’s a huge problem.

I re­mem­ber when Hypercard ended. There were al­ter­na­tives, but none so easy. I don’t think I re­ally coded again for al­most 20 years un­til I picked up the Arduino IDE (go fig­ure).

If some­thing hap­pens to the Arduino IDE, even if its de­vel­op­ment stalls or be­comes en­cum­bered, there’s no re­place­ment for that easy on­board­ing. We’d lose many promis­ing new mak­ers be­cause the first step be­came too steep.

The in­sti­tu­tional knowl­edge at risk

But leaving Arduino behind isn’t simple. The platform’s success depends on two decades of accumulated knowledge, such as countless Arduino tutorials on YouTube, blogs, and school curricula; open-source libraries that depend on Arduino compatibility; projects in production using Arduino tooling; and university programs built around Arduino as the teaching platform.

All of these de­pend on Arduino re­main­ing open and ac­ces­si­ble.

If Qualcomm decides to sunset the open Arduino IDE in favor of a locked-down “Arduino Pro” platform, or if they start asserting patent claims, or if uncertainty makes contributors abandon the ecosystem, all that knowledge becomes stranded.

It’s like Wikipedia going behind a paywall. The value isn’t just the content; it’s the trust that it remains accessible. Arduino’s value isn’t just the code; it’s the trust that the commons would stay open.

That trust is now gone. And once lost, it’s hard to get back.

Why this hap­pened (but does­n’t ex­cuse it)

Let’s be fair to Qualcomm: their lawyers were doing their jobs.

When you ac­quire a com­pany, you stan­dard­ize the le­gal terms; add manda­tory ar­bi­tra­tion to limit class ac­tion ex­po­sure; in­te­grate data sys­tems for com­pli­ance and au­dit­ing; add ex­port con­trols be­cause you sell to de­fense con­trac­tors; pro­hibit re­verse en­gi­neer­ing be­cause that’s in the tem­plate.

For most ac­qui­si­tions, this is just good cor­po­rate hy­giene. And Arduino, now part of a mega­corp, faces higher li­a­bil­i­ties than it did as an in­de­pen­dent en­tity.

But here’s what Qualcomm’s lawyers missed: Arduino is­n’t a nor­mal ac­qui­si­tion. The com­mu­nity is­n’t a cus­tomer base, it’s a com­mons. And you can’t ap­ply en­ter­prise SaaS le­gal frame­works to a com­mons with­out de­stroy­ing what made it valu­able.

This is tone-deaf­ness, not mal­ice. But the out­come is the same. A com­mu­nity that trusted Arduino no longer does.

Understanding why this hap­pened does­n’t ex­cuse it, but it might sug­gest what needs to hap­pen next.

What should have hap­pened and how to still save it

Qualcomm dropped le­gal boil­er­plate on the com­mu­nity with zero con­text and let peo­ple dis­cover the con­tra­dic­tions them­selves. That’s how you de­stroy trust overnight.

Qualcomm should have an­nounced the changes in ad­vance. They should have given the com­mu­nity weeks, not hours, to un­der­stand what’s chang­ing and why. They should have used plain-lan­guage ex­pla­na­tions, not just le­gal doc­u­ments.

Qualcomm can fix things by ex­plic­itly carv­ing out the open ecosys­tem. They should state clearly that the terms ap­ply to Arduino Cloud ser­vices, and the IDE, CLI, and core li­braries re­main un­der their ex­ist­ing open source li­censes.

We’d need concrete commitments, such as which repos stay open, which licenses won’t change, and what’s protected from future acquisition decisions. Right now we have vague corporate-speak about “supporting the community.”

Indeed, they could create some structural protection as well, by putting the IDE, CLI, and core libraries in a foundation that Qualcomm couldn’t unilaterally control (think the Linux Foundation model).

Finally, Qualcomm might wish to es­tab­lish some form of com­mu­nity gov­er­nance with real rep­re­sen­ta­tion and real power over the tools the com­mu­nity de­pends on.

The ac­qui­si­tion is done. The le­gal in­te­gra­tion is prob­a­bly in­evitable. But how it’s done de­ter­mines whether Arduino sur­vives as a com­mons or dies as just an­other Qualcomm sub­sidiary.

What’s next?

Arduino may be the toolset that made hobby elec­tron­ics ac­ces­si­ble to mil­lions. But that maker com­mu­nity built Arduino into what it be­came. Qualcomm’s ac­qui­si­tion has thrown that legacy into doubt. Whether through le­gal con­fu­sion, cor­po­rate tone-deaf­ness, or de­lib­er­ate strat­egy, the com­mu­ni­ty’s trust is bro­ken.

The next few months will re­veal whether this was a stum­ble or a strat­egy. If Qualcomm is­sues clar­i­fi­ca­tions, moves re­pos to some sort of gov­er­nance, and ex­plic­itly pro­tects the open tool­chain, then maybe this is sal­vage­able. If they stay silent, or worse, if IDE de­vel­op­ment slows or li­cense terms tighten fur­ther, then that’s a sig­nal to find al­ter­na­tives.

The ques­tion is­n’t whether the open hobby elec­tron­ics maker com­mu­nity sur­vives. It’s whether Arduino does.

...

Read the original on www.molecularist.com »

6 294 shares, 26 trendiness

We should all be using dependency cooldowns

TL;DR: Dependency cooldowns are a free, easy, and incredibly effective way to mitigate the large majority of open source supply chain attacks. More individual projects should apply cooldowns (via tools like Dependabot and Renovate) to their dependencies, and packaging ecosystems should invest in first-class support for cooldowns directly in their package managers.

“Supply chain security” is a serious problem. It’s also seriously overhyped, in part because dozens of vendors have a vested financial interest in convincing you that their framing of the underlying problem is (1) correct, and (2) worth your money.

What’s con­ster­nat­ing about this is that most open source sup­ply chain at­tacks have the same ba­sic struc­ture:

1. An attacker compromises a popular open source project, typically via a stolen credential or CI/CD vulnerability (such as “pwn requests” in GitHub Actions).

2. The attacker introduces a malicious change to the project and uploads it somewhere that will have maximum effect (PyPI, npm, GitHub releases, &c., depending on the target). At this point, the clock has started, as the attacker has moved into the public.

3. Users pick up the compromised version of the project via automatic dependency updates or a lack of dependency pinning.

4. Meanwhile, the aforementioned vendors are scanning public indices as well as customer repositories for signs of compromise, and provide alerts upstream (e.g. to PyPI). Notably, vendors are incentivized to report quickly and loudly upstream, as this increases the perceived value of their services in a crowded field.

5. Upstreams (PyPI, npm, &c.) remove or disable the compromised package version(s).

The key thing to observe is that the gap between (1) and (2) can be very large (weeks or months), while the gap between (2) and (5) is typically very small: hours or days. This means that, once the attacker has moved into the actual exploitation phase, their window of opportunity to cause damage is pretty limited.

We can see this with nu­mer­ous promi­nent sup­ply chain at­tacks over the last 18 months:

My takeaway from this: some windows of opportunity are bigger, but the majority of them are under a week long. Consequently, ordinary developers can avoid the bulk of these types of attacks by instituting cooldowns on their dependencies.

A “cooldown” is exactly what it sounds like: a window of time between when a dependency is published and when it’s considered suitable for use. The dependency is public during this window, meaning that “supply chain security” vendors can work their magic while the rest of us wait any problems out.

They’re empirically effective, per above. They won’t stop all attackers, but they do stymie the majority of high-visibility, mass-impact supply chain attacks that have become more common.

They’re incredibly easy to implement. Moreover, they’re literally free to implement in most cases: most people can use Dependabot’s functionality, Renovate’s functionality, or the functionality built directly into their package manager.

This is how sim­ple it is in Dependabot:
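(The original post’s snippet isn’t reproduced in this digest; what follows is a sketch of the kind of dependabot.yml entry it refers to. The cooldown block is the relevant part; the ecosystem, directory, and day count are placeholders, and the exact keys should be checked against the current Dependabot docs.)

version: 2
updates:
  - package-ecosystem: "npm"   # or pip, cargo, gomod, ...
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7          # ignore releases younger than a week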

Cooldowns enforce positive behavior from supply chain security vendors: vendors are still incentivized to discover and report attacks quickly, but are not as incentivized to emit volumes of blogspam about “critical” attacks on largely underfunded open source ecosystems.

In the very small sam­ple set above, 8/10 at­tacks had win­dows of op­por­tu­nity of less than a week. Setting a cooldown of 7 days would have pre­vented the vast ma­jor­ity of these at­tacks from reach­ing end users (and caus­ing knock-on at­tacks, which sev­eral of these were). Increasing the cooldown to 14 days would have pre­vented all but 1 of these at­tacks.

Cooldowns are, obviously, not a panacea: some attackers will evade detection, and delaying the inclusion of potentially malicious dependencies by a week (or two) does not fundamentally alter the fact that supply chain security is a social trust problem, not a purely technical one. Still, an 80-90% reduction in exposure through a technique that is free and easy seems hard to beat.

Related to the above, it’s unfortunate that cooldowns aren’t baked directly into more packaging ecosystems: Dependabot and Renovate are great, but even better would be if the package manager itself (as the source of ground truth) could enforce cooldowns directly (including for dependencies not introduced or bumped through automated flows).

...

Read the original on blog.yossarian.net »

7 265 shares, 4 trendiness

Motherboard PCIe Lanes

Choose your pref­er­ences above and click Generate to view boards

...

Read the original on mobomaps.com »

8 248 shares, 11 trendiness

WebAssembly from the Ground Up

Find out how WebAssembly works and why it’s a big deal. You’ll go from hand crafting bytecodes to writing a compiler for a toy programming language. No compiler expertise necessary. All the code is in the book; we’ll take you through it step by step. Forget the hype — get your hands dirty and see for yourself what WebAssembly is all about.

Buy now for $39*

You can read a free sample.

To really understand what WebAssembly is and what makes it special, you need to dive into the low-level details. We use a hands-on approach to teach you the core of WebAssembly: the instruction set and the module format. Since WebAssembly is primarily a compilation target, we think the best way to learn the details is by writing a compiler. (Really.) You’ll build a compiler that compiles a simple programming language down to WebAssembly. The focus is on WebAssembly, not the finer details of parsing. The compiler is built in JavaScript, using Ohm, a user-friendly parsing toolkit. No compiler expertise is necessary; all the code you need is provided in the book. Everything proceeds step by step — in small, logical increments.

Here’s a peek in­side the book — 15 chap­ters of tech­ni­cal con­tent, and two bonus chap­ters. Full source code (including tests) is avail­able for each mile­stone in every chap­ter. The code is MIT-licensed, so you’re free to use it in your own pro­jects.

extern func setPixel(x, y, r, g, b, a);

func sayHello() {
  print("Hello from Wafer!!")
}

func draw(width, height, t) {
  let y = 0;
  while y < height {
    let x = 0;
    while x < width {
      let r = t;
      let g = x;
      let b = y;
      let a = 255;
      setPixel(x, y, r, g, b, a);
      x := x + 1;
    }
    y := y + 1;
  }
  0
}

What ex­actly WebAssembly is, and what makes it unique.

How to in­stan­ti­ate a WebAssembly mod­ule in JavaScript and run its func­tions.

The binary module format, and how to hand craft a module from scratch (see the sketch below).

How to cre­ate a sim­ple com­piler with Ohm.

How to in­ter­act with the out­side world.

The WebAssembly se­cu­rity model: what makes it safe?
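As a taste of what hand crafting a module means in practice, here is a small, self-contained sketch (not taken from the book) that builds the raw bytes of a module exporting a single add function and runs it from JavaScript:

// Hand-crafted bytes for a module exporting add(a, b) = a + b on 32-bit integers.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                               // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,                               // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// Run as an ES module (browser or modern Node) so top-level await is available.
const { instance } = await WebAssembly.instantiate(bytes);
console.log(instance.exports.add(2, 3)); // 5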

Who should read this book?

The book is mainly targeted at experienced programmers. You don’t need to be an expert, but ideally you’ve been programming for a few years and are fluent in more than one language. For important topics that some readers might not be familiar with, we’ve included optional deep dive sections to get you up to speed. In order to understand the code, you’ll need at least intermediate knowledge of JavaScript, or a willingness to learn. We try to stick to “the good parts” and to avoid any advanced or obscure features. You do not need any previous experience with writing a compiler! Our compiler is based on Ohm, a framework that handles the lower-level details of parsing. This lets us keep the focus on WebAssembly. For some reason, many people believe that writing a compiler is a complex, esoteric task. But we hope to convince you that it’s really not.

The book I wish ex­isted when I started WAForth.

The spec isn’t very accessible; this looks like a great way to get into the details. I’ve really loved working my way through the book so far and learning more about WebAssembly in the process.

It’s basically the missing IKEA manual for people who wish to target the wasm binary representation directly.

Mariano is the co-founder of Gloodata and Instadeq data analy­sis and vi­su­al­iza­tion prod­ucts. He has a long his­tory of lan­guage- and com­piler-re­lated side pro­jects, in­clud­ing the pro­gram­ming lan­guages Efene and Interfix. In the past, he worked with IBM Research and on high-per­for­mance com­put­ing at Intel. Originally from Córdoba, Argentina, he now lives in Stuttgart, Germany. Patrick is a pro­gram­mer and in­de­pen­dent re­searcher based in Munich, Germany. He’s a co-cre­ator and the pri­mary main­tainer of Ohm, a user-friendly pars­ing toolkit for JavaScript. At the be­gin­ning of his ca­reer, he spent four years work­ing on the J9 Java VM at IBM. Since then, he’s worked at com­pa­nies like Google (on Chrome and Android), Lyft, and Sourcegraph.

If you’re not ready to buy the book yet, you can sub­scribe to our mail­ing list —  we’ll send pe­ri­odic up­dates with new con­tent and in­ter­est­ing WebAssembly tid­bits.

...

Read the original on wasmgroundup.com »

9 240 shares, 21 trendiness

You Can Now Make PS2 Games in JavaScript

I re­cently dis­cov­ered that you could make PS2 games in JavaScript. I’m not even kid­ding, it’s ac­tu­ally pos­si­ble. I was work­ing on a pro­ject and had my phone near my desk when I re­ceived a no­ti­fi­ca­tion. Upon fur­ther in­spec­tion, it came from itch.io which was a plat­form where I usu­ally pub­lished most of my web games.

Under my relatively popular Sonic infinite runner game, which was made in JavaScript and developed a year ago, I received a comment from someone with the username Dev Will who claimed they had made a PS2 version of my game and provided the GitHub repo of the source code.

At first, I thought that it was cool that someone took the time to remake my game for an old console that had a reputation for being hard to develop for and probably required them to write a lot of C or C++.

Out of cu­rios­ity, I opened up the GitHub repo and was as­ton­ished to see that the pro­ject was not us­ing even a bit of C++ or C but was en­tirely in JavaScript!

If making PS2 games were easier than I thought, since I could use a higher-level language like JavaScript, I could probably try making one in a reasonable amount of time and play it on a retro handheld or an actual PS2. How cool would that be?

This is where I knew I had to drop every­thing I was do­ing to in­ves­ti­gate how this was pos­si­ble.

Since the dev be­hind the pro­ject was Portuguese speak­ing (I as­sume they were ei­ther from Brazil or Portugal), they wrote the Readme of the repo in Portuguese which was a lan­guage I did not un­der­stand.

Fortunately, I was still able to de­ci­pher most of what was writ­ten be­cause I had done 3 years of Spanish in school and spoke French na­tively. Since Portuguese is a ro­mance lan­guage like Spanish and French, I was for­tu­nately not to­tally lost.

Anyway, the readme said that the engine used to make the PS2 version of my game was called AthenaEnv, with a conveniently placed link towards it so I could learn more.

As with the Sonic Infinite Runner PS2 pro­ject, this en­gine was also open source and its repo had a very de­tailed readme writ­ten in English.

To sum­ma­rize, Athena was not what we com­monly re­fer to as a game en­gine but an en­vi­ron­ment that also of­fered a JavaScript API for mak­ing games and apps for the PS2. It em­bed­ded a slightly mod­i­fied ver­sion of QuickJS which was a small and em­bed­d­a­ble JavaScript en­gine. This ex­plained how Athena was able to run JavaScript code on the PS2.

Therefore, Athena was the PS2 na­tive pro­gram writ­ten in C that took your JavaScript code, passed it through the QuickJS en­gine to in­ter­pret it and fi­nally, ran the rel­e­vant logic on the sys­tem.

What made it compelling was not just that it ran JS on the PS2, but that it offered an API suitable for game development. It covered:

* Rendering: allowing you to display sprites, text, shapes, etc. on the screen and animate them using a game loop.

* Asset loading: allowing you to load images, sounds, fonts, etc.

* Input handling: allowing you to receive player input from one controller or several, or even from a mouse and keyboard, since the PS2 supported these input methods.

* File handling: allowing you to write save files, among other things.

and the list goes on.

I noticed, however, that the level of abstraction offered by the API was similar to something like p5.js, the HTML canvas API, or Raylib. That meant you'd still need to implement collision detection, scene management, and so on yourself.
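
For instance, collision detection comes down to plain JavaScript you write yourself. A minimal sketch of an axis-aligned bounding-box check (the rect shape with x, y, width and height fields is my own convention here, not something Athena prescribes) could look like this:

// Returns true when two axis-aligned rectangles overlap.
// Each rect is expected to look like { x, y, width, height }.
function rectsOverlap(a, b) {
  return (
    a.x < b.x + b.width &&
    a.x + a.width > b.x &&
    a.y < b.y + b.height &&
    a.y + a.height > b.y
  );
}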

Now that I was familiar with Athena, I wanted to try to run the Sonic infinite runner “port” on an emulator. According to the project's readme, I needed to install PCSX2, which is the most popular emulator for the PS2, then go into the settings and, under the emulation tab, check the “Enable host filesystem” box.

Once this was done, I would need to open an athena.elf file and the game would start.

After in­stalling and con­fig­ur­ing the em­u­la­tor, I was ready to run the game. However, there was a prob­lem. I could not find the athena.elf file in the repo. It was nowhere to be found.

This is where I remembered to look at the “releases” section of the repo, because a lot of open source projects put executables there, especially if it's a mobile or desktop app project.

As expected, the zip attached in that section contained not only the athena.elf file but also an assets folder, a main.js file, an athena.ini file, and a src folder containing the rest of the game's code.

The athena.ini file allowed you to configure the entry point of the project. Here, the entry point was set to main.js, which explained how Athena would know what JavaScript to run. You could also configure whether to show Athena's logo before your game started by setting the boot_logo property to true.

boot_logo = true
dark_mode = true
default_script = "main.js"
audsrv = true

It now became evident why we needed to check the “Enable host filesystem” checkbox earlier. This was so that the emulator could allow Athena to access the assets folder and the source code that were essential for our game.

Anyway, I opened the athena.elf file in PCSX2 and, surprisingly, the game ran with no issues. It was amazing to see that a game I wrote for the web had been ported to the PS2 and that I was able to play it there with a controller.

Now, the game looked a bit blurry, which was expected since this was supposed to emulate a PS2, a console with a low native resolution. Fortunately, I was able to make things more comfortable by upping the resolution in the graphics settings of the emulator.

The dev process also seemed quite straightforward. You would only need to open the folder containing all the relevant files (athena.elf, main.js, etc.) in a code editor like VSCode and open athena.elf in the emulator. Now, you could make changes to your JS code and, once you were ready to test, go to the PCSX2 system tab and click on reset. This would restart the emulator and you could see the latest changes. While not as seamless as in web development with hot reloading, it still was a relatively fast iteration cycle.

It's at that moment that I knew I had to make a post about it and share this awesome project with you. However, I still felt uneasy about one thing.

Nowadays, peo­ple down­load PS2 games as .iso files. For most games, you only need one .iso file that you then open in your em­u­la­tor. Less tech­ni­cal peo­ple can there­fore more eas­ily en­joy these older ti­tles.

However, to run the Sonic infinite runner “port”, I not only needed to check a box in the settings but also needed the entire project folder containing the Athena executable and the source code.

I wondered if, instead, there was a way to distribute the game as a single .iso file. This is where I simply went back to the itch.io comment section and asked if it was possible.

After a thorough back and forth that continued on Discord, the process for converting my files into a single iso I could distribute was now clear.

To make an iso, you needed the following files:

* athena.elf: the Athena executable.

* A JS file acting as the entry point of the codebase.

* The rest of your source code, if it spans more than one file; oftentimes it's in a folder called src.

* Two files, one named ATHA_000.01 and the other SYSTEM.CNF, needed to make the iso bootable.

As an aside, in case you want to also get into JavaScript PS2 game de­vel­op­ment, you can check this tem­plate I made con­tain­ing all of the files needed.

Once you had all the files, you had to make a zip archive containing them all. One issue I had was that if I created a zip out of the folder containing the files, the resulting .iso would not work. However, if I selected the files one by one and then created the zip, I would experience no issues. This is something to keep in mind.

Now, the only step left was to convert the zip into an iso. As I was using a Mac, the only reliable way I found was to use the website mconverter.eu and let it do the conversion.

However, the is­sue with this web­site is that you’re lim­ited in the num­ber of con­ver­sions you can do per day be­fore they ask you to pay. Additionally, if your zip archive is above a cer­tain size, you’ll also have to watch an ad be­fore you can do the con­ver­sion.

If you end up find­ing a bet­ter way us­ing ei­ther a CLI tool, a down­load­able app or some other web­site, feel free to share it in the com­ment sec­tion.

Once you had the iso, you could open it up in the emulator like you would with other PS2 games. You also didn't need to check the “Enable host filesystem” option anymore, since all the relevant files were included in the iso.

If the game booted cor­rectly, then you now had a sin­gle file you could dis­trib­ute which was very con­ve­nient.

It was now time to get my feet wet. Before attempting anything too complicated, my goal was to create a simple “Hello World” example where I would:

* Load some as­sets (In my case a font and an im­age).

* Set up a game loop that would run every frame.

* Handle player in­put so I could move a sprite around.

Before I could achieve any of these sub-goals, in main.js, I first de­fined a few con­stants that I would end up need­ing.

const { width: SCREEN_WIDTH, height: SCREEN_HEIGHT } = Screen.getMode();
const SCALE = 2;
const SPEED = 3;
const FRAME_WIDTH = 32;
const FRAME_HEIGHT = 44;

This is where I learned that you could get the screen's width and height by using the Screen module, available globally like all Athena-provided modules (meaning that no import statements were needed), and calling its getMode method.

Then, to have a stable frame rate and accurate FPS counting, I needed to call the methods setVSync() and setFrameCounter().

Screen.setVSync(true); // makes framerate stable
Screen.setFrameCounter(true); // toggles frame counting and FPS collecting.

With the setup completed, I wanted to load the font I used in my Sonic game and a spritesheet of Sonic so that I could later animate it. I could achieve this by creating instances of the Font and Image classes offered by Athena.

const maniaFont = new Font("./assets/mania.ttf");
const sprite = new Image("./assets/sonic.png");

While I planned on han­dling player in­put later, I still needed a way to get the play­er’s con­troller so that my code could know when a given but­ton was pressed. This was made pos­si­ble by us­ing Athena’s Pads mod­ule.

// Get the first player controller
// First player -> 0, Second player -> 1
const pad = Pads.get(0);

Before I could create a game loop, I first needed to write the setup code required to animate my spritesheet. Since all the frames were contained within a single image, I had to find a way to tell Athena what part of the image was to be rendered.

To achieve this, I first spent some time getting familiar with the shape of the sprite object created earlier.

const sprite = new Image("./assets/sonic.png");

It turned out that we could set the width and the height of the sprite by mod­i­fy­ing the prop­er­ties of the ob­ject with the same names.

// for example
sprite.width = 30;
sprite.height = 40;

It also turned out that you could tell Athena what por­tion of the im­age to draw by set­ting the startx, endx, starty, endy prop­er­ties.

sprite.startx = 0;
sprite.endx = 32;
sprite.starty = 0;
sprite.endy = 44;

For example, if you had the following values: startx = 0, endx = 32, starty = 0 and endy = 44, you would get the first frame rendered. This is because, in the spritesheet, every frame has a width of 32 and a height of 44. Also, the origin (0,0) corresponds to the top-left corner of the spritesheet.
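
Since the frames sit side by side in a single row, the crop rectangle for any frame can also be computed rather than written out by hand. A tiny helper along these lines (my own convenience function, not part of Athena's API) captures the pattern:

// Returns the crop rectangle for the n-th 32x44 frame of a single-row spritesheet.
function frameRect(n) {
  return {
    startx: n * FRAME_WIDTH,
    endx: (n + 1) * FRAME_WIDTH,
    starty: 0,
    endy: FRAME_HEIGHT,
  };
}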

Now that I knew how to display a single frame within a wider image, I used the following logic to set up Sonic's run animation.

const spritePos = { x: SCREEN_WIDTH / 2, y: SCREEN_HEIGHT / 2 };
sprite.width = FRAME_WIDTH * SCALE;
sprite.height = FRAME_HEIGHT * SCALE;

// describes where each frame is located within the sprite.
const runAnimFrames = [
  { startx: 0, endx: 32, starty: 0, endy: 44 },
  { startx: 32, endx: 64, starty: 0, endy: 44 },
  { startx: 64, endx: 96, starty: 0, endy: 44 },
  { startx: 96, endx: 128, starty: 0, endy: 44 },
  { startx: 128, endx: 160, starty: 0, endy: 44 },
  { startx: 160, endx: 192, starty: 0, endy: 44 },
  { startx: 192, endx: 224, starty: 0, endy: 44 },
  { startx: 224, endx: 256, starty: 0, endy: 44 },
];

let frameIndex = 0;
const frameDuration = 30;
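
Before getting to the game loop itself, the per-frame update for this animation would amount to something like the following sketch. It's my own code, using only the pieces defined above (runAnimFrames, frameIndex, frameDuration, and the sprite's crop properties); how it gets called each frame depends on Athena's loop mechanism, which isn't covered in this excerpt.

// Sketch: advance Sonic's run animation by one tick.
// Meant to be called once per frame from the game loop.
let tickCount = 0;

function updateRunAnimation() {
  tickCount++;
  if (tickCount % frameDuration === 0) {
    frameIndex = (frameIndex + 1) % runAnimFrames.length;
  }
  const frame = runAnimFrames[frameIndex];
  sprite.startx = frame.startx;
  sprite.endx = frame.endx;
  sprite.starty = frame.starty;
  sprite.endy = frame.endy;
}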

...

Read the original on jslegenddev.substack.com »

10 221 shares, 11 trendiness

It's hard to build an oscillator

There’s an old elec­tron­ics joke that if you want to build an os­cil­la­tor, you should try build­ing an am­pli­fier. One of the fun­da­men­tal cri­te­ria for os­cil­la­tion is the pres­ence of sig­nal gain; with­out it, any os­cil­la­tion is bound to de­cay, just like a swing that’s no longer be­ing pushed must even­tu­ally come to a stop.

In reality, circuits with gain can occasionally oscillate by accident, but it's rather difficult to build a good analog oscillator from scratch. The most common category of oscillators you can find on the internet consists of circuits that don't work reliably. This is followed by approaches that require exotic components, such as center-tapped inductors or incandescent lightbulbs. The final group is made up of layouts you can copy, but probably won't be able to explain to a friend who doesn't have an EE degree.

In to­day’s ar­ti­cle, I wanted to ap­proach the prob­lem in a dif­fer­ent way. I’ll as­sume that you’re up-to-date on some of the key lessons from ear­lier ar­ti­cles: that you can tell the dif­fer­ence be­tween volt­age and cur­rent, have a ba­sic grasp of tran­sis­tors, and know what hap­pens when a ca­pac­i­tor is charged through a re­sis­tor. With this in mind, let’s try to con­struct an os­cil­la­tor that’s easy to un­der­stand, runs well, and has a pre­dictable op­er­at­ing fre­quency. Further, let’s do it with­out peek­ing at some­one else’s home­work.

The sim­plest form of an os­cil­la­tor is a de­vice that uses neg­a­tive feed­back to cy­cle back and forth be­tween two un­sta­ble states. To il­lus­trate, think of a ma­chine equipped with a light sen­sor and a ro­botic arm. In the dark, the ma­chine is com­pelled to stroll over to the wall switch and flip it on. If it de­tects light, an­other part of its pro­gram­ming takes over and tog­gles the switch off. The ma­chine is doomed to an end­less cy­cle of switch-flip­ping at a fre­quency dic­tated by how quickly it can process in­for­ma­tion and re­act.

At first blush, we should be able to repli­cate this op­er­at­ing prin­ci­ple with a sin­gle n-chan­nel MOSFET. After all, a tran­sis­tor can be used as an elec­tron­i­cally-op­er­ated switch:

The transistor turns on when the voltage between its gate terminal and the source leg (Vgs) exceeds a certain threshold, usually around 2 V. When the power supply first ramps up, the transistor is not conducting. With no current flowing through, there's no voltage drop across the resistor, so Vgs is pulled toward the positive supply rail. Once this voltage crosses about 2 V, the transistor begins to admit current. It stands to reason that the process shorts the bottom terminal of the resistor to the ground and causes Vgs to plunge to 0 V. If so, that would restart the cycle and produce a square wave on the output leg.

In prac­tice, this is not the be­hav­ior you’ll see. For a MOSFET, the re­la­tion­ship be­tween Vgs and the ad­mit­ted cur­rent (Id) is steep, but the de­vice is not a bi­nary switch:

In par­tic­u­lar, there is a cer­tain point on that curve, some­where in the vicin­ity of 2 V, that cor­re­sponds to the tran­sis­tor only ad­mit­ting a cur­rent of about 300 µA. From Ohm’s law, this cur­rent flow­ing through a 10 kΩ re­sis­tor will pro­duce a volt­age drop of 3 V. In a 5 V cir­cuit, this puts Vgs at 5 V - 3 V = 2 V. In other words, there ex­ists a sta­ble equi­lib­rium that pre­vents os­cil­la­tion. It’s akin to our ro­bot-op­er­ated light switch be­ing half-on.

To fix this issue, we need to build an electronic switch that has no stable midpoint. This is known as a Schmitt trigger, and a simple implementation is shown below:

To analyze the design, let's assume the circuit is running off Vsupply = 5 V. If the input signal is 0 V, the transistor on the left is not conducting, which pulls Vgs for the other MOSFET all the way to 5 V. That input allows nearly arbitrary currents to flow through the right branch of the circuit, making that current path more or less equivalent to a two-resistor voltage divider. We can calculate the midpoint voltage of the divider:
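
The divider expression can be reconstructed from the description: taking R2 as the drain resistor in the right branch and Rbottom as the shared source resistor (my label for it, since the schematic isn't reproduced here), the midpoint sits at roughly the 450 mV figure quoted a bit further down:

Vs ≈ Vsupply · Rbottom / (R2 + Rbottom) ≈ 450 mV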

This voltage is also propagated to the source terminal of the input transistor on the left. The actual Vth for the BS170 transistors in my possession is about 2.15 V, so for the input-side transistor to turn on, the supplied signal will need to exceed Vs + Vth ≈ 2.6 V in reference to the ground. When that happens, a large voltage drop appears across R1, reducing the Vgs of the output-side transistor below the threshold of conduction, and choking off the current in the right branch.

At this point, there's still current flowing through the common resistor on the bottom, but it's now increasingly sourced via the left branch. The left branch forms a new voltage divider; because R1 has a higher resistance than R2, Vs is gradually reduced, effectively bumping up Vgs for the left transistor and thus knocking it more firmly into conduction even if the input voltage remains constant. This is a positive feedback that gives the circuit no option to linger in a half-on state.

Once the tran­si­tion is com­plete, the volt­age drop across the bot­tom re­sis­tor is down from 450 mV to about 50 mV. This means that al­though the left tran­sis­tor first turned on when the in­put sig­nal crossed 2.6 V in ref­er­ence to the ground, it will not turn off un­til the volt­age drops all the way to 2.2 V — a 400 mV gap.

This cir­cuit lets us build what’s known as a re­lax­ation os­cil­la­tor. To do so, we only need to make two small tweaks. First, we need to loop an in­verted out­put sig­nal back onto the in­put; the most in­tu­itive way of do­ing this is to add an­other tran­sis­tor in a switch-like con­fig­u­ra­tion sim­i­lar to the failed de­sign of a sin­gle-tran­sis­tor os­cil­la­tor men­tioned ear­lier on. This build­ing block, marked on the left, out­puts Vsupply when the sig­nal routed to the gate ter­mi­nal is 0 V, and pro­duces roughly 0 V when the in­put is near Vsupply:

Next, to set a sen­si­ble os­cil­la­tion speed, we need to add a time de­lay, which can be ac­com­plished by charg­ing a ca­pac­i­tor through a re­sis­tor (middle sec­tion). The re­sis­tor needs to be large enough not to over­load the in­verter stage.

For the com­po­nent val­ues shown in the schematic, the cir­cuit should os­cil­late at a fre­quency of al­most ex­actly 3 kHz when sup­plied with 5 V:

The frequency is governed by how long it takes for the capacitor to move Δv = 400 mV between the two Schmitt threshold voltages: the “off” point at 2.2 V and the “on” point at 2.6 V.

Because the overall variation in capacitor voltage is small, we can squint our eyes and say that the voltage across the 100 kΩ resistor is nearly constant in every charge cycle. When the resistor is connected to the positive rail, V ≈ 5 V - 2.4 V ≈ 2.6 V. Conversely, when the resistor is connected to the ground, we get V ≈ 2.4 V. If the voltages across the resistor are nearly constant, so are the resulting capacitor currents:
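
Written out, the two currents follow straight from Ohm's law:

Icharge ≈ 2.6 V / 100 kΩ ≈ 26 µA
Idischarge ≈ 2.4 V / 100 kΩ ≈ 24 µA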

From the fun­da­men­tal ca­pac­i­tor equa­tion (Δv = I · t/​C), we can solve for the charg­ing time needed to move the volt­age by Δv = 400 mV; the re­sult is about 154 µs for the charg­ing pe­riod and 167 µs for the dis­charg­ing pe­riod. The sum is 321 µs, cor­re­spond­ing to a fre­quency of about 3.1 kHz — pretty close to real life.
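
The capacitor value isn't spelled out in the text above, but the quoted times imply roughly C ≈ 10 nF; taking that as an assumption, the arithmetic checks out:

tcharge = Δv · C / Icharge ≈ 0.4 V · 10 nF / 26 µA ≈ 154 µs
tdischarge = Δv · C / Idischarge ≈ 0.4 V · 10 nF / 24 µA ≈ 167 µs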

The cir­cuit can be sim­pli­fied to two tran­sis­tors at the ex­pense of read­abil­ity, but if you need an ana­log os­cil­la­tor with a lower com­po­nent count, an op­er­a­tional am­pli­fier is your best bet.

If you’re rusty on op-amps, I sug­gest paus­ing to re­view the ar­ti­cle linked in the pre­ced­ing para­graph. That said, to un­der­stand the next cir­cuit, all you need to know is that an op-amp com­pares two in­put volt­ages and that Vout swings to­ward the pos­i­tive rail if Vin+ ≫ Vin- or to­ward the neg­a­tive rail if Vin+ ≪ Vin-.

For simplicity, let's choose R1 = R2 = R3 and then look at the non-inverting (Vin+) input of the chip. What we have here is a three-way voltage divider: the signal on the non-inverting input is a simple average of three voltages: Vsupply (5 V), ground (0 V), and Vout. We don't know the value of Vout just yet, but it can only vary from 0 V to Vsupply, so the Vin+ signal will always stay between ⅓ · Vsupply and ⅔ · Vsupply.
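
With three equal resistors, the three-way divider reduces to a plain average:

Vin+ = (Vsupply + 0 V + Vout) / 3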

Next, let’s have a look at the in­vert­ing in­put (Vin-). When the cir­cuit is first pow­ered on, the ca­pac­i­tor C is­n’t charged, so Vin- sits at 0 V. Since the volt­age on the non-in­vert­ing in­put can’t be lower than ⅓ · Vsupply, this means that on power-on, Vin+ ≫ Vin-, send­ing the out­put volt­age to­ward the pos­i­tive rail. When Vout shoots up, it also bumps the Vin+ av­er­age to ⅔ · Vsupply.

Because Vout is now high, this starts the process of charging the capacitor through the bottom resistor (R). After a while, the capacitor voltage is bound to exceed ⅔ · Vsupply. The capacitor voltage is also hooked up to the amplifier's inverting input, and at that point, Vin- begins to exceed Vin+, nudging the output voltage lower. Stable equilibrium is not possible because this output voltage drop is immediately reflected in the three-way average present on the Vin+ leg, pulling it down and causing the difference between Vin- and Vin+ to widen. This positive feedback loop puts the amplifier firmly into the Vin+ ≪ Vin- territory.

At that point, Vout must drop to 0 V, thus low­er­ing the volt­age on the non-in­vert­ing leg to ⅓ · Vsupply. With Vout low, the ca­pac­i­tor starts dis­charg­ing through R, but it needs to travel from the cur­rent charge state of ⅔ · Vsupply all the way to ⅓ · Vsupply be­fore Vin- be­comes lower than Vin+ and the cy­cle is al­lowed to restart.

The con­tin­ued charg­ing and dis­charg­ing of the ca­pac­i­tor be­tween ⅓ · Vsupply and ⅔ · Vsupply re­sults in pe­ri­odic os­cil­la­tion. The cir­cuit pro­duces a square wave sig­nal with a pe­riod dic­tated by the value of C and R. The fre­quency of these os­cil­la­tions can be ap­prox­i­mated anal­o­gously to what we’ve done for the dis­crete-tran­sis­tor vari­ant ear­lier on. In a 5 V cir­cuit with R1 = R2 = R3, the ca­pac­i­tor charges and dis­charges by Δv ≈ 1.67 V. If R = 10 kΩ, then the quasi-con­stant ca­pac­i­tor charg­ing cur­rent is I ≈ 2.5 V / 10 kΩ ≈ 250 µA.

Knowing Δv and I, and as­sum­ing C = 1 µF, we can tap into the ca­pac­i­tor equa­tion (Δv = I · t/​C) to solve for t. The re­sult is 6.67 ms. This puts the charge-dis­charge roundtrip at 13.34 ms, sug­gest­ing a fre­quency of 75 Hz. The ac­tual mea­sure­ment is shown be­low:

The ob­served fre­quency is about 7% lower than pre­dicted: 70 in­stead of 75 Hz. Although I could pin this on com­po­nent tol­er­ances, a more hon­est ex­pla­na­tion is that at Δv ≈ 1.67 V, the con­stant-cur­rent ap­prox­i­ma­tion of the ca­pac­i­tor charg­ing process is stretched thin; the seg­ments in the bot­tom os­cil­lo­scope trace di­verge quite a bit from a straight line. Not to worry; to re­duce Δv, we just need to bump up the value of R3. If we switch to 47 kΩ and keep every­thing else the same, the delta will be about 480 mV and the model we’re re­ly­ing on will give a more pre­cise re­sult.

If you’re in­ter­ested in a gen­eral for­mula to find the cir­cuit’s op­er­at­ing fre­quency, it helps to as­sume that R1 and R2 are the same. If so, we can re­place them with a new com­pos­ite re­sis­tor with half the re­sis­tance and solve the stan­dard volt­age di­vider equa­tion to find out what would hap­pen if the feed­back sig­nal moves from 0 V to Vsupply:
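
The resulting expression isn't reproduced above, but it can be reconstructed: with R1 = R2, the pair behaves like a single resistor of R1/2 tied to ½ · Vsupply, so a full Vout swing of Vsupply moves Vin+ by:

Δv = Vsupply · (R1/2) / (R1/2 + R3) = Vsupply · R1 / (R1 + 2 · R3)

Plugging in R1 = R3 = 10 kΩ gives the 1.67 V used earlier, and R3 = 47 kΩ gives the roughly 480 mV mentioned above, so the reconstruction matches the quoted numbers.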

With two iden­ti­cal re­sis­tors, the ca­pac­i­tor wave­form is cen­tered around ½ Vsupply, so the for­mula for the av­er­age cur­rent is also pretty sim­ple (and does­n’t change be­tween the charge and dis­charge pe­ri­ods):
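
In the same spirit, with the capacitor hovering near ½ · Vsupply and Vout sitting at either rail, the voltage across R is always about ½ · Vsupply, so:

I ≈ (½ · Vsupply) / R = Vsupply / (2 · R)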

This gives us all we need to solve for fre­quency us­ing the ca­pac­i­tor equa­tion, rewrit­ten as t = Δv · C/I:
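
Substituting the two reconstructed expressions from above:

t = Δv · C / I = (Vsupply · R1 / (R1 + 2 · R3)) · C · 2 · R / Vsupply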

This fur­ther sim­pli­fies to:
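
Carrying out the algebra (again, my reconstruction), the half-period and frequency come out as:

t = 2 · R · C · R1 / (R1 + 2 · R3)
f = 1 / (2 · t) = (R1 + 2 · R3) / (4 · R · C · R1)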

…and in the spe­cific case of R1 = R2 = 10 kΩ plus R3 = 47 kΩ, we get:
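
Assuming the timing components stay at R = 10 kΩ and C = 1 µF from the earlier build (the text above says to keep everything else the same), the reconstructed formula gives:

f ≈ (10 kΩ + 94 kΩ) / (4 · 10 kΩ · 1 µF · 10 kΩ) ≈ 260 Hz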

The method outlined earlier on is not the only conceptual approach to building oscillators. Another way is to produce resonance. We can do this by taking a standard op-amp voltage follower, which uses negative feedback to control the output, and then messing with the feedback loop in a particular way.

In the ba­sic volt­age fol­lower con­fig­u­ra­tion, the op-amp reaches a sta­ble equi­lib­rium when Vin+ ≈ Vin- ≈ Vout. Again, the cir­cuit works only be­cause of the neg­a­tive feed­back loop; in its ab­sence, Vin- would di­verge from Vin+ and the out­put volt­age would swing to­ward one of the sup­ply rails.

To turn this cir­cuit into an os­cil­la­tor, we can build a feed­back loop that nor­mally pro­vides neg­a­tive feed­back, but that in­verts the wave­form at a par­tic­u­lar sine-wave fre­quency. This turns neg­a­tive feed­back into pos­i­tive feed­back; in­stead of sta­bi­liz­ing the out­put volt­age, it pro­duces in­creas­ing swings, but only at the fre­quency at which the in­ver­sion takes place.

Such a selective waveform inversion sounds complicated, but we can achieve it with a familiar building block: an R-C lowpass filter. The mechanics of these filters are discussed in this article; in a nutshell, the arrangement produces a frequency-dependent phase shift of 0° (at DC) to -90° (as the frequency approaches infinity). If we cascade a couple of these R-C stages, we can achieve a -180° phase shift at some chosen frequency, which is the same as flipping the waveform.

A min­i­mal­is­tic but well-be­haved op-amp so­lu­tion is shown be­low:

In this par­tic­u­lar cir­cuit, an over­all -180° shift hap­pens when each of the R-C stages adds its own -60°. It’s easy to find the fre­quency at which this oc­curs. In the afore­men­tioned ar­ti­cle on sig­nal fil­ter­ing, we came up with the fol­low­ing for­mula de­scrib­ing the shift as­so­ci­ated with the fil­ter:
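
That formula is the standard phase response of a single R-C lowpass stage:

θ = -arctan(2 · π · f · R · C)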

Arctangent is the in­verse of the tan­gent func­tion. In a right tri­an­gle, the tan­gent func­tion de­scribes the ra­tio of lengths of the op­po­site to the ad­ja­cent for a par­tic­u­lar an­gle; the arc­t­an­gent goes the other way round, giv­ing us an an­gle for a par­tic­u­lar ra­tio. In other words, if x = tan(α) then α = arc­tan(x). This al­lows us to rewrite the equa­tion as:
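
Solving for f:

f = -tan(θ) / (2 · π · R · C)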

We’re try­ing to solve for f at which θ = -60°; the value of -tan(-60°) is roughly 1.73, so we can plug that into the equa­tion and then move every­thing ex­cept f to the right. Throwing in the com­po­nent val­ues for the first R-C stage in the schematic, we ob­tain:
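
The specific R and C values come from the schematic, which isn't reproduced here, so only the general form can be written down; with θ = -60° it becomes:

f ≈ 1.73 / (2 · π · R · C)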

You’ll no­tice that the re­sult is the same for the other two stages: they have higher re­sis­tances but pro­por­tion­ally lower ca­pac­i­tances, so the de­nom­i­na­tor of the frac­tion does­n’t change.

Oscilloscope traces for the cir­cuit are shown be­low:

Because the am­pli­fier’s gain is­n’t con­strained in any way, the out­put wave­form is a square wave. Nevertheless, in a low­pass cir­cuit with these char­ac­ter­is­tics, the re­sult­ing wave­forms are close enough to si­nu­soids that the sine-wave model ap­prox­i­mates the be­hav­ior nearly per­fectly. We can run a dis­crete-time sim­u­la­tion to show that the sine-wave be­hav­ior of these three R-C stages (gray) aligns pretty well with the square-wave case (blue):

To make the output a sine wave, it's possible to tinker with the feedback loop to lower the circuit's gain, but it's hard to get it right; insufficient gain prevents oscillation while excess gain produces distortion. A simpler trick is to tap into the signal on the non-inverting leg (bottom oscilloscope trace) and use the other part of a dual op-amp IC to amplify this signal to your heart's desire.

Some read­ers might be won­der­ing why I de­signed the stages so that each of them has an im­ped­ance ten times larger than the stage be­fore it. This is to pre­vent the fil­ters from ap­pre­cia­bly load­ing each other. If all the im­ped­ances were in the same ball­park, the mid­dle fil­ter could source cur­rents from the left as eas­ily as it could from the right. In that sit­u­a­tion, find­ing the point of -180° phase shift with de­cent ac­cu­racy would re­quire cal­cu­lat­ing the trans­fer func­tion for the en­tire six-com­po­nent Franken-filter; the task is doable but — to use a math­e­mat­i­cal term — rather un­pleas­ant.

Footnote: in the lit­er­a­ture, the cir­cuit is more of­ten con­structed us­ing high­pass stages and a dis­crete tran­sis­tor as an am­pli­fier. I’d wa­ger that most au­thors who pre­sent the dis­crete-tran­sis­tor so­lu­tion have not ac­tu­ally tried it in prac­tice; oth­er­wise, they would have found it to be quite finicky. The ver­sion pre­sented in this ar­ti­cle is dis­cussed here.

If you en­joyed the con­tent, please sub­scribe. I’m not sell­ing any­thing; it’s just a good way to stay in touch with the writ­ers you like.

...

Read the original on lcamtuf.substack.com »
