10 interesting stories served every morning and every evening.




1 647 shares, 31 trendiness

BirdyChat Becomes Europe’s First WhatsApp-Interoperable Chat App

Today we are ex­cited to share a big mile­stone. BirdyChat is now the first chat app in Europe that can ex­change mes­sages with WhatsApp un­der the Digital Markets Act. This brings us closer to our mis­sion of giv­ing work con­ver­sa­tions a proper home.

WhatsApp is cur­rently rolling out in­ter­op­er­abil­ity sup­port across Europe. As this roll­out con­tin­ues, the fea­ture will be­come fully avail­able to both BirdyChat and WhatsApp users in the com­ing months.

...

Read the original on www.birdy.chat »

2 597 shares, 19 trendiness

Minnesota reels after second fatal shooting by federal agents

The Department of Homeland Security said the man was armed with a gun and two mag­a­zines of am­mu­ni­tion and cir­cu­lated a photo of the weapon. DHS said a Border Patrol agent fired in self-de­fense. The Minnesota Bureau of Criminal Apprehension (BCA), the state’s chief in­ves­tiga­tive agency, said it was not al­lowed ac­cess to the scene.

...

Read the original on www.startribune.com »

3 446 shares, 35 trendiness

Adoption of electric vehicles tied to real-world reductions in air pollution, study finds

When California neigh­bor­hoods in­creased their num­ber of zero-emis­sions ve­hi­cles (ZEV) be­tween 2019 and 2023, they also ex­pe­ri­enced a re­duc­tion in air pol­lu­tion. For every 200 ve­hi­cles added, ni­tro­gen diox­ide (NO₂) lev­els dropped 1.1%. The re­sults, ob­tained from a new analy­sis based on statewide satel­lite data, are among the first to con­firm the en­vi­ron­men­tal health ben­e­fits of ZEVs, which in­clude fully elec­tric and plug-in hy­brid cars, in the real world. The study was funded in part by the National Institutes of Health and just pub­lished in The Lancet Planetary Health.

While the shift to elec­tric ve­hi­cles is largely aimed at curb­ing cli­mate change in the fu­ture, it is also ex­pected to im­prove air qual­ity and ben­e­fit pub­lic health in the near term. But few stud­ies have tested that as­sump­tion with ac­tual data, partly be­cause ground-level air pol­lu­tion mon­i­tors have lim­ited spa­tial cov­er­age. A 2023 study from the Keck School of Medicine of USC us­ing these ground-level mon­i­tors sug­gested that ZEV adop­tion was linked to lower air pol­lu­tion, but the re­sults were not de­fin­i­tive.

Now, the same re­search team has con­firmed the link with high-res­o­lu­tion satel­lite data, which can de­tect NO₂ in the at­mos­phere by mea­sur­ing how the gas ab­sorbs and re­flects sun­light. The pol­lu­tant, re­leased from burn­ing fos­sil fu­els, can trig­ger asthma at­tacks, cause bron­chi­tis, and in­crease the risk of heart dis­ease and stroke.

“This immediate impact on air pollution is really important because it also has an immediate impact on health. We know that traffic-related air pollution can harm respiratory and cardiovascular health over both the short and long term,” said Erika Garcia, PhD, MPH, assistant professor of population and public health sciences at the Keck School of Medicine and the study’s senior author.

The find­ings of­fer sup­port for the con­tin­ued adop­tion of elec­tric ve­hi­cles. Over the study pe­riod, ZEV reg­is­tra­tions in­creased from 2% to 5% of all light-duty ve­hi­cles (a cat­e­gory that in­cludes cars, SUVs, pickup trucks and vans) across California, sug­gest­ing that the po­ten­tial for im­prov­ing air pol­lu­tion and pub­lic health re­mains largely un­tapped.

We’re not even fully there in terms of elec­tri­fy­ing, but our re­search shows that California’s tran­si­tion to elec­tric ve­hi­cles is al­ready mak­ing mea­sur­able dif­fer­ences in the air we breathe,” said the study’s lead au­thor, San­drah Eckel, PhD, as­so­ci­ate pro­fes­sor of pop­u­la­tion and pub­lic health sci­ences at the Keck School of Medicine.

For the analy­sis, the re­searchers di­vided California into 1,692 neigh­bor­hoods, us­ing a ge­o­graphic unit sim­i­lar to zip codes. They ob­tained pub­licly avail­able data from the state’s Department of Motor Vehicles on the num­ber of ZEVs reg­is­tered in each neigh­bor­hood. ZEVs in­clude full-bat­tery elec­tric cars, plug-in hy­brids and fuel-cell cars, but not heav­ier duty ve­hi­cles like de­liv­ery trucks and semi trucks.

Next, the re­search team ob­tained data from the Tropospheric Monitoring Instrument (TROPOMI), a high-res­o­lu­tion satel­lite sen­sor that pro­vides daily, global mea­sure­ments of NO₂ and other pol­lu­tants. They used this data to cal­cu­late an­nual av­er­age NO₂ lev­els in each California neigh­bor­hood from 2019 to 2023.

Over the study pe­riod, a typ­i­cal neigh­bor­hood gained 272 ZEVs, with most neigh­bor­hoods adding be­tween 18 and 839. For every 200 new ZEVs reg­is­tered, NO₂ lev­els dropped 1.1%, a mea­sur­able im­prove­ment in air qual­ity.
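As a rough back-of-the-envelope illustration (assuming the reported 1.1%-per-200-ZEV effect scales roughly linearly, which the summary above does not state), the typical neighborhood’s gain of 272 ZEVs would correspond to an NO₂ reduction of about 1.5%:

```typescript
// Illustration only: assumes the 1.1% drop per 200 ZEVs scales linearly with ZEV count.
const dropPer200 = 0.011;   // 1.1% NO₂ reduction per 200 added ZEVs
const zevsAdded = 272;      // typical neighborhood gain over the study period

const estimatedDrop = dropPer200 * (zevsAdded / 200);
console.log(`Estimated NO₂ reduction: ${(estimatedDrop * 100).toFixed(2)}%`); // ≈ 1.50%
```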

“These findings show that cleaner air isn’t just a theory—it’s already happening in communities across California,” Eckel said.

To confirm that these results were reliable, the researchers conducted several additional analyses. They accounted for pandemic-related contributors to NO₂ decline, for example by excluding the year 2020 and by controlling for changing gas prices and work-from-home patterns. The researchers also confirmed that neighborhoods that added more gas-powered cars saw the expected rise in pollution. Finally, they replicated their results using updated data from ground-level monitors spanning 2012 to 2023.

“We tested our analysis in many different ways, and the results consistently support our main finding,” Garcia said.

These re­sults show that TROPOMI satel­lite data—which cov­ers nearly the en­tire planet—can re­li­ably track changes in com­bus­tion-re­lated air pol­lu­tion, of­fer­ing a new way to study the ef­fects of the tran­si­tion to elec­tric ve­hi­cles and other en­vi­ron­men­tal in­ter­ven­tions.

Next, Garcia, Eckel and their team are com­par­ing data on ZEV adop­tion with data on asthma-re­lated emer­gency room vis­its and hos­pi­tal­iza­tions across California. The study could be one of the first to doc­u­ment real-world health im­prove­ments as California con­tin­ues to em­brace elec­tric ve­hi­cles.

In ad­di­tion to Garcia and Eckel, the study’s other au­thors are Futu Chen, Sam J. Silva and Jill Johnston from the Department of Population and Public Health Sciences, Keck School of Medicine of USC, University of Southern California; Daniel L. Goldberg from the Milken Institute School of Public Health, The George Washington University; Lawrence A. Palinkas from the Herbert Wertheim School of Public Health and Human Longevity Science, University of California, San Diego; and Alberto Campos and Wilma Franco from the Southeast Los Angeles Collaborative.

This work was sup­ported by the National Institutes of Health/National Institute of Environmental Health Sciences [R01ES035137, P30ES007048]; the National Aeronautics and Space Administration Health and Air Quality Applied Sciences Team [80NSSC21K0511]; and the National Aeronautics and Space Administration Atmospheric Composition Modeling and Analysis Program [80NSSC23K1002].

...

Read the original on keck.usc.edu »

4 338 shares, 66 trendiness

Deutsche Telekom is throttling the internet!

Video: Deutsche Telekom is throttling the internet. Let’s do something about it!

Epicenter.works, the Society for Civil Rights, the Federation of German Consumer Organizations, and Stanford Professor Barbara van Schewick are fil­ing an of­fi­cial com­plaint with the Federal Network Agency against Deutsche Telekom’s un­fair busi­ness prac­tices.

Deutsche Telekom is cre­at­ing ar­ti­fi­cial bot­tle­necks at ac­cess points to its net­work. Financially strong ser­vices that pay Telekom get through quickly and work per­fectly. Services that can­not af­ford this are slowed down and of­ten load slowly or not at all.

This means Telekom de­cides which ser­vices we can use with­out is­sues, vi­o­lat­ing net neu­tral­ity. We are fil­ing a com­plaint with the Federal Network Agency to stop this un­fair prac­tice to­gether!

...

Read the original on netzbremse.de »

5 294 shares, 40 trendiness

Google confirms 'high-friction' sideloading flow is coming to Android

Google has responded to our recent report on new Google Play strings hinting at changes to how Android will handle sideloaded apps in the future. The company has now confirmed that a “high-friction” install process is on the way.

Replying to our story on X, Matthew Forsyth, Director of Product Management, Google Play Developer Experience & Chief Product Explainer, said the system isn’t a sideloading restriction, but “an Accountability Layer.” Advanced users will still be able to choose “Install without verifying,” though Google says that path will involve extra steps meant to ensure users understand the risks of installing apps from unverified developers.

That ex­pla­na­tion broadly matches what we’re see­ing in re­cent ver­sions of Google Play, where new warn­ing mes­sages em­pha­size de­vel­oper ver­i­fi­ca­tion, in­ter­net re­quire­ments, and po­ten­tial risks, while still al­low­ing users to pro­ceed.

What remains to be seen is how far Google takes this “high-friction” approach. Clear warnings are one thing, but quietly making sideloading more painful is another. Android’s openness has always depended on power users being able to install apps without excessive hoops.

For now, Google has­n’t sug­gested re­quire­ments like us­ing a PC or ex­ter­nal tools, and we hope the added fric­tion is lim­ited to risk ed­u­ca­tion.

...

Read the original on www.androidauthority.com »

6 271 shares, 9 trendiness

I added a Bluesky comment section to my blog

You can now view replies to this blog post made on Bluesky di­rectly on this web­site. Check it out here!

I’ve al­ways wanted to host a com­ment sec­tion on my site, but it’s dif­fi­cult be­cause the con­tent is sta­t­i­cally gen­er­ated and hosted on a CDN. I could host com­ments on a sep­a­rate VPS or cloud ser­vice. But main­tain­ing a dy­namic web ser­vice like this can be ex­pen­sive and time-con­sum­ing — in gen­eral, I’m not in­ter­ested in be­ing an un­paid, part-time DevOps en­gi­neer.

Recently, however, I read a blog post by Cory Zue about how he embedded a comment section from Bluesky on his blog. I immediately understood the benefits of this approach: Bluesky handles all of the difficult work involved in running a social platform, like account verification, hosting, storage, spam, and moderation. Meanwhile, because Bluesky is an open platform with a public API, it’s easy to embed comments directly on my own site.

There are other services that could be used for this purpose instead. Notably, I could embed replies from the social media site formerly known as Twitter. Or I could use a platform like Disqus or even giscus, which hosts comments on GitHub Discussions. But I see Bluesky as a clearly superior choice among these options. For one, Bluesky is built on top of an open social media protocol, AT Proto, meaning it can’t easily be taken over by an authoritarian billionaire creep. Moreover, Bluesky is a full-fledged social media platform, which naturally makes it a better option for hosting a conversation than GitHub.

Zue pub­lished a stand­alone pack­age called bluesky-com­ments that al­lows em­bed­ding com­ments in a React com­po­nent as he did. But I de­cided to build this fea­ture my­self in­stead. Mainly this is be­cause I wanted to make a few styling changes any­way to match the rest of my site. But I also wanted to leave the op­tion open to adding more fea­tures in the fu­ture, which would be eas­ier to do if I wrote the code my­self. The en­tire im­ple­men­ta­tion is small re­gard­less, amount­ing to only ~200 LOC be­tween the UI com­po­nents and API func­tions.

Initially, I planned to allow people to post directly on Bluesky via my site. This would work by providing an OAuth flow that gives my site permission to post on Bluesky on behalf of the user. I actually did get the auth flow working, but building out a UI for posting and replying to existing comments is difficult to do well. Going down this path quickly leads to building what is essentially a custom Bluesky client, which I didn’t have the time or interest in doing right now. Moreover, because the user needs to go through the auth flow and sign in to their Bluesky account, the process is not really much easier than posting directly on a linked Bluesky post.

Without the re­quire­ment of al­low­ing oth­ers to di­rectly post on my site, the im­ple­men­ta­tion be­came much sim­pler. Essentially, my task was to spec­ify a Bluesky post that cor­re­sponds to the ar­ti­cle in the site’s meta­data. Then, when the page loads I fetch the replies to that post from Bluesky, parse the re­sponse, and dis­play the re­sults in a sim­ple com­ment sec­tion UI.

As ex­plained in my last post, this site is built us­ing React Server Components and Parcel. The con­tent of my ar­ti­cles are writ­ten us­ing MDX, an ex­ten­sion to Markdown that al­lows di­rectly em­bed­ding JavaScript and JSX. In each post, I ex­port a meta­data ob­ject that I val­i­date us­ing a Zod schema. For in­stance, the meta­data for this post looks like this:
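The code itself isn’t reproduced in this excerpt, but based on the description, a minimal sketch of such a metadata export validated with Zod might look like the following (every field name here except bskyPostId is an illustrative assumption, not the post’s actual schema):

```typescript
// Illustrative sketch only: the original post's schema is not shown in this excerpt.
import { z } from "zod";

export const postMetadataSchema = z.object({
  title: z.string(),
  date: z.string(),        // e.g. an ISO date string
  bskyPostId: z.string(),  // the Bluesky post whose replies become the comment section
});

// Exported from the article's MDX file (values are placeholders)
export const metadata = postMetadataSchema.parse({
  title: "I added a Bluesky comment section to my blog",
  date: "2025-01-01",
  bskyPostId: "3kexamplepostkey",
});
```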

The value of bskyPostId references the Bluesky post from which I’ll pull replies to display in the comment section. Because my project is built in TypeScript, it was easy to integrate with the Bluesky TypeScript SDK (@atproto/api on NPM). Reading the Bluesky API documentation and Zue’s implementation led me to the getPostThread endpoint. Given an AT Protocol URI, this endpoint returns an object with data on the given post and its replies.
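As a sketch of what that lookup can look like: the public Bluesky AppView exposes app.bsky.feed.getPostThread over plain HTTP, so the thread can be fetched without authentication. The handle and post ID below are placeholders, and using the public endpoint directly (rather than the SDK) is an assumption for the sake of a self-contained example:

```typescript
// Sketch assuming the public, unauthenticated Bluesky AppView endpoint.
// The handle and post record key passed in are placeholders.
async function fetchPostThread(handle: string, bskyPostId: string) {
  // An AT Protocol URI pointing at the post record
  const atUri = `at://${handle}/app.bsky.feed.post/${bskyPostId}`;
  const url =
    "https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread?uri=" +
    encodeURIComponent(atUri);

  const res = await fetch(url);
  if (!res.ok) throw new Error(`getPostThread failed: ${res.status}`);

  // The response contains the post itself plus a nested tree of replies
  const { thread } = await res.json();
  return thread;
}
```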

I could have interacted directly with the Bluesky API from my React component using fetch and useEffect. However, it can be a bit tricky to correctly handle loading and error states, even for a simple feature like this. Because of this, I decided to use the TanStack react-query package to manage the API request/response cycle. This library takes care of the messy work of handling errors, retries, and loading states while I simply provide it a function to fetch the post data.
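A minimal sketch of that wiring with TanStack Query (v5-style API) might look like this; the component shape and the fetchPostThread helper from the previous sketch are assumptions, not the post’s actual code:

```tsx
// Sketch only: assumes the fetchPostThread helper from the previous snippet.
import { useQuery } from "@tanstack/react-query";

function BlueskyComments({ handle, bskyPostId }: { handle: string; bskyPostId: string }) {
  const { data: thread, isPending, isError } = useQuery({
    queryKey: ["bsky-thread", handle, bskyPostId],
    queryFn: () => fetchPostThread(handle, bskyPostId),
  });

  if (isPending) return <p>Loading comments...</p>;
  if (isError) return <p>Could not load comments from Bluesky.</p>;

  // Real rendering would walk the reply tree; dumping JSON keeps the sketch self-contained.
  return <pre>{JSON.stringify(thread, null, 2)}</pre>;
}
```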

Once I ob­tain the Bluesky re­sponse, the next task is pars­ing out the con­tent and meta­data for the replies. Bluesky sup­ports a rich con­tent struc­ture in its posts for rep­re­sent­ing markup, ref­er­ences, and at­tach­ments. Building out a UI that fully re­spects this rich con­tent would be dif­fi­cult. Instead, I de­cided to keep it sim­ple by just pulling out the text con­tent from each re­ply.
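Under the same assumptions as the sketches above, extracting just the text (plus a few display fields) from the nested thread can be a short recursive walk; the real response is much richer than what this uses:

```typescript
// Sketch only: keeps just plain text and a few display fields from each reply.
type FlatComment = {
  author: string;
  avatar?: string;
  text: string;
  postedAt: string;
  depth: number; // nesting level, used for indentation in the UI
};

function flattenReplies(thread: any, depth = 0, out: FlatComment[] = []): FlatComment[] {
  for (const reply of thread?.replies ?? []) {
    // Skip placeholders for blocked or deleted posts, which have no post view
    if (!reply?.post) continue;
    out.push({
      author: reply.post.author?.displayName ?? reply.post.author?.handle ?? "unknown",
      avatar: reply.post.author?.avatar,
      text: reply.post.record?.text ?? "",
      postedAt: reply.post.indexedAt,
      depth,
    });
    flattenReplies(reply, depth + 1, out); // recurse into nested replies
  }
  return out;
}
```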

Even so, build­ing a UI that prop­erly dis­plays threaded com­ments, par­tic­u­larly one that is for­mat­ted well on small mo­bile de­vices, can be tricky. For now, my ap­proach was to again keep it sim­ple. I in­dented each re­ply and added a left bor­der to make it eas­ier to fol­low re­ply threads. Otherwise, I mostly copied de­sign el­e­ments for lay­out of the pro­file pic­ture and post date from Bluesky.

Lastly, I added a UI com­po­nent link­ing to the par­ent post on Bluesky, and en­cour­ag­ing peo­ple to add to the con­ver­sa­tion there. With this, the read-only com­ment sec­tion im­ple­men­ta­tion was com­plete. If there’s in­ter­est, I could pub­lish my ver­sion of Bluesky com­ments as a stand­alone pack­age. But sev­eral of the choices I made were rel­a­tively spe­cific to my own site. Moreover, the im­ple­men­ta­tion is sim­ple enough that oth­ers could prob­a­bly build their own ver­sion from read­ing the source code, just as I did us­ing Zue’s ver­sion.

Let me know what you think by re­ply­ing on Bluesky. Hopefully this can help in­crease en­gage­ment with my blog posts, but then again, my last ar­ti­cle gen­er­ated no replies, so maybe not 😭.

Join the con­ver­sa­tion by re­ply­ing on Bluesky…

...

Read the original on micahcantor.com »

7 257 shares, 14 trendiness

Europe wants to end its dangerous reliance on US internet technology

Imagine the in­ter­net sud­denly stops work­ing. Payment sys­tems in your lo­cal food store go down. Healthcare sys­tems in the re­gional hos­pi­tal flat­line. Your work soft­ware tools, and all the in­for­ma­tion they con­tain, dis­ap­pear.

You reach out for in­for­ma­tion but strug­gle to com­mu­ni­cate with fam­ily and friends, or to get the lat­est up­dates on what is hap­pen­ing, as so­cial me­dia plat­forms are all down. Just as some­one can pull the plug on your com­puter, it’s pos­si­ble to shut down the sys­tem it con­nects to.

This is­n’t an out­landish sce­nario. Technical fail­ures, cy­ber-at­tacks and nat­ural dis­as­ters can all bring down key parts of the in­ter­net. And as the US gov­ern­ment makes in­creas­ing de­mands of European lead­ers, it is pos­si­ble to imag­ine Europe los­ing ac­cess to the dig­i­tal in­fra­struc­ture pro­vided by US firms as part of the geopo­lit­i­cal bar­gain­ing process.

At the World Economic Forum in Davos, Switzerland, the EU’s president, Ursula von der Leyen, has highlighted the “structural imperative” for Europe to build a “new form of independence” — including in its technological capacity and security. And, in fact, moves are already being made across the continent to start regaining some independence from US technology.

A small number of US-headquartered big tech companies now control a large proportion of the world’s cloud computing infrastructure: the global network of remote servers that store, manage and process all our apps and data. Amazon Web Services (AWS), Microsoft Azure and Google Cloud are reported to hold about 70% of the European market, while European cloud providers have only 15%.

My research supports the idea that relying on a few global providers increases vulnerability for Europe’s private and public sectors — including the risk of cloud computing disruption, whether caused by technical issues, geopolitical disputes or malicious activity.

Two re­cent ex­am­ples — both the re­sult of ap­par­ent tech­ni­cal fail­ures — were the hours‑long AWS in­ci­dent in October 2025, which dis­rupted thou­sands of ser­vices such as bank­ing apps across the world, and the ma­jor Cloudflare in­ci­dent two months later, which took LinkedIn, Zoom and other com­mu­ni­ca­tion plat­forms of­fline.

The im­pact of a ma­jor power dis­rup­tion on cloud com­put­ing ser­vices was also demon­strated when Spain, Portugal and some of south-west France en­dured a mas­sive power cut in April 2025.

There are signs that Europe is start­ing to take the need for greater dig­i­tal in­de­pen­dence more se­ri­ously. In the Swedish coastal city of Helsingborg, for ex­am­ple, a one-year pro­ject is test­ing how var­i­ous pub­lic ser­vices would func­tion in the sce­nario of a dig­i­tal black­out.

Would el­derly peo­ple still re­ceive their med­ical pre­scrip­tions? Can so­cial ser­vices con­tinue to pro­vide care and ben­e­fits to all the city’s res­i­dents?

This pi­o­neer­ing pro­ject seeks to quan­tify the full range of hu­man, tech­ni­cal and le­gal chal­lenges that a col­lapse of tech­ni­cal ser­vices would cre­ate, and to un­der­stand what level of risk is ac­cept­able in each sec­tor. The aim is to build a model of cri­sis pre­pared­ness that can be shared with other mu­nic­i­pal­i­ties and re­gions later this year.

Elsewhere in Europe, other fore­run­ners are tak­ing ac­tion to strengthen their dig­i­tal sov­er­eignty by wean­ing them­selves off re­liance on global big tech com­pa­nies — in part through col­lab­o­ra­tion and adop­tion of open source soft­ware. This tech­nol­ogy is treated as a dig­i­tal pub­lic good that can be moved be­tween dif­fer­ent clouds and op­er­ated un­der sov­er­eign con­di­tions.

In north­ern Germany, the state of Schleswig-Holstein has made per­haps the clear­est break with dig­i­tal de­pen­dency. The state gov­ern­ment has re­placed most of its Microsoft-powered com­puter sys­tems with open-source al­ter­na­tives, can­celling nearly 70% of its li­censes. Its tar­get is to use big tech ser­vices only in ex­cep­tional cases by the end of the decade.

Across France, Germany, the Netherlands and Italy, gov­ern­ments are in­vest­ing both na­tion­ally and transna­tion­ally in the de­vel­op­ment of dig­i­tal open-source plat­forms and tools for chat, video and doc­u­ment man­age­ment — akin to dig­i­tal Lego bricks that ad­min­is­tra­tions can host on their own terms.

In Sweden, a sim­i­lar sys­tem for chat, video and on­line col­lab­o­ra­tion, de­vel­oped by the National Insurance Agency, runs in do­mes­tic data cen­tres rather than for­eign clouds. It is be­ing of­fered as a ser­vice for Swedish pub­lic au­thor­i­ties look­ing for sov­er­eign dig­i­tal al­ter­na­tives.

For Europe — and any na­tion — to mean­ing­fully ad­dress the risks posed by dig­i­tal black­out and cloud col­lapse, dig­i­tal in­fra­struc­ture needs to be treated with the same se­ri­ous­ness as phys­i­cal in­fra­struc­ture such as ports, roads and power grids.

Control, maintenance and crisis preparedness of digital infrastructure should be seen as core public responsibilities, rather than something to be outsourced to global big tech firms that are open to foreign influence.

To en­cour­age greater fo­cus on dig­i­tal re­silience among its mem­ber states, the EU has de­vel­oped a cloud sov­er­eignty frame­work to guide pro­cure­ment of cloud ser­vices — with the in­ten­tion of keep­ing European data un­der European con­trol. The up­com­ing Cloud and AI Development Act is ex­pected to bring more fo­cus and re­sources to this area.

Governments and pri­vate com­pa­nies should be en­cour­aged to de­mand se­cu­rity, open­ness and in­ter­op­er­abil­ity when seek­ing bids for pro­vi­sion of their cloud ser­vices — not merely low prices. But in the same way, as in­di­vid­u­als, we can all make a dif­fer­ence with the choices we make.

Just as it’s ad­vis­able to en­sure your own ac­cess to food, wa­ter and med­i­cine in a time of cri­sis, be mind­ful of what ser­vices you use per­son­ally and pro­fes­sion­ally. Consider where your emails, per­sonal pho­tos and con­ver­sa­tions are stored. Who can ac­cess and use your data, and un­der what con­di­tions? How eas­ily can every­thing be backed up, re­trieved and trans­ferred to an­other ser­vice?

No coun­try, let alone con­ti­nent, will ever be com­pletely dig­i­tally in­de­pen­dent, and nor should they be. But by pulling to­gether, Europe can en­sure its dig­i­tal sys­tems re­main ac­ces­si­ble even in a cri­sis — just as is ex­pected from its phys­i­cal in­fra­struc­ture.

...

Read the original on theconversation.com »

8 250 shares, 14 trendiness

Wiper malware targeted Poland energy grid, but failed to knock out electricity

Researchers on Friday said that Poland’s electric grid was targeted by wiper malware, likely unleashed by Russian state hackers, in an attempt to disrupt electricity delivery operations.

A cy­ber­at­tack, Reuters re­ported, oc­curred dur­ing the last week of December. The news or­ga­ni­za­tion said it was aimed at dis­rupt­ing com­mu­ni­ca­tions be­tween re­new­able in­stal­la­tions and the power dis­tri­b­u­tion op­er­a­tors but failed for rea­sons not ex­plained.

On Friday, se­cu­rity firm ESET said the mal­ware re­spon­si­ble was a wiper, a type of mal­ware that per­ma­nently erases code and data stored on servers with the goal of de­stroy­ing op­er­a­tions com­pletely. After study­ing the tac­tics, tech­niques, and pro­ce­dures (TTPs) used in the at­tack, com­pany re­searchers said the wiper was likely the work of a Russian gov­ern­ment hacker group tracked un­der the name Sandworm.

“Based on our analysis of the malware and associated TTPs, we attribute the attack to the Russia-aligned Sandworm APT with medium confidence due to a strong overlap with numerous previous Sandworm wiper activity we analyzed,” said ESET researchers. “We’re not aware of any successful disruption occurring as a result of this attack.”

Sandworm has a long his­tory of de­struc­tive at­tacks waged on be­half of the Kremlin and aimed at ad­ver­saries. Most no­table was one in Ukraine in December 2015. It left roughly 230,000 peo­ple with­out elec­tric­ity for about six hours dur­ing one of the cold­est months of the year. The hack­ers used gen­eral pur­pose mal­ware known as BlackEnergy to pen­e­trate power com­pa­nies’ su­per­vi­sory con­trol and data ac­qui­si­tion sys­tems and, from there, ac­ti­vate le­git­i­mate func­tion­al­ity to stop elec­tric­ity dis­tri­b­u­tion. The in­ci­dent was the first known mal­ware-fa­cil­i­tated black­out.

...

Read the original on arstechnica.com »

9 191 shares, 9 trendiness

Raspberry Pi Drag Race: Pi 1 to Pi 5 – Performance Comparison

Today we’re go­ing to be tak­ing a look at what al­most 13 years of de­vel­op­ment has done for the Raspberry Pi. I have one of each gen­er­a­tion of Pi from the orig­i­nal Pi that was launched in 2012 through to the Pi 5 which was re­leased just over a year ago.

We’ll take a look at what has changed between each generation and how their performance and power consumption have improved by running some tests on them.

Here’s my video of the testing process and results; read on for the write-up.

Some of the above parts are af­fil­i­ate links. By pur­chas­ing prod­ucts through the above links, you’ll be sup­port­ing this chan­nel, at no ad­di­tional cost to you.

This is the orig­i­nal Raspberry Pi, which was launched in February 2012.

This Pi has a Broadcom BCM2835 SOC which fea­tures a sin­gle ARM1176JZF-S core run­ning at 700MHz along with a VideoCore IV GPU. It has 512 MB of DDR RAM.

In terms of con­nec­tiv­ity, it only has 100Mb net­work­ing and 2 x USB 2.0 ports. Video out­put is 1080P through a full-size HDMI port or ana­logue video out through a com­pos­ite video con­nec­tor and au­dio out­put is pro­vided through a 3.5mm au­dio jack. It does­n’t have any WiFi or Bluetooth con­nec­tiv­ity but it does have some of the fea­tures that we still have on more re­cent mod­els like DSI and CSI ports, a full size SD card reader for the op­er­at­ing sys­tem and GPIO pins, al­though only 26 of them at this stage.

Power is sup­plied through a mi­cro USB port and it is rated for 5V and 700mA.

It was priced at $35 — which at the time was in­cred­i­bly cheap for what was es­sen­tially a palm-sized com­puter.

The Raspberry Pi 2 was launched 3 years later, in February 2015, and this Pi looked quite different to the original and similar to the Pis we know today.

The Pi 2 has a sig­nif­i­cantly bet­ter proces­sor than the orig­i­nal. The Broadcom BCM2836 SOC has 4 Cortex-A7 cores run­ning at 900 MHz and it re­tained the same VideoCore IV GPU. RAM was also bumped up to 1GB.

It added an­other 2 x USB 2.0 ports along­side the 100Mb Ethernet port. The com­pos­ite video port dis­ap­peared and the ana­logue video out­put was moved into the au­dio jack.

The GPIO pins were in­creased to 40 pins which has fol­lowed the same pin lay­out since — which has re­ally helped in main­tain­ing com­pat­i­bil­ity with hats and ac­ces­sories. The SD card reader was also changed to a mi­croSD card reader.

The power cir­cuitry was bumped up to 800mA to ac­com­mo­date the more pow­er­ful CPU.

The Raspberry Pi 3 was launched just a year later, in February 2016.

The Pi 3’s new Broadcom BCM2837 SOC retained the same 4-core architecture, but these were changed to 64-bit Cortex-A53 cores running at 1.2GHz.

RAM was kept at 1GB but was now DDR2.

There was no change to the USB or Ethernet con­nec­tiv­ity on the orig­i­nal Pi 3 but we did see WiFi and Bluetooth added for the first time. WiFi was sin­gle band 2.4GHz and we had Bluetooth 4.1.

The ver­sion that I have is ac­tu­ally the 3B+, which was launched a lit­tle later. The main im­prove­ments over the orig­i­nal Pi 3 were a 0.2GHz boost to the clock speed and the up­grade to Gigabit net­work­ing with PoE (Power over Ethernet) sup­port and dual-band WiFi.

The power cir­cuitry was again im­proved, still run­ning at 5V but now up to 1.34A, which was al­most dou­ble the Pi 2.

Next came the Pi 4 in June 2019. This Pi came at one of the worst times for global man­u­fac­tur­ing and was no­to­ri­ously dif­fi­cult to get hold of due to the im­pact of COVID on the global sup­ply chain. Quite iron­i­cally, this hard-to-get Pi is the one that I’ve got the most of, mainly due to my wa­ter-cooled Pi clus­ter build.

The Pi 4 has a Broadcom BCM2711 SOC with 4 Cortex-A72 cores run­ning at 1.5GHz. So again a slight clock speed in­crease over the Pi 3 but still re­tain­ing 4 cores. It also in­cludes a bump up to a VideoCore VI GPU.

This was the first model to feature different RAM configurations. It was originally available in 1, 2 and 4GB variants featuring LPDDR4 RAM, and in March 2020 an 8GB variant was added to the lineup as well. This obviously resulted in a few different price points, but impressively they still managed to keep a $35 offering 7 years after the launch of the first Pi.

It retained the same form factor as the Pi 3 but with the USB and Ethernet ports switched around. Notably, two of the USB ports were upgraded to USB 3.0, networking was now gigabit Ethernet like the 3B+, WiFi was dual-band and it had Bluetooth 5.0.

They also changed the single full-size HDMI port to two micro HDMI ports. Most people I know don’t like this change: it’s annoying to have to use adaptors to work with common displays, and these micro HDMI ports are prone to breaking when they are used often. I think general hobbyists and makers would prefer a single full-size port, but Pis are often used in commercial display applications, so I guess that’s why they went with the dual micro HDMI configuration.

The power cir­cuit was ac­tu­ally re­duced in this model, from 1.34 down to 1.25A and the port was changed to USB C.

Lastly and most re­cently we have the Pi 5 which was launched in October 2023.

This Pi features a Broadcom BCM2712 SOC with 4 Cortex-A76 cores running at a significantly faster 2.4GHz and a VideoCore VII GPU running at 800MHz.

So quite a bump up in CPU and GPU per­for­mance.

It is offered in 3 RAM configurations, but the dropping of a 1GB option means that it’s no longer available at the $35 price point. There is a fairly significant increase in price, up to $50 for the base 2GB variant.

Some other no­table changes are the in­clu­sion of a PCIe port which en­ables IO ex­pan­sion and a much im­proved power cir­cuit. The PCIe port is quite com­monly used to add an NVMe SSD in­stead of a mi­croSD card for the op­er­at­ing sys­tem.

The power cir­cuit was up­graded to han­dle the PCIe port ad­di­tion, now step­ping up to 5V at up to 5A, along with a power but­ton for the first time.

The change in power supply requirements to 5V and 5A is a bit annoying, as most power-delivery-capable supplies cap out at 2.5 or 3A at 5V. It would have been more universal to require a 9V 3A supply to meet the Pi’s power requirements. I assume they steered away from this because the Pi’s circuitry runs at 5V and 3.3V, and they would then have needed to add another onboard DC-DC converter, which increases complexity, size and potentially cost; it would also have made it a bit less efficient. But this does mean that you most likely need to buy a USB C power supply that has been purpose-built for the Pi 5.

The Pi 5 is also the first Pi to have its own ded­i­cated fan socket.

So that’s a sum­mary of the hard­ware changes, now let’s boot them up and take a look at their per­for­mance.

To com­pare the per­for­mance be­tween the Pi’s, I’m go­ing to run the fol­low­ing tests.

* I’m go­ing to at­tempt to play­back a 1080P YouTube video in the browser, al­though I ex­pect we’ll have prob­lems with this up to the Pi 4.

* We’ll then run a Sysbench CPU benchmark, which I’ll do for both single-core and multicore.

* Then test the storage speed using James Chambers’ Pi Benchmarks script.

* Lastly, we’ll look at Power Consumption, both at idle and with the CPU maxed out.

* And then use that data to de­ter­mine each Pi’s Performance per Watt.

To keep things as con­sis­tent as pos­si­ble I’m go­ing to be run­ning the lat­est avail­able ver­sion of Pi OS from Raspberry Pi Imager for each Pi. I was pleas­antly sur­prised to find that you can still flash an OS im­age for the orig­i­nal Pi in their lat­est ver­sion of Imager.

I’ll be test­ing them all run­ning on a 32GB Sandisk Ultra mi­croSD card. I’ll also be us­ing an Ice Tower cooler on each to en­sure they don’t run any­where near ther­mal throt­tling.

I started with the orig­i­nal Pi and its first boot and setup process was a les­son in pa­tience. It took me the best part of two hours to get the first boot com­plete, the Pi up­dated and the test­ing util­i­ties in­stalled but I got there in the end.

Even once set up it takes about 8 min­utes to boot up to the desk­top and the CPU stays pegged at 100% for an­other two to three min­utes be­fore drop­ping down to about 20% at idle.

The orig­i­nal Pi re­fused to open up the browser, so that’s where my YouTube video play­back test ended.

The Pi 2 man­aged to open the browser and ac­tu­ally started play­ing back a 1080P video, which was sur­pris­ing, but play­back was ter­ri­ble. It dropped pretty much all of the frames both in the win­dow and fullscreen.

The Pi 3 played video back no­tice­ably bet­ter than the Pi 2, but it’s still quite a long way away from be­ing us­able and still drops a lot of frames.

The Pi 4 handled 1080P video reasonably well. It had some initial trouble but then settled down. Fullscreen is a bit choppy but still usable.

The Pi 5 han­dled 1080P play­back well with­out any sig­nif­i­cant is­sues both in the win­dow and fullscreen.

Next was the Sysbench CPU bench­mark. I ran three tests on each and av­er­aged the scores and I did this for both sin­gle-core and mul­ti­core.

In sin­gle core, the Pi 1 man­aged a rather dis­mal score of 68, the Pi 2 got a bit more than dou­ble this score but the real step up was with the Pi 3 which man­aged 18 times higher than the Pi 2. The Pi 4 and Pi 5 also of­fered good im­prove­ments on the pre­vi­ous gen­er­a­tions.

Similarly in multicore, the Pi 3 scored over 18 times the score of the Pi 2, and the Pi 4 and 5 provided good improvements on the Pi 3’s score.

Comparing the com­bined mul­ti­core score of the Pi 5 to what the sin­gle core on the Pi 1 can do, the Pi 5 is a lit­tle over 600 times faster.

Next, I tried run­ning a GLMark2 GPU bench­mark on them. I used the GLMark2-es2-wayland ver­sion which is de­signed for OpenGL ES so that the Pi 1 was sup­ported.

I was sur­prised that the Pi 1 was even able to run GLMark2 — it did com­plete the bench­mark, al­though the score was­n’t all that im­pres­sive.

These re­sults re­ally show how the Pi’s GPU has im­proved in the last two gen­er­a­tions. Prior to these tests, I had never seen a score be­low 100 and the Pi 1, 2 and 3 man­aged to fall short of triple dig­its. Pi 5 scored over 2.5 times higher than the Pi 4.

Next was the storage speed test using James Chambers’ Pi Benchmarks script. The bus speed has increased over the years from 25MHz on the Pi 1 to 100MHz on the Pi 5, so I expect we’ll see this reflected in the benchmark scores.

The stor­age speed test’s re­sults aren’t as dra­matic as the CPU and GPU re­sults but show a steady im­prove­ment be­tween gen­er­a­tions. The Pi 3 did a bit worse than the Pi 2 but this small dif­fer­ence is likely just due to vari­abil­ity in the tests.

Next, I ran the iPerf net­work speed test on each.

The Pi 1 doesn’t quite get close to its theoretical 100Mbps, but the Pi 2 does. The Pi 3B+, although it has Gigabit Ethernet, is limited by the port running over USB 2.0, which tops out at around 300Mbps in practice, so it came quite close to that. Both the Pi 4 and 5 expectedly come close to theoretical Gigabit speeds.

Lastly, I tested the power con­sump­tion of each Pi at idle and un­der load.

I used the same Pi 5 power adap­tor to test all of the Pis to keep things con­sis­tent and I just used a USB C to mi­cro USB adap­tor for the Pi 1, 2 and 3.

The idle results were closer than I expected. The Pi 2 had the lowest idle power draw and the Pi 5 the highest, but all were within a watt or two of each other. At full load, you can see the increase in CPU performance drawing more physical power, with the Pi 5 drawing almost three times as much as the Pi 1 and Pi 2.

Converted to performance per watt using the Sysbench results, we can again see how much better the Pi 4 and 5 are over the Pi 1 and 2. There is a clear improvement in the performance that each generation of Pi is able to get per watt of power, which is essentially its efficiency. Although the Pi 5 draws more power than the Pi 1 under full load, you’re getting almost 200 times more performance out of it per watt.
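Performance per watt here is just the Sysbench score divided by the measured full-load power draw. A quick sketch with placeholder wattage figures (not the article’s actual measurements; the Sysbench numbers echo the rough scores mentioned above) shows how the comparison works:

```typescript
// Placeholder numbers for illustration only; see the article's charts for the real data.
const boards = [
  { name: "Pi 1", sysbenchMulti: 68,    loadWatts: 2.0 }, // single core, so multi ≈ single
  { name: "Pi 5", sysbenchMulti: 41000, loadWatts: 6.0 }, // roughly 600x the Pi 1, per the text
];

for (const b of boards) {
  const perfPerWatt = b.sysbenchMulti / b.loadWatts;
  console.log(`${b.name}: ${perfPerWatt.toFixed(1)} events/s per watt`);
}
```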

I really enjoyed working through this project to see how much Pis have changed over the years, particularly in terms of performance. I still remember being amazed at the size and price of the original Pi when it came out, and it’s great that they’re still fully supported and can still be used for projects — albeit less CPU-intensive ones.

Let me know what you think has been the biggest im­prove­ment to the Pi over the years and what you’d still like to see added to fu­ture mod­els in the com­ments sec­tion be­low.

I per­son­ally re­ally like the ad­di­tion of the PCIe port on the Pi 5 and I’d like to see 2.5Gb net­work­ing and a DisplayPort or USB C with DisplayPort added to a fu­ture gen­er­a­tion of Pi.

...

Read the original on the-diy-life.com »

10 189 shares, 9 trendiness

What Worked, and Where We Go Next

On March 14, 2025, Albedo’s first satel­lite, Clarity-1, launched on SpaceX Transporter-13. We took a big swing with our pathfinder. The mis­sion goals:

* Prove sustainable orbit operations in VLEO — an orbital regime long considered too harsh for commercial satellites — by overcoming thick atmospheric drag, dangerous atomic oxygen, and extreme speeds.

* Prove our mid-size, high-performance Precision bus — designed and built in-house in just over two years.

* Capture 10 cm resolution visible imagery and 2-meter thermal infrared imagery, a feat previously achieved only by exquisite, billion-dollar government systems.

We achieved the first two goals de­fin­i­tively and val­i­dated 98% of the tech­nol­ogy re­quired for the third. This was an ex­tra­or­di­nar­ily am­bi­tious first satel­lite. We de­signed and built a high-per­for­mance bus on time and on bud­get, in­te­grated a large-aper­ture tele­scope, and op­er­ated in an en­vi­ron­ment no com­mer­cial com­pany had sus­tained op­er­a­tions in, funded en­tirely by pri­vate cap­i­tal.

This is the full story.

Let’s start with the re­sult that mat­ters most: VLEO works. And it works bet­ter than even we ex­pected.

For decades, Very Low Earth Orbit was writ­ten off as im­prac­ti­cal for nor­mal satel­lite life­times. The at­mos­phere is thicker, cre­at­ing drag that would de­or­bit nor­mal satel­lites in weeks. If the drag did­n’t kill you, atomic oxy­gen would erode your so­lar ar­rays and sur­faces. To suc­ceed in VLEO re­quired a fun­da­men­tally dif­fer­ent satel­lite de­sign.

The drag co­ef­fi­cient was the head­line: 12% bet­ter than our de­sign tar­get. Measured mul­ti­ple times at al­ti­tudes be­tween 350 km - 380 km with a re­peat­able re­sult, this val­i­dates our mod­els pro­duc­ing a satel­lite lifes­pan of five years at 275 km al­ti­tude, av­er­aged across the so­lar cy­cle. This was one of our most crit­i­cal as­sump­tions, and we ex­ceeded it.
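For context, the textbook relations behind such lifetime models (not necessarily Albedo’s exact formulation) are the drag force and the resulting decay rate of a near-circular orbit:

$$F_D = \tfrac{1}{2}\,\rho v^2 C_D A, \qquad \frac{da}{dt} \approx -\sqrt{\mu a}\;\rho\,\frac{C_D A}{m},$$

where ρ is the local atmospheric density, v the orbital velocity, C_D A the drag coefficient times cross-sectional area, m the spacecraft mass, a the semi-major axis, and μ Earth’s gravitational parameter. Because both expressions are linear in C_D, a 12% lower drag coefficient translates roughly into 12% slower decay, or proportionally less propellant to hold a given altitude.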

Atomic oxy­gen (AO) is the silent killer in VLEO. The deeper you go, the more AO you en­counter. It de­grades so­lar ar­rays and other tra­di­tional satel­lite ma­te­ri­als. We de­vel­oped a new class of so­lar ar­rays with unique mea­sures de­signed to mit­i­gate AO degra­da­tion. They work. Even as we de­scended deeper into VLEO and AO flu­ence in­creased log­a­rith­mi­cally, our power gen­er­a­tion stayed con­stant. The so­lar ar­rays are hold­ing up as de­signed.

Clarity-1 demon­strated over 100 km of con­trolled al­ti­tude de­scent, sta­tion­keep­ing in VLEO, and sur­vived a so­lar storm that tem­porar­ily spiked at­mos­pheric den­sity — the im­pact on Clarity’s de­scent rate was barely no­tice­able. Momentum man­age­ment worked. Fault de­tec­tion worked. Our thrust plan­ning model was val­i­dated against GOCE data (a 2009 VLEO R&D mis­sion) with sub-me­ter ac­cu­racy. Radiation tol­er­ance was ex­cel­lent, with 4x fewer sin­gle-event up­sets than ex­pected. Orbit de­ter­mi­na­tion was di­aled.

Developed and built in just over two years, our in-house bus Precision is now TRL-9: flight-proven on-or­bit.

Every bus sub­sys­tem worked. Every piece of in-house tech­nol­ogy we de­vel­oped per­formed: our CMG steer­ing law, our op­er­a­tional modes, flight and ground soft­ware, elec­tron­ics boards, and our novel ther­mal man­age­ment sys­tem.  We hit our em­bed­ded soft­ware GNC tim­ing dead­lines, we con­verged our at­ti­tude and or­bit de­ter­mi­na­tion es­ti­ma­tors, we saw 4π stera­dian com­mand and teleme­try an­tenna cov­er­age, and we got on-or­bit ac­tu­als for our power gen­er­a­tion and loads.

Our cloud-na­tive ground sys­tem was in­cred­i­ble. Contact plan­ning across 25 ground sta­tions was com­pletely au­to­mated. Mission sched­ul­ing up­dated every 15 min­utes to in­cor­po­rate new task­ing and the lat­est satel­lite state in­for­ma­tion, smoothly tran­si­tion­ing to up­dated on-board com­mand loads with vi­sual track­ing of each sched­ule and its sta­tus. Automated thrust plan­ning to achieve our de­sired or­bital tra­jec­tory sup­ported 30+ ma­neu­vers per day. Our en­gi­neers could track and com­mand the satel­lite from any­where with in­ter­net and a se­cure VPN.

We pushed 14 suc­cess­ful flight soft­ware fea­ture up­dates on-or­bit — and even ex­e­cuted one FPGA up­date, which is ex­cep­tion­ally rare. The abil­ity to con­tin­u­ously im­prove through­out Clarity’s op­er­a­tional life proved es­sen­tial — every ma­jor so­lu­tion to chal­lenges we faced in­volved flight soft­ware up­dates. On-orbit soft­ware up­grades are ex­ceed­ingly tricky to get right, but Clarity-1 was de­signed from day one around this foun­da­tional ca­pa­bil­ity.

The first month of the mis­sion was magic.

An hour af­ter launch, we watched Clarity-1 de­ploy from the pre­mium cake­top­per slot into LEO, giv­ing us an in­cred­i­ble view of the Nile River as she sep­a­rated from the rocket.

First con­tact came just three hours later at 5:11am MT. Imagine sit­ting in Mission Control, watch­ing two ground sta­tion passes with no data, then on the third: heaps of green, healthy teleme­try stream­ing into all of the sub­sys­tem dash­boards. Clarity had nailed her au­tonomous boot-up se­quence and rocket sep­a­ra­tion rate cap­ture. Stuck the land­ing.

The next mile­stone — and the one many of us were most anx­ious about — was our au­tonomous Protect Mode, ba­si­cally our VLEO ver­sion of Safe Mode.

We nailed it 14 hours af­ter launch.

By 6:45pm that same day, Clarity was in Operational mode, ready for com­mis­sion­ing.

The days that fol­lowed were a blur of check­boxes turn­ing green. 4-CMG com­mis­sion­ing com­plete. Payload power-on and check­out val­i­dated. Thermal bal­ance for both vis­i­ble and ther­mal sen­sors con­firmed. Our first on-or­bit soft­ware up­date went flaw­lessly.

Clarity uses Control Moment Gyroscopes (CMGs) to steer the satel­lite, giv­ing us more agility than more com­monly used re­ac­tion wheels. We moved onto val­i­dat­ing GNC modes such as GroundTrack, which we use to point at com­mu­ni­ca­tion ground ter­mi­nals.

We moved on to com­mis­sion­ing our X-band ra­dio — the high-rate link to down­link im­agery. After we un­cov­ered an is­sue with our ground sta­tion provider’s point­ing mode, the 800 Mbps link be­gan pump­ing down data on every pass. The wave­forms were clean. Textbook. A di­rect rep­re­sen­ta­tion of how locked in our pre­ci­sion CMG point­ing was.

With our first satel­lite at this level of com­plex­ity, we could­n’t be­lieve how smoothly it had gone. Years of de­vel­op­ing new tech­nolo­gies had been val­i­dated in a frac­tion of the com­mis­sion­ing time we’d an­tic­i­pated.

Next up was ma­neu­ver­ing from our LEO drop-off al­ti­tude down to VLEO, where it would be safe to eject the tele­scope con­t­a­m­i­na­tion cover and start snap­ping pic­tures.

One of our four CMGs ex­pe­ri­enced a tem­per­a­ture spike in the fly­wheel bear­ing. Our Fault Detection, Isolation, and Recovery (FDIR) logic caught it im­me­di­ately, spun it down, and ex­e­cuted au­to­mated re­cov­ery ac­tions. But it would­n’t spin back up. Manual re­cov­ery at­tempts fol­lowed. Also un­suc­cess­ful.

Rushing back into CMG op­er­a­tions with­out un­der­stand­ing the fail­ure mech­a­nism risked killing the mis­sion en­tirely, so we turned off the other three and put the satel­lite in two-axis sta­bi­liza­tion us­ing the mag­netic torque rods.

We had a choice. Hack to­gether novel 3-CMG con­trol al­go­rithms as fast as pos­si­ble and risk los­ing an­other, or fig­ure out how to lever­age only the torque rods to achieve 3-axis con­trol with suf­fi­cient ac­cu­racy to nav­i­gate the ma­neu­ver to VLEO.

We went with the torque rods.

On satel­lites this size (~600 kg), mag­netic torque rods are typ­i­cally used for mo­men­tum dump­ing, not at­ti­tude con­trol. But we’d built Clarity with un­usu­ally beefy torque rods due to the el­e­vated mo­men­tum man­age­ment needs in VLEO. Our GNC team went heads down and de­vel­oped al­go­rithms to achieve 3-axis at­ti­tude con­trol us­ing only torque rods.
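For context, the standard magnetorquer relation (not Albedo’s specific control law) explains why this is hard: a rod with commanded magnetic dipole \(\mathbf{m}\) in the local geomagnetic field \(\mathbf{B}\) produces

$$\boldsymbol{\tau} = \mathbf{m} \times \mathbf{B},$$

so at any instant no torque is available about the axis parallel to \(\mathbf{B}\). Full three-axis authority only emerges because the field direction rotates relative to the spacecraft over the course of an orbit, which is why torque rods are normally reserved for slow momentum dumping rather than precision pointing.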

Within a month, we had it work­ing.

Both of our electric thrusters commissioned quickly and were working well. But with torque rods only, our attitude control had 15 to 20 degrees of error, sometimes reaching ~45 degrees. And maneuvering to VLEO isn’t “point into the wind and fire” — it’s continuous vector and trajectory management across an orbit. That kind of control error meant inefficient burns and a much harder descent plan.

As the de­scent pro­gressed, how­ever, the team learned and it­er­ated. With more it­er­a­tion and flight soft­ware up­dates, we up­loaded on­board logic in­formed by sev­eral sources of live data that di­aled in our thrust vec­tor con­trol to within 5 de­grees of the tar­get. The au­tonomous thrust plan­ning sys­tem we built en­abled us to claw back per­for­mance that nearly matched our orig­i­nally pro­jected de­scent speed.

We ma­neu­vered safely past the ISS and en­tered VLEO. Eager to pop off the con­t­a­m­i­na­tion cover.

Once we reached safe al­ti­tude, it was time to jet­ti­son the con­t­a­m­i­na­tion cover pro­tect­ing our tele­scope.

There are hor­ror sto­ries about con­t­a­m­i­na­tion cov­ers get­ting stuck af­ter months of tem­per­a­ture fluc­tu­a­tions.

Clarity’s was flaw­less. I’ll never for­get see­ing this blip in teleme­try live — con­firm­ing through Newton’s third law that the jet­ti­son was suc­cess­ful. Shortly af­ter, LeoLabs con­firmed track­ing of two sep­a­rate ob­jects.

We were ready to start imag­ing.

Here’s where it got com­pli­cated.

Our GNC and FSW teams were close but not yet fin­ished with the new 3-CMG con­trol law. CMGs are rarely used in com­mer­cial space, let alone by a startup. Then take one more step: sin­gu­lar­ity-prone 3-CMG con­trol that to our knowl­edge has not been at­tempted on a non-ex­quis­ite satel­lite, and cer­tainly not de­vel­oped and up­loaded on-or­bit. Traditional al­go­rithms re­quire at least four CMGs to pro­vide ca­pa­bil­ity vol­umes free of sin­gu­lar­i­ties.

We were ea­ger to make some amount of progress, so we started imag­ing on torque rods even though there would be se­vere lim­i­ta­tions: 50+ pix­els of smear, large mis­point­ing from the wob­ble of torque rod con­trol due to earth’s mag­netic field, and down­link lim­ited to at best two small im­ages per day. The last two con­straints meant we were at risk of spend­ing pre­cious down­link ca­pac­ity on clouds.

Sure enough, the first two days of pix­els were mostly clouds, but we were happy to peek through a lit­tle in this im­age.

Although we could­n’t con­trol at­ti­tude ac­cu­rately, we did still have good at­ti­tude knowl­edge af­ter the fact. AyJay whipped up a clever idea with Claude Code that au­to­mated post­ing weather con­di­tions in Slack for each col­lec­tion. We an­a­lyzed that to de­ter­mine which im­ages were likely clear, and se­lected those for down­link.

We ad­justed the fo­cus po­si­tion a few times, and im­ages con­tin­ued get­ting bet­ter.

Out of the box, the new al­go­rithms and soft­ware per­formed per­fectly.

This vi­su­al­iza­tion shows real teleme­try of Clarity per­form­ing seven back-to-back imag­ing ma­neu­vers, with lim­ited 3-CMG agility, fol­lowed by an X-band down­link over Iceland min­utes later. The satel­lite was ex­e­cut­ing so­phis­ti­cated at­ti­tude pro­files with very low con­trol er­ror. Fiber-optic gyro mea­sure­ments showed ex­quis­ite jit­ter per­for­mance.

In real time, col­lect­ing and down­link­ing those seven im­ages took ten min­utes.

And this is where our ground software really showed its teeth. On most missions, “data on the ground” is just the start — turning raw bits into something viewable is a slow chain of handoffs and batch processing. For us, within seconds of the downlink finishing, the image product pipeline was already posting processed snippets into our company Slack. Literally seconds.

That end-to-end loop — pho­tons in or­bit to a view­able prod­uct on the ground, within min­utes — is a ca­pa­bil­ity that’s still rare in this in­dus­try.

We were ready to ex­e­cute fo­cus cal­i­bra­tion.

Large tele­scope op­tics ex­pe­ri­ence hy­gro­scopic dry­out dur­ing the first few months on-or­bit — mois­ture trapped in ma­te­ri­als dur­ing ground as­sem­bly slowly re­leases in the vac­uum of space, caus­ing the fo­cus po­si­tion to drift. Dialing in best fo­cus re­quires dozens of it­er­a­tions: cap­ture im­ages, an­a­lyze sharp­ness, ad­just fo­cus po­si­tion, re­peat. Each cy­cle gets you closer to the op­ti­cal per­for­mance the sys­tem was de­signed for, and our tele­scope’s on-ground align­ment was ver­i­fied to spec.

After a few it­er­a­tions of this, we could start to see cars.

Even this early into imag­ing, the in­frared im­ages blew us away. Using a low-cost mi­crobolome­ter — a frac­tion of the price of cooled IR sen­sors — we cap­tured ther­mal sig­na­tures that showed ships in Tokyo Bay, steel pro­cess­ing fa­cil­i­ties where we could dis­tin­guish in­di­vid­ual coke ovens from their smoke­stacks, and dis­tinct sig­na­tures be­tween real veg­e­ta­tion and turf — a good proxy for cam­ou­flage de­tec­tion. Day or night, clear as day.

Three days into the ex­cite­ment, CMG prob­lems started again.

A sec­ond CMG be­gan show­ing the same teleme­try sig­na­tures we now rec­og­nized as warn­ing signs.

What we had learned from the in­ves­ti­ga­tion: the al­low­able tem­per­a­ture spec­i­fi­ca­tions of the CMGs were much higher than the true limit, con­strained by what the lu­bri­cant in­side the fly­wheel could han­dle. A straight­for­ward fix for the fu­ture — an un­for­tu­nate cor­ner case to learn about in hind­sight.

The sec­ond CMG show­ing is­sues was also on the hot side of the satel­lite. While we had over­hauled the ve­hi­cle and CMG op­er­a­tions to pre­vent ad­di­tional bear­ing wear, the dam­age had al­ready been done in the first month of the mis­sion.

We spent months try­ing every­thing we could to get the CMGs to op­er­ate sus­tain­ably. The team at­tempted many clever so­lu­tions, one of which re­vived the first CMG that had locked up. We up­loaded a fea­ture to se­lect any 3 of the 4 CMGs for op­er­a­tor com­mand­ing. But we weren’t able to get sus­tained, re­li­able op­er­a­tion.

Despite the CMG chal­lenges, here’s what the imag­ing jour­ney proved.

The full end-to-end im­age chain works. Photons hit our op­tics, get cap­tured by our sen­sor, processed through pay­load elec­tron­ics, pack­e­tized and en­crypted, trans­mit­ted via our X-band ra­dio, re­ceived on the ground, and processed into im­age prod­ucts. The en­tire chain is val­i­dated.

The end-to-end loop is fast. Within 30 sec­onds of a down­link, processed im­age snip­pets were al­ready post­ing to our com­pany Slack.

Sensor per­for­mance ex­ceeded ex­pec­ta­tions. Dynamic range, ra­diom­e­try, color bal­ance, band-to-band align­ment — all look great, even on un­cal­i­brated im­agery.

We can scan out long im­ages. Our line-scan­ning ap­proach pro­duced strips 20-30 kilo­me­ters long, ex­actly as de­signed.

Pointing accuracy and high-quality telemetry validate the ingredients for precise geolocation: the data we need to pinpoint where each pixel lands on Earth.

Jitter and smear are low. Fiber-optic gyro mea­sure­ments con­firmed 3x lower smear and 11x lower jit­ter com­pared to our goal — a crit­i­cal in­gre­di­ent for ex­quis­ite im­agery.

Our pro­pri­etary im­age sched­uler works. The au­to­mated sys­tem that plans col­lec­tions, man­ages con­straints, and op­ti­mizes what we cap­ture each day per­formed as de­signed.

Nine months into the mis­sion, we lost con­tact with Clarity-1.

By that point, we had largely ex­hausted our op­tions on the CMGs. The path to fur­ther im­age qual­ity im­prove­ment had ef­fec­tively closed.

We had been track­ing in­ter­mit­tent mem­ory is­sues in our TT&C ra­dio through­out the mis­sion, work­ing around them as they ap­peared. Our best the­ory is that one of these is­sues es­ca­lated in a way that cor­rupted on­board mem­ory and is pre­vent­ing re­boots. We’ve tried sev­eral re­cov­ery ap­proaches. So far, none have worked, and the like­li­hood of re­cov­ery looks low at this point.

But here’s what mat­ters: the VLEO val­i­da­tion data we col­lected is suf­fi­cient.

We com­bined a state-of-the-art at­mos­pheric den­sity model, our high-fi­delity or­bital dy­nam­ics force mod­els, and months of nat­ural or­bit de­cay data from 350 to 380 km al­ti­tude to de­ter­mine Clarity’s co­ef­fi­cient of drag — with re­peat­able re­sults at dif­fer­ent al­ti­tudes. That drag co­ef­fi­cient, paired with our demon­strated abil­ity to main­tain al­ti­tude in VLEO for months us­ing high-ef­fi­ciency thrusters, tells us ex­actly how the ve­hi­cle be­haves un­der aero­dy­namic drag across the VLEO regime — and val­i­dates an av­er­age five-year lifes­pan at 275 km across the so­lar cy­cle. Telemetry from our so­lar ar­rays, to­gether with on­board atomic oxy­gen sen­sor data, shows peak power gen­er­a­tion stayed con­stant af­ter ex­po­sure to VLEO lev­els of AO flu­ence — prov­ing our AO mit­i­ga­tion worked.

Thanks to our friends at LeoLabs, we’ve val­i­dated that Clarity is main­tain­ing at­ti­tude au­tonomously. She’s still up there, still ori­ented, still de­scend­ing through VLEO. Just not talk­ing to us.

Even be­fore this, we had started de­vel­op­ing an in-house TT&C ra­dio for our sys­tems mov­ing for­ward, rather than reusing this ra­dio that was pro­cured from a third party. We’ll in­cor­po­rate learn­ings from this re­li­a­bil­ity is­sue into that.

We’re still work­ing the prob­lem. This chap­ter is­n’t over yet. But even if it is, Clarity-1 gave us what we needed to build what comes next.

If you think about ex­quis­ite im­agery as a pyra­mid, we needed 100% of the sys­tems work­ing to­gether to achieve the pin­na­cle: 10 cm vis­i­ble im­agery. We got to about 98%. Everything else in that pyra­mid — the en­tire foun­da­tion — is proven and re­tired.

Our drag co­ef­fi­cient. Our atomic oxy­gen re­silience. Our so­lar ar­rays. Our ther­mal man­age­ment. Our flight soft­ware. Our ground soft­ware. Our CMG steer­ing laws. Our pre­ci­sion point­ing al­go­rithms. Our pay­load elec­tron­ics. Our sen­sor per­for­mance. Our im­age pro­cess­ing chain. Our abil­ity to op­er­ate sus­tain­ably in VLEO. Our team.

We know exactly what to fix. It’s straightforward: operate the CMGs at lower temperature. The system thermal design is already updated in the next build to maximize CMG life going forward.

Beyond the CMGs, there were a hand­ful of learn­ings on the mar­gins. We learned our sec­ondary mir­ror struc­ture could be stiffer — al­ready in the up­dated de­sign. We learned we could use more heater ca­pac­ity in some pay­load zones — al­ready fixed.

We learned from the things that worked, too. We’re well down the de­vel­op­ment path for next-gen flight soft­ware, avion­ics, and power dis­tri­b­u­tion. Orbit de­ter­mi­na­tion and ge­olo­ca­tion will be even bet­ter. Additional sur­face treat­ments will im­prove drag co­ef­fi­cient fur­ther. Power-generation will in­crease while main­tain­ing the proven atomic oxy­gen re­silience. The list goes on.

The path to ex­quis­ite im­agery is clear. And that’s only one of many ex­cit­ing ca­pa­bil­i­ties un­locked by sus­tain­able op­er­a­tions in VLEO.

Our next VLEO mis­sion will in­cor­po­rate these learn­ings and demon­strate new fea­tures that en­able mis­sions be­yond imag­ing — we’ll share more de­tails soon. In par­al­lel, imag­ing re­mains a core fo­cus: we’re con­tin­u­ing to build op­ti­cal pay­loads for EO/IR mis­sions as part of a broader VLEO roadmap.

The suc­cesses of Clarity-1 re­in­forced our core con­vic­tion: VLEO is­n’t just a bet­ter or­bit for imag­ing — it’s the next pro­duc­tive or­bital layer.

The physics are un­for­giv­ing, but that’s ex­actly why it mat­ters. Go lower and you un­lock a step-change in per­for­mance: sharper sens­ing, faster links, lower la­tency, and a new level of re­spon­sive­ness. The rea­son VLEO has been writ­ten off for decades is­n’t lack of up­side — it’s that most satel­lites sim­ply can’t sur­vive there long enough to mat­ter.

Now we know they can.

Clarity proved the hard parts: sus­tain­able VLEO op­er­a­tions, val­i­dated drag and life­time mod­els, atomic oxy­gen re­silience, and a flight-proven high-per­for­mance bus. We’re not spec­u­lat­ing about VLEO. We’re op­er­at­ing in it, learn­ing in it, and cap­i­tal­ized to scale it.

...

Read the original on albedo.com »
