10 interesting stories served every morning and every evening.
Today we are excited to share a big milestone. BirdyChat is now the first chat app in Europe that can exchange messages with WhatsApp under the Digital Markets Act. This brings us closer to our mission of giving work conversations a proper home.
WhatsApp is currently rolling out interoperability support across Europe. As this rollout continues, the feature will become fully available to both BirdyChat and WhatsApp users in the coming months.
...
Read the original on www.birdy.chat »
The Department of Homeland Security said the man was armed with a gun and two magazines of ammunition and circulated a photo of the weapon. DHS said a Border Patrol agent fired in self-defense. The Minnesota Bureau of Criminal Apprehension (BCA), the state’s chief investigative agency, said it was not allowed access to the scene.
...
Read the original on www.startribune.com »
When California neighborhoods increased their number of zero-emissions vehicles (ZEV) between 2019 and 2023, they also experienced a reduction in air pollution. For every 200 vehicles added, nitrogen dioxide (NO₂) levels dropped 1.1%. The results, obtained from a new analysis based on statewide satellite data, are among the first to confirm the environmental health benefits of ZEVs, which include fully electric and plug-in hybrid cars, in the real world. The study was funded in part by the National Institutes of Health and just published in The Lancet Planetary Health.
While the shift to electric vehicles is largely aimed at curbing climate change in the future, it is also expected to improve air quality and benefit public health in the near term. But few studies have tested that assumption with actual data, partly because ground-level air pollution monitors have limited spatial coverage. A 2023 study from the Keck School of Medicine of USC using these ground-level monitors suggested that ZEV adoption was linked to lower air pollution, but the results were not definitive.
Now, the same research team has confirmed the link with high-resolution satellite data, which can detect NO₂ in the atmosphere by measuring how the gas absorbs and reflects sunlight. The pollutant, released from burning fossil fuels, can trigger asthma attacks, cause bronchitis, and increase the risk of heart disease and stroke.
“This immediate impact on air pollution is really important because it also has an immediate impact on health. We know that traffic-related air pollution can harm respiratory and cardiovascular health over both the short and long term,” said Erika Garcia, PhD, MPH, assistant professor of population and public health sciences at the Keck School of Medicine and the study’s senior author.
The findings offer support for the continued adoption of electric vehicles. Over the study period, ZEV registrations increased from 2% to 5% of all light-duty vehicles (a category that includes cars, SUVs, pickup trucks and vans) across California, suggesting that the potential for improving air pollution and public health remains largely untapped.
“We’re not even fully there in terms of electrifying, but our research shows that California’s transition to electric vehicles is already making measurable differences in the air we breathe,” said the study’s lead author, Sandrah Eckel, PhD, associate professor of population and public health sciences at the Keck School of Medicine.
For the analysis, the researchers divided California into 1,692 neighborhoods, using a geographic unit similar to zip codes. They obtained publicly available data from the state’s Department of Motor Vehicles on the number of ZEVs registered in each neighborhood. ZEVs include full-battery electric cars, plug-in hybrids and fuel-cell cars, but not heavier duty vehicles like delivery trucks and semi trucks.
Next, the research team obtained data from the Tropospheric Monitoring Instrument (TROPOMI), a high-resolution satellite sensor that provides daily, global measurements of NO₂ and other pollutants. They used this data to calculate annual average NO₂ levels in each California neighborhood from 2019 to 2023.
Over the study period, a typical neighborhood gained 272 ZEVs, with most neighborhoods adding between 18 and 839. For every 200 new ZEVs registered, NO₂ levels dropped 1.1%, a measurable improvement in air quality.
“These findings show that cleaner air isn’t just a theory—it’s already happening in communities across California,” Eckel said.
To confirm that these results were reliable, the researchers conducted several additional analyses. They accounted for pandemic-related contributions to the NO₂ decline, for example by excluding the year 2020 and controlling for changing gas prices and work-from-home patterns. The researchers also confirmed that neighborhoods that added more gas-powered cars saw the expected rise in pollution. Finally, they replicated their results using updated data from ground-level monitors from 2012 to 2023.
“We tested our analysis in many different ways, and the results consistently support our main finding,” Garcia said.
These results show that TROPOMI satellite data—which covers nearly the entire planet—can reliably track changes in combustion-related air pollution, offering a new way to study the effects of the transition to electric vehicles and other environmental interventions.
Next, Garcia, Eckel and their team are comparing data on ZEV adoption with data on asthma-related emergency room visits and hospitalizations across California. The study could be one of the first to document real-world health improvements as California continues to embrace electric vehicles.
In addition to Garcia and Eckel, the study’s other authors are Futu Chen, Sam J. Silva and Jill Johnston from the Department of Population and Public Health Sciences, Keck School of Medicine of USC, University of Southern California; Daniel L. Goldberg from the Milken Institute School of Public Health, The George Washington University; Lawrence A. Palinkas from the Herbert Wertheim School of Public Health and Human Longevity Science, University of California, San Diego; and Alberto Campos and Wilma Franco from the Southeast Los Angeles Collaborative.
This work was supported by the National Institutes of Health/National Institute of Environmental Health Sciences [R01ES035137, P30ES007048]; the National Aeronautics and Space Administration Health and Air Quality Applied Sciences Team [80NSSC21K0511]; and the National Aeronautics and Space Administration Atmospheric Composition Modeling and Analysis Program [80NSSC23K1002].
...
Read the original on keck.usc.edu »
Epicenter.works, the Society for Civil Rights, the Federation of German Consumer Organizations, and Stanford Professor Barbara van Schewick are filing an official complaint with the Federal Network Agency against Deutsche Telekom’s unfair business practices.
Deutsche Telekom is creating artificial bottlenecks at access points to its network. Financially strong services that pay Telekom get through quickly and work perfectly. Services that cannot afford this are slowed down and often load slowly or not at all.
This means Telekom decides which services we can use without issues, violating net neutrality. We are filing a complaint with the Federal Network Agency to stop this unfair practice together!
...
Read the original on netzbremse.de »
Google has responded to our recent report on new Google Play strings hinting at changes to how Android will handle sideloaded apps in the future. The company has now confirmed that a “high-friction” install process is on the way.
Replying to our story on X, Matthew Forsyth, Director of Product Management, Google Play Developer Experience & Chief Product Explainer, said the system isn’t a sideloading restriction, but an “Accountability Layer.” Advanced users will still be able to choose “Install without verifying,” though Google says that path will involve extra steps meant to ensure users understand the risks of installing apps from unverified developers.
That explanation broadly matches what we’re seeing in recent versions of Google Play, where new warning messages emphasize developer verification, internet requirements, and potential risks, while still allowing users to proceed.
What remains to be seen is how far Google takes this “high-friction” approach. Clear warnings are one thing, but quietly making sideloading more painful is another. Android’s openness has always depended on power users being able to install apps without excessive hoops.
For now, Google hasn’t suggested requirements like using a PC or external tools, and we hope the added friction is limited to risk education.
...
Read the original on www.androidauthority.com »
You can now view replies to this blog post made on Bluesky directly on this website. Check it out here!
I’ve always wanted to host a comment section on my site, but it’s difficult because the content is statically generated and hosted on a CDN. I could host comments on a separate VPS or cloud service. But maintaining a dynamic web service like this can be expensive and time-consuming — in general, I’m not interested in being an unpaid, part-time DevOps engineer.
Recently, however, I read a blog post by Cory Zue about how he embedded a comment section from Bluesky on his blog. I immediately understood the benefits of this approach. With this approach, Bluesky handles all of the difficult work involved in running a social network, like account verification, hosting, storage, spam, and moderation. Meanwhile, because Bluesky is an open platform with a public API, it’s easy to directly embed comments on my own site.
There are other services that could be used for this purpose instead. Notably, I could embed replies from the social media formerly known as Twitter. Or I could use a platform like Disqus or even giscus, which hosts comments on GitHub Discussions. But I see Bluesky as a clearly superior choice among these options. For one, Bluesky is built on top of an open social media platform in AT Proto, meaning it can’t easily be taken over by an authoritarian billionaire creep. Moreover, Bluesky is a full-fledged social media platform, which naturally makes it a better option for hosting a conversation than GitHub.
Zue published a standalone package called bluesky-comments that allows embedding comments in a React component as he did. But I decided to build this feature myself instead. Mainly this is because I wanted to make a few styling changes anyway to match the rest of my site. But I also wanted to leave the option open to adding more features in the future, which would be easier to do if I wrote the code myself. The entire implementation is small regardless, amounting to only ~200 LOC between the UI components and API functions.
Initially, I planned to allow people to directly post on Bluesky via my site. This would work by providing an OAuth flow that gives my site permission to post on Bluesky on behalf of the user. I actually did get the auth flow working, but building out a UI for posting and replying to existing comments is difficult to do well. Going down this path quickly leads to building what is essentially a custom Bluesky client, which I didn’t have the time or interest in doing right now. Moreover, because the user needs to go through the auth flow and sign-in to their Bluesky account, the process is not really much easier than posting directly on a linked Bluesky post.
Without the requirement of allowing others to directly post on my site, the implementation became much simpler. Essentially, my task was to specify a Bluesky post that corresponds to the article in the site’s metadata. Then, when the page loads I fetch the replies to that post from Bluesky, parse the response, and display the results in a simple comment section UI.
As explained in my last post, this site is built using React Server Components and Parcel. The content of my articles is written using MDX, an extension to Markdown that allows directly embedding JavaScript and JSX. In each post, I export a metadata object that I validate using a Zod schema. For instance, the metadata for this post looks like this:
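The metadata snippet itself isn't reproduced in this excerpt, but based on the description (a Zod-validated metadata export with a bskyPostId field), a sketch might look like the following; every field and value except bskyPostId is my own illustration:

```ts
import { z } from "zod";

// Illustrative schema: bskyPostId identifies the Bluesky post whose replies become the comments.
export const postMetadataSchema = z.object({
  title: z.string(),
  description: z.string().optional(),
  bskyPostId: z.string().optional(), // optional so posts without a linked thread still validate
});

// In the MDX post itself, something along these lines would be exported and validated:
export const metadata = postMetadataSchema.parse({
  title: "Bluesky comments on a static site",
  bskyPostId: "3example000000", // record key of the linked Bluesky post (placeholder value)
});
```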
The value of bskyPostId references the Bluesky post from which I’ll pull replies to display in the comment section. Because my project is built in TypeScript, it was easy to integrate with the Bluesky TypeScript SDK (@atproto/api on NPM). Reading the Bluesky API documentation and Zue’s implementation led me to the getPostThread endpoint. Given an AT Protocol URI, this endpoint returns an object with data on the given post and its replies.
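The post doesn't include its fetch code, but getPostThread is a public, unauthenticated XRPC query on the Bluesky AppView, so a standalone sketch (helper and constant names are mine) could look like this; the same call is also available through the TypeScript SDK:

```ts
// Sketch only: fetches a post's thread (replies included) from Bluesky's public AppView.
const APPVIEW = "https://public.api.bsky.app";

export async function fetchPostThread(atUri: string) {
  const url = new URL(`${APPVIEW}/xrpc/app.bsky.feed.getPostThread`);
  url.searchParams.set("uri", atUri);  // e.g. at://<did-or-handle>/app.bsky.feed.post/<bskyPostId>
  url.searchParams.set("depth", "10"); // how many levels of nested replies to return
  const res = await fetch(url);
  if (!res.ok) throw new Error(`getPostThread failed: ${res.status}`);
  return (await res.json()) as { thread: any };
}
```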
I could have interacted directly with the Bluesky API from my React component using fetch and useEffect. However, it can be a bit tricky to correctly handle loading and error states, even for a simple feature like this. Because of this, I decided to use the TanStack react-query package to manage the API request/response cycle. This library takes care of the messy work of handling errors, retries, and loading states while I simply provide it a function to fetch the post data.
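A minimal hook along those lines, assuming the fetchPostThread helper sketched earlier (the module path is hypothetical), might be:

```tsx
import { useQuery } from "@tanstack/react-query";
import { fetchPostThread } from "./bluesky"; // helper from the earlier sketch

export function usePostThread(atUri: string | undefined) {
  return useQuery({
    queryKey: ["bsky-thread", atUri],       // cache one thread per post
    queryFn: () => fetchPostThread(atUri!), // react-query handles loading, error and retry state
    enabled: Boolean(atUri),                // skip posts that have no linked Bluesky post
    staleTime: 5 * 60 * 1000,               // avoid refetching on every mount for 5 minutes
  });
}
```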
Once I obtain the Bluesky response, the next task is parsing out the content and metadata for the replies. Bluesky supports a rich content structure in its posts for representing markup, references, and attachments. Building out a UI that fully respects this rich content would be difficult. Instead, I decided to keep it simple by just pulling out the text content from each reply.
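Pulling out just the text plus a few display fields can be a simple recursive walk over the thread. A sketch, with field access based on the documented app.bsky.feed.defs#threadViewPost shape (not the post's actual code):

```ts
export type Comment = {
  text: string;
  handle: string;
  avatar?: string;
  indexedAt: string;
  depth: number;
  uri: string;
};

// Walk the thread depth-first, keeping only plain text plus a few display fields.
// Blocked or not-found reply nodes have no `post` property and are skipped.
export function flattenReplies(node: any, depth = 0, out: Comment[] = []): Comment[] {
  for (const reply of node?.replies ?? []) {
    if (!reply?.post) continue;
    out.push({
      text: reply.post.record?.text ?? "",
      handle: reply.post.author.handle,
      avatar: reply.post.author.avatar,
      indexedAt: reply.post.indexedAt,
      depth,
      uri: reply.post.uri,
    });
    flattenReplies(reply, depth + 1, out);
  }
  return out;
}
```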
Even so, building a UI that properly displays threaded comments, particularly one that is formatted well on small mobile devices, can be tricky. For now, my approach was to again keep it simple. I indented each reply and added a left border to make it easier to follow reply threads. Otherwise, I mostly copied design elements for layout of the profile picture and post date from Bluesky.
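As a rough picture of that layout (indentation plus a left border per reply depth; the styling values and component shape are mine, not the site's):

```tsx
import type { Comment } from "./flattenReplies"; // type from the previous sketch

export function CommentList({ comments }: { comments: Comment[] }) {
  // Each reply is indented by its depth and given a left border so threads are easy to follow.
  return (
    <section>
      {comments.map((c) => (
        <article
          key={c.uri}
          style={{ marginLeft: c.depth * 16, borderLeft: "2px solid #ccc", paddingLeft: 12 }}
        >
          <header>
            <img src={c.avatar} alt="" width={24} height={24} style={{ borderRadius: "50%" }} />
            <strong> @{c.handle} </strong>
            <time dateTime={c.indexedAt}>{new Date(c.indexedAt).toLocaleDateString()}</time>
          </header>
          <p>{c.text}</p>
        </article>
      ))}
    </section>
  );
}
```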
Lastly, I added a UI component linking to the parent post on Bluesky, and encouraging people to add to the conversation there. With this, the read-only comment section implementation was complete. If there’s interest, I could publish my version of Bluesky comments as a standalone package. But several of the choices I made were relatively specific to my own site. Moreover, the implementation is simple enough that others could probably build their own version from reading the source code, just as I did using Zue’s version.
Let me know what you think by replying on Bluesky. Hopefully this can help increase engagement with my blog posts, but then again, my last article generated no replies, so maybe not 😭.
Join the conversation by replying on Bluesky…
...
Read the original on micahcantor.com »
Imagine the internet suddenly stops working. Payment systems in your local food store go down. Healthcare systems in the regional hospital flatline. Your work software tools, and all the information they contain, disappear.
You reach out for information but struggle to communicate with family and friends, or to get the latest updates on what is happening, as social media platforms are all down. Just as someone can pull the plug on your computer, it’s possible to shut down the system it connects to.
This isn’t an outlandish scenario. Technical failures, cyber-attacks and natural disasters can all bring down key parts of the internet. And as the US government makes increasing demands of European leaders, it is possible to imagine Europe losing access to the digital infrastructure provided by US firms as part of the geopolitical bargaining process.
At the World Economic Forum in Davos, Switzerland, the EU’s president, Ursula von der Leyen, has highlighted the “structural imperative” for Europe to “build a new form of independence” — including in its technological capacity and security. And, in fact, moves are already being made across the continent to start regaining some independence from US technology.
A small number of US-headquartered big tech companies now control a large proportion of the world’s cloud computing infrastructure, that is, the global network of remote servers that store, manage and process all our apps and data. Amazon Web Services (AWS), Microsoft Azure and Google Cloud are reported to hold about 70% of the European market, while European cloud providers have only 15%.
My research supports the idea that relying on a few global providers increases vulnerability for Europe’s private and public sectors — including the risk of cloud computing disruption, whether caused by technical issues, geopolitical disputes or malicious activity.
Two recent examples — both the result of apparent technical failures — were the hours‑long AWS incident in October 2025, which disrupted thousands of services such as banking apps across the world, and the major Cloudflare incident two months later, which took LinkedIn, Zoom and other communication platforms offline.
The impact of a major power disruption on cloud computing services was also demonstrated when Spain, Portugal and some of south-west France endured a massive power cut in April 2025.
There are signs that Europe is starting to take the need for greater digital independence more seriously. In the Swedish coastal city of Helsingborg, for example, a one-year project is testing how various public services would function in the scenario of a digital blackout.
Would elderly people still receive their medical prescriptions? Can social services continue to provide care and benefits to all the city’s residents?
This pioneering project seeks to quantify the full range of human, technical and legal challenges that a collapse of technical services would create, and to understand what level of risk is acceptable in each sector. The aim is to build a model of crisis preparedness that can be shared with other municipalities and regions later this year.
Elsewhere in Europe, other forerunners are taking action to strengthen their digital sovereignty by weaning themselves off reliance on global big tech companies — in part through collaboration and adoption of open source software. This technology is treated as a digital public good that can be moved between different clouds and operated under sovereign conditions.
In northern Germany, the state of Schleswig-Holstein has made perhaps the clearest break with digital dependency. The state government has replaced most of its Microsoft-powered computer systems with open-source alternatives, cancelling nearly 70% of its licenses. Its target is to use big tech services only in exceptional cases by the end of the decade.
Across France, Germany, the Netherlands and Italy, governments are investing both nationally and transnationally in the development of digital open-source platforms and tools for chat, video and document management — akin to digital Lego bricks that administrations can host on their own terms.
In Sweden, a similar system for chat, video and online collaboration, developed by the National Insurance Agency, runs in domestic data centres rather than foreign clouds. It is being offered as a service for Swedish public authorities looking for sovereign digital alternatives.
For Europe — and any nation — to meaningfully address the risks posed by digital blackout and cloud collapse, digital infrastructure needs to be treated with the same seriousness as physical infrastructure such as ports, roads and power grids.
Control, maintenance and crisis preparedness of digital infrastructure should be seen as core public responsibilities, rather than something to be outsourced to global big tech firms, open for foreign influence.
To encourage greater focus on digital resilience among its member states, the EU has developed a cloud sovereignty framework to guide procurement of cloud services — with the intention of keeping European data under European control. The upcoming Cloud and AI Development Act is expected to bring more focus and resources to this area.
Governments and private companies should be encouraged to demand security, openness and interoperability when seeking bids for provision of their cloud services — not merely low prices. But in the same way, as individuals, we can all make a difference with the choices we make.
Just as it’s advisable to ensure your own access to food, water and medicine in a time of crisis, be mindful of what services you use personally and professionally. Consider where your emails, personal photos and conversations are stored. Who can access and use your data, and under what conditions? How easily can everything be backed up, retrieved and transferred to another service?
No country, let alone continent, will ever be completely digitally independent, and nor should they be. But by pulling together, Europe can ensure its digital systems remain accessible even in a crisis — just as is expected from its physical infrastructure.
...
Read the original on theconversation.com »
Researchers on Friday said that Poland’s electric grid was targeted by wiper malware, likely unleashed by Russian state hackers, in an attempt to disrupt electricity delivery operations.
The cyberattack, Reuters reported, occurred during the last week of December. The news organization said it was aimed at disrupting communications between renewable installations and the power distribution operators but failed for reasons that were not explained.
On Friday, security firm ESET said the malware responsible was a wiper, a type of malware that permanently erases code and data stored on servers with the goal of destroying operations completely. After studying the tactics, techniques, and procedures (TTPs) used in the attack, company researchers said the wiper was likely the work of a Russian government hacker group tracked under the name Sandworm.
“Based on our analysis of the malware and associated TTPs, we attribute the attack to the Russia-aligned Sandworm APT with medium confidence due to a strong overlap with numerous previous Sandworm wiper activity we analyzed,” said ESET researchers. “We’re not aware of any successful disruption occurring as a result of this attack.”
Sandworm has a long history of destructive attacks waged on behalf of the Kremlin and aimed at adversaries. Most notable was one in Ukraine in December 2015. It left roughly 230,000 people without electricity for about six hours during one of the coldest months of the year. The hackers used general purpose malware known as BlackEnergy to penetrate power companies’ supervisory control and data acquisition systems and, from there, activate legitimate functionality to stop electricity distribution. The incident was the first known malware-facilitated blackout.
...
Read the original on arstechnica.com »
Today we’re going to be taking a look at what almost 13 years of development has done for the Raspberry Pi. I have one of each generation of Pi from the original Pi that was launched in 2012 through to the Pi 5 which was released just over a year ago.
We’ll take a look at what has changed between each generation and how their performance and power consumption has improved by running some tests on them.
Here's my video of the testing process and results; read on for the write-up.
This is the original Raspberry Pi, which was launched in February 2012.
This Pi has a Broadcom BCM2835 SOC which features a single ARM1176JZF-S core running at 700MHz along with a VideoCore IV GPU. It has 512 MB of DDR RAM.
In terms of connectivity, it only has 100Mb networking and 2 x USB 2.0 ports. Video output is 1080P through a full-size HDMI port or analogue video out through a composite video connector and audio output is provided through a 3.5mm audio jack. It doesn’t have any WiFi or Bluetooth connectivity but it does have some of the features that we still have on more recent models like DSI and CSI ports, a full size SD card reader for the operating system and GPIO pins, although only 26 of them at this stage.
Power is supplied through a micro USB port and it is rated for 5V and 700mA.
It was priced at $35 — which at the time was incredibly cheap for what was essentially a palm-sized computer.
The Raspberry Pi 2 was launched 3 years later, in February 2015, and this Pi looked quite different to the original and similar to the Pis we know today.
The Pi 2 has a significantly better processor than the original. The Broadcom BCM2836 SOC has 4 Cortex-A7 cores running at 900 MHz and it retained the same VideoCore IV GPU. RAM was also bumped up to 1GB.
It added another 2 x USB 2.0 ports alongside the 100Mb Ethernet port. The composite video port disappeared and the analogue video output was moved into the audio jack.
The GPIO header was increased to 40 pins and has followed the same pin layout ever since, which has really helped in maintaining compatibility with hats and accessories. The SD card reader was also changed to a microSD card reader.
The power circuitry was bumped up to 800mA to accommodate the more powerful CPU.
The Raspberry Pi 3 was launched just a year later, in February 2016.
The Pi 3’s new Broadcom BCM2837 SOC retained the same 4-core architecture, but the cores were changed to 64-bit Cortex-A53 cores running at 1.2GHz.
RAM was kept at 1GB but was now DDR2.
There was no change to the USB or Ethernet connectivity on the original Pi 3 but we did see WiFi and Bluetooth added for the first time. WiFi was single band 2.4GHz and we had Bluetooth 4.1.
The version that I have is actually the 3B+, which was launched a little later. The main improvements over the original Pi 3 were a 0.2GHz boost to the clock speed and the upgrade to Gigabit networking with PoE (Power over Ethernet) support and dual-band WiFi.
The power circuitry was again improved, still running at 5V but now up to 1.34A, which was almost double the Pi 2.
Next came the Pi 4 in June 2019. This Pi came at one of the worst times for global manufacturing and was notoriously difficult to get hold of due to the impact of COVID on the global supply chain. Quite ironically, this hard-to-get Pi is the one that I’ve got the most of, mainly due to my water-cooled Pi cluster build.
The Pi 4 has a Broadcom BCM2711 SOC with 4 Cortex-A72 cores running at 1.5GHz. So again a slight clock speed increase over the Pi 3 but still retaining 4 cores. It also includes a bump up to a VideoCore VI GPU.
This was the first model to feature different RAM configurations. It was originally available in 1GB, 2GB and 4GB variants featuring LPDDR4 RAM, and in March of 2020 an 8GB variant was added to the lineup as well. This obviously resulted in a few different price points, but impressively they still managed to keep a $35 offering 7 years after the launch of the first Pi.
It retained the same form factor as the Pi 3 but with the USB and Ethernet ports switched around. Notably, two of the USB ports were upgraded to USB 3.0, networking was now Gigabit Ethernet like the 3B+, WiFi was dual-band and it had Bluetooth 5.0.
They also changed the single full-size HDMI port to two micro HDMI ports. Most people I know don’t like this change: it’s annoying to have to use adaptors to work with common displays, and these micro HDMI ports are prone to breaking when they are used often. I think general hobbyists and makers would prefer this to still be a single full-size port, but Pis are often used in commercial display applications, so I guess that’s why they went with this dual micro HDMI configuration.
The power circuit was actually reduced in this model, from 1.34 down to 1.25A and the port was changed to USB C.
Lastly and most recently we have the Pi 5 which was launched in October 2023.
This Pi features a Broadcom BCM2712 SOC with 4 Cortex-A76 cores running at a significantly faster 2.4GHz and a VideoCore VII GPU running at 800MHz.
So quite a bump up in CPU and GPU performance.
It is offered in 3 RAM configurations, but dropping the 1GB option means they’re no longer available at the $35 price point. There is a fairly significant increase in price, up to $50 for the base 2GB variant.
Some other notable changes are the inclusion of a PCIe port which enables IO expansion and a much improved power circuit. The PCIe port is quite commonly used to add an NVMe SSD instead of a microSD card for the operating system.
The power circuit was upgraded to handle the PCIe port addition, now stepping up to 5V at up to 5A, along with a power button for the first time.
The change in power supply requirements to 5V and 5A is a bit annoying, as most power-delivery-capable supplies cap out at 2.5 or 3A at 5V. It would have been more universal to require a 9V 3A supply to meet the Pi’s power requirements. I assume they steered away from this because the Pi’s circuitry runs at 5V and 3.3V, so they would have needed to add another onboard DC-DC converter, which increases complexity, size and potentially cost, and would also have made it a bit less efficient. But this does mean that you most likely need to buy a USB C power supply that has been purpose-built for the Pi 5.
The Pi 5 is also the first Pi to have its own dedicated fan socket.
So that’s a summary of the hardware changes, now let’s boot them up and take a look at their performance.
To compare the performance between the Pis, I’m going to run the following tests.
* I’m going to attempt to playback a 1080P YouTube video in the browser, although I expect we’ll have problems with this up to the Pi 4.
* We’ll then run a Sysbench CPU benchmark, which I’ll do for both single-core and multicore.
* Then test the storage speed using James Chambers’ Pi Benchmarks script.
* Lastly, we’ll look at Power Consumption, both at idle and with the CPU maxed out.
* And then use that data to determine each Pi’s Performance per Watt.
To keep things as consistent as possible I’m going to be running the latest available version of Pi OS from Raspberry Pi Imager for each Pi. I was pleasantly surprised to find that you can still flash an OS image for the original Pi in their latest version of Imager.
I’ll be testing them all running on a 32GB Sandisk Ultra microSD card. I’ll also be using an Ice Tower cooler on each to ensure they don’t run anywhere near thermal throttling.
I started with the original Pi and its first boot and setup process was a lesson in patience. It took me the best part of two hours to get the first boot complete, the Pi updated and the testing utilities installed but I got there in the end.
Even once set up it takes about 8 minutes to boot up to the desktop and the CPU stays pegged at 100% for another two to three minutes before dropping down to about 20% at idle.
The original Pi refused to open up the browser, so that’s where my YouTube video playback test ended.
The Pi 2 managed to open the browser and actually started playing back a 1080P video, which was surprising, but playback was terrible. It dropped pretty much all of the frames both in the window and fullscreen.
The Pi 3 played video back noticeably better than the Pi 2, but it’s still quite a long way away from being usable and still drops a lot of frames.
The Pi 4 handled 1080P video reasonably well. It had some initial trouble but then settled down. Fullscreen was a bit choppy but still usable.
The Pi 5 handled 1080P playback well without any significant issues both in the window and fullscreen.
Next was the Sysbench CPU benchmark. I ran three tests on each and averaged the scores and I did this for both single-core and multicore.
In single core, the Pi 1 managed a rather dismal score of 68, and the Pi 2 got a bit more than double that. The real step up was the Pi 3, which managed a score 18 times higher than the Pi 2. The Pi 4 and Pi 5 also offered good improvements on the previous generations.
Similarly in multicore, the Pi 3 scored over 18 times the score of the Pi 2, and the Pi 4 and 5 provided good improvements on the Pi 3’s score.
Comparing the combined multicore score of the Pi 5 to what the single core on the Pi 1 can do, the Pi 5 is a little over 600 times faster.
Next, I tried running a GLMark2 GPU benchmark on them. I used the GLMark2-es2-wayland version which is designed for OpenGL ES so that the Pi 1 was supported.
I was surprised that the Pi 1 was even able to run GLMark2 — it did complete the benchmark, although the score wasn’t all that impressive.
These results really show how the Pi’s GPU has improved in the last two generations. Prior to these tests, I had never seen a score below 100 and the Pi 1, 2 and 3 managed to fall short of triple digits. Pi 5 scored over 2.5 times higher than the Pi 4.
Next was the storage speed test using James Chambers’ Pi Benchmarks script. The bus speed has increased over the years from 25MHz on the Pi 1 to 100MHz on the Pi 5, so I expect we’ll see this reflected in the benchmark scores.
The storage speed test’s results aren’t as dramatic as the CPU and GPU results but show a steady improvement between generations. The Pi 3 did a bit worse than the Pi 2 but this small difference is likely just due to variability in the tests.
Next, I ran the iPerf network speed test on each.
The Pi 1 doesn’t quite reach its theoretical 100Mbps, but the Pi 2 does. The Pi 3B+, although it has Gigabit Ethernet, is limited by that connection running over USB 2.0, which caps it at a maximum of around 300Mbps, so it came quite close to that. Both the Pi 4 and 5 expectedly come close to theoretical Gigabit speeds.
Lastly, I tested the power consumption of each Pi at idle and under load.
I used the same Pi 5 power adaptor to test all of the Pis to keep things consistent and I just used a USB C to micro USB adaptor for the Pi 1, 2 and 3.
The idle results were closer than I expected. The Pi 2 had the lowest idle power draw and the Pi 5 the highest, but all were within a watt or two of each other. At full load, you can see the more powerful CPUs drawing more power, with the Pi 5 drawing almost three times as much as the Pi 1 and Pi 2.
Converted to performance per watt using the Sysbench results, we can again see how much better the Pi 4 and 5 are over the Pi 1 and 2. There is a clear improvement in the performance that each generation of Pi is able to get per watt of power, which is essentially its efficiency. Although the Pi 5 draws more power than the Pi 1 under full load, you’re getting almost 200 times more performance out of it per watt.
I really enjoyed working through this project to see how much Pis have changed over the years, particularly in terms of performance. I still remember being amazed at the size and price of the original Pi when it came out, and it’s great that they’re still fully supported and can still be used for projects — albeit less CPU-intensive ones.
Let me know what you think has been the biggest improvement to the Pi over the years and what you’d still like to see added to future models in the comments section below.
I personally really like the addition of the PCIe port on the Pi 5 and I’d like to see 2.5Gb networking and a DisplayPort or USB C with DisplayPort added to a future generation of Pi.
...
Read the original on the-diy-life.com »
On March 14, 2025, Albedo’s first satellite, Clarity-1, launched on SpaceX Transporter-13. We took a big swing with our pathfinder. The mission goals:
1. Prove sustainable orbit operations in VLEO — an orbital regime long considered too harsh for commercial satellites — by overcoming thick atmospheric drag, dangerous atomic oxygen, and extreme speeds.
2. Prove our mid-size, high-performance Precision bus — designed and built in-house in just over two years.
3. Capture 10 cm resolution visible imagery and 2-meter thermal infrared imagery, a feat previously achieved only by exquisite, billion dollar government systems.
We achieved the first two goals definitively and validated 98% of the technology required for the third. This was an extraordinarily ambitious first satellite. We designed and built a high-performance bus on time and on budget, integrated a large-aperture telescope, and operated in an environment no commercial company had sustained operations in, funded entirely by private capital.
This is the full story.
Let’s start with the result that matters most: VLEO works. And it works better than even we expected.
For decades, Very Low Earth Orbit was written off as impractical for normal satellite lifetimes. The atmosphere is thicker, creating drag that would deorbit normal satellites in weeks. If the drag didn’t kill you, atomic oxygen would erode your solar arrays and surfaces. To succeed in VLEO required a fundamentally different satellite design.
The drag coefficient was the headline: 12% better than our design target. Measured multiple times at altitudes between 350 km and 380 km with repeatable results, it validates the models that predict a satellite lifespan of five years at 275 km altitude, averaged across the solar cycle. This was one of our most critical assumptions, and we exceeded it.
Atomic oxygen (AO) is the silent killer in VLEO. The deeper you go, the more AO you encounter. It degrades solar arrays and other traditional satellite materials. We developed a new class of solar arrays with unique measures designed to mitigate AO degradation. They work. Even as we descended deeper into VLEO and AO fluence increased logarithmically, our power generation stayed constant. The solar arrays are holding up as designed.
Clarity-1 demonstrated over 100 km of controlled altitude descent, stationkeeping in VLEO, and survived a solar storm that temporarily spiked atmospheric density — the impact on Clarity’s descent rate was barely noticeable. Momentum management worked. Fault detection worked. Our thrust planning model was validated against GOCE data (a 2009 VLEO R&D mission) with sub-meter accuracy. Radiation tolerance was excellent, with 4x fewer single-event upsets than expected. Orbit determination was dialed.
Developed and built in just over two years, our in-house bus Precision is now TRL-9: flight-proven on-orbit.
Every bus subsystem worked. Every piece of in-house technology we developed performed: our CMG steering law, our operational modes, flight and ground software, electronics boards, and our novel thermal management system. We hit our embedded software GNC timing deadlines, we converged our attitude and orbit determination estimators, we saw 4π steradian command and telemetry antenna coverage, and we got on-orbit actuals for our power generation and loads.
Our cloud-native ground system was incredible. Contact planning across 25 ground stations was completely automated. Mission scheduling updated every 15 minutes to incorporate new tasking and the latest satellite state information, smoothly transitioning to updated on-board command loads with visual tracking of each schedule and its status. Automated thrust planning to achieve our desired orbital trajectory supported 30+ maneuvers per day. Our engineers could track and command the satellite from anywhere with internet and a secure VPN.
We pushed 14 successful flight software feature updates on-orbit — and even executed one FPGA update, which is exceptionally rare. The ability to continuously improve throughout Clarity’s operational life proved essential — every major solution to challenges we faced involved flight software updates. On-orbit software upgrades are exceedingly tricky to get right, but Clarity-1 was designed from day one around this foundational capability.
The first month of the mission was magic.
An hour after launch, we watched Clarity-1 deploy from the premium caketopper slot into LEO, giving us an incredible view of the Nile River as she separated from the rocket.
First contact came just three hours later at 5:11am MT. Imagine sitting in Mission Control, watching two ground station passes with no data, then on the third: heaps of green, healthy telemetry streaming into all of the subsystem dashboards. Clarity had nailed her autonomous boot-up sequence and rocket separation rate capture. Stuck the landing.
The next milestone — and the one many of us were most anxious about — was our autonomous Protect Mode, basically our VLEO version of Safe Mode.
We nailed it 14 hours after launch.
By 6:45pm that same day, Clarity was in Operational mode, ready for commissioning.
The days that followed were a blur of checkboxes turning green. 4-CMG commissioning complete. Payload power-on and checkout validated. Thermal balance for both visible and thermal sensors confirmed. Our first on-orbit software update went flawlessly.
Clarity uses Control Moment Gyroscopes (CMGs) to steer the satellite, giving us more agility than more commonly used reaction wheels. We moved onto validating GNC modes such as GroundTrack, which we use to point at communication ground terminals.
We moved on to commissioning our X-band radio — the high-rate link to downlink imagery. After we uncovered an issue with our ground station provider’s pointing mode, the 800 Mbps link began pumping down data on every pass. The waveforms were clean. Textbook. A direct representation of how locked in our precision CMG pointing was.
With our first satellite at this level of complexity, we couldn’t believe how smoothly it had gone. Years of developing new technologies had been validated in a fraction of the commissioning time we’d anticipated.
Next up was maneuvering from our LEO drop-off altitude down to VLEO, where it would be safe to eject the telescope contamination cover and start snapping pictures.
One of our four CMGs experienced a temperature spike in the flywheel bearing. Our Fault Detection, Isolation, and Recovery (FDIR) logic caught it immediately, spun it down, and executed automated recovery actions. But it wouldn’t spin back up. Manual recovery attempts followed. Also unsuccessful.
Rushing back into CMG operations without understanding the failure mechanism risked killing the mission entirely, so we turned off the other three and put the satellite in two-axis stabilization using the magnetic torque rods.
We had a choice. Hack together novel 3-CMG control algorithms as fast as possible and risk losing another, or figure out how to leverage only the torque rods to achieve 3-axis control with sufficient accuracy to navigate the maneuver to VLEO.
We went with the torque rods.
On satellites this size (~600 kg), magnetic torque rods are typically used for momentum dumping, not attitude control. But we’d built Clarity with unusually beefy torque rods due to the elevated momentum management needs in VLEO. Our GNC team went heads down and developed algorithms to achieve 3-axis attitude control using only torque rods.
Within a month, we had it working.
Both of our electric thrusters commissioned quickly and were working well. But with torque rods only, our attitude control had 15 to 20 degrees of error, sometimes reaching ~45 degrees. And maneuvering to VLEO isn’t “point into the wind and fire” — it’s continuous vector and trajectory management across an orbit. That kind of control error meant inefficient burns and a much harder descent plan.
As the descent progressed, however, the team learned and iterated. With more iteration and flight software updates, we uploaded onboard logic informed by several sources of live data that dialed in our thrust vector control to within 5 degrees of the target. The autonomous thrust planning system we built enabled us to claw back performance that nearly matched our originally projected descent speed.
We maneuvered safely past the ISS and entered VLEO. Eager to pop off the contamination cover.
Once we reached safe altitude, it was time to jettison the contamination cover protecting our telescope.
There are horror stories about contamination covers getting stuck after months of temperature fluctuations.
Clarity’s was flawless. I’ll never forget seeing this blip in telemetry live — confirming through Newton’s third law that the jettison was successful. Shortly after, LeoLabs confirmed tracking of two separate objects.
We were ready to start imaging.
Here’s where it got complicated.
Our GNC and FSW teams were close but not yet finished with the new 3-CMG control law. CMGs are rarely used in commercial space, let alone by a startup. Then take one more step: singularity-prone 3-CMG control that to our knowledge has not been attempted on a non-exquisite satellite, and certainly not developed and uploaded on-orbit. Traditional algorithms require at least four CMGs to provide capability volumes free of singularities.
We were eager to make some amount of progress, so we started imaging on torque rods even though there would be severe limitations: 50+ pixels of smear, large mispointing from the wobble of torque rod control due to earth’s magnetic field, and downlink limited to at best two small images per day. The last two constraints meant we were at risk of spending precious downlink capacity on clouds.
Sure enough, the first two days of pixels were mostly clouds, but we were happy to peek through a little in this image.
Although we couldn’t control attitude accurately, we did still have good attitude knowledge after the fact. AyJay whipped up a clever idea with Claude Code that automated posting weather conditions in Slack for each collection. We analyzed that to determine which images were likely clear, and selected those for downlink.
We adjusted the focus position a few times, and images continued getting better.
Out of the box, the new algorithms and software performed perfectly.
This visualization shows real telemetry of Clarity performing seven back-to-back imaging maneuvers, with limited 3-CMG agility, followed by an X-band downlink over Iceland minutes later. The satellite was executing sophisticated attitude profiles with very low control error. Fiber-optic gyro measurements showed exquisite jitter performance.
In real time, collecting and downlinking those seven images took ten minutes.
And this is where our ground software really showed its teeth. On most missions, “data on the ground” is just the start — turning raw bits into something viewable is a slow chain of handoffs and batch processing. For us, within seconds of the downlink finishing, the image product pipeline was already posting processed snippets into our company Slack. Literally seconds.
That end-to-end loop — photons in orbit to a viewable product on the ground, within minutes — is a capability that’s still rare in this industry.
We were ready to execute focus calibration.
Large telescope optics experience hygroscopic dryout during the first few months on-orbit — moisture trapped in materials during ground assembly slowly releases in the vacuum of space, causing the focus position to drift. Dialing in best focus requires dozens of iterations: capture images, analyze sharpness, adjust focus position, repeat. Each cycle gets you closer to the optical performance the system was designed for, and our telescope’s on-ground alignment was verified to spec.
After a few iterations of this, we could start to see cars.
Even this early into imaging, the infrared images blew us away. Using a low-cost microbolometer — a fraction of the price of cooled IR sensors — we captured thermal signatures that showed ships in Tokyo Bay, steel processing facilities where we could distinguish individual coke ovens from their smokestacks, and distinct signatures between real vegetation and turf — a good proxy for camouflage detection. Day or night, clear as day.
Three days into the excitement, CMG problems started again.
A second CMG began showing the same telemetry signatures we now recognized as warning signs.
What we had learned from the investigation: the allowable temperature specifications of the CMGs were much higher than the true limit, constrained by what the lubricant inside the flywheel could handle. A straightforward fix for the future — an unfortunate corner case to learn about in hindsight.
The second CMG showing issues was also on the hot side of the satellite. While we had overhauled the vehicle and CMG operations to prevent additional bearing wear, the damage had already been done in the first month of the mission.
We spent months trying everything we could to get the CMGs to operate sustainably. The team attempted many clever solutions, one of which revived the first CMG that had locked up. We uploaded a feature to select any 3 of the 4 CMGs for operator commanding. But we weren’t able to get sustained, reliable operation.
Despite the CMG challenges, here’s what the imaging journey proved.
The full end-to-end image chain works. Photons hit our optics, get captured by our sensor, processed through payload electronics, packetized and encrypted, transmitted via our X-band radio, received on the ground, and processed into image products. The entire chain is validated.
The end-to-end loop is fast. Within 30 seconds of a downlink, processed image snippets were already posting to our company Slack.
Sensor performance exceeded expectations. Dynamic range, radiometry, color balance, band-to-band alignment — all look great, even on uncalibrated imagery.
We can scan out long images. Our line-scanning approach produced strips 20-30 kilometers long, exactly as designed.
Pointing accuracy and high-quality telemetry validate the ingredients for precise geolocation: the data we need to pinpoint where each pixel lands on Earth.
Jitter and smear are low. Fiber-optic gyro measurements confirmed 3x lower smear and 11x lower jitter compared to our goal — a critical ingredient for exquisite imagery.
Our proprietary image scheduler works. The automated system that plans collections, manages constraints, and optimizes what we capture each day performed as designed.
Nine months into the mission, we lost contact with Clarity-1.
By that point, we had largely exhausted our options on the CMGs. The path to further image quality improvement had effectively closed.
We had been tracking intermittent memory issues in our TT&C radio throughout the mission, working around them as they appeared. Our best theory is that one of these issues escalated in a way that corrupted onboard memory and is preventing reboots. We’ve tried several recovery approaches. So far, none have worked, and the likelihood of recovery looks low at this point.
But here’s what matters: the VLEO validation data we collected is sufficient.
We combined a state-of-the-art atmospheric density model, our high-fidelity orbital dynamics force models, and months of natural orbit decay data from 350 to 380 km altitude to determine Clarity’s coefficient of drag — with repeatable results at different altitudes. That drag coefficient, paired with our demonstrated ability to maintain altitude in VLEO for months using high-efficiency thrusters, tells us exactly how the vehicle behaves under aerodynamic drag across the VLEO regime — and validates an average five-year lifespan at 275 km across the solar cycle. Telemetry from our solar arrays, together with onboard atomic oxygen sensor data, shows peak power generation stayed constant after exposure to VLEO levels of AO fluence — proving our AO mitigation worked.
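As a toy illustration of the relationship being exploited here (not Albedo's actual models, and with entirely made-up numbers): for a near-circular orbit, drag shrinks the semi-major axis at a rate proportional to atmospheric density and the ballistic coefficient, so a modeled density plus an observed decay rate lets you back out the drag coefficient.

```ts
// Toy sketch: da/dt ≈ -sqrt(mu * a) * rho * (Cd * A / m) for a near-circular orbit,
// so an observed decay rate and a modeled density give the ballistic coefficient Cd*A/m.
const MU_EARTH = 3.986004418e14; // gravitational parameter, m^3/s^2
const R_EARTH = 6371e3;          // mean Earth radius, m

function dragCoefficientFromDecay(input: {
  altitudeKm: number;        // mean altitude over the fit window
  decayMetersPerDay: number; // observed altitude loss rate from orbit determination
  densityKgM3: number;       // modeled atmospheric density at that altitude
  areaM2: number;            // reference cross-sectional area
  massKg: number;
}): number {
  const a = R_EARTH + input.altitudeKm * 1e3;
  const daDt = input.decayMetersPerDay / 86400; // decay rate in m/s
  const ballistic = daDt / (Math.sqrt(MU_EARTH * a) * input.densityKgM3); // Cd * A / m
  return (ballistic * input.massKg) / input.areaM2; // isolate Cd
}

// Placeholder numbers purely to show the shape of the calculation:
console.log(dragCoefficientFromDecay({
  altitudeKm: 365,
  decayMetersPerDay: 120,
  densityKgM3: 8e-12,
  areaM2: 1.5,
  massKg: 600, // the post mentions a roughly 600 kg class vehicle
}));
```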
Thanks to our friends at LeoLabs, we’ve validated that Clarity is maintaining attitude autonomously. She’s still up there, still oriented, still descending through VLEO. Just not talking to us.
Even before this, we had started developing an in-house TT&C radio for our systems moving forward, rather than reusing this radio that was procured from a third party. We’ll incorporate learnings from this reliability issue into that.
We’re still working the problem. This chapter isn’t over yet. But even if it is, Clarity-1 gave us what we needed to build what comes next.
If you think about exquisite imagery as a pyramid, we needed 100% of the systems working together to achieve the pinnacle: 10 cm visible imagery. We got to about 98%. Everything else in that pyramid — the entire foundation — is proven and retired.
Our drag coefficient. Our atomic oxygen resilience. Our solar arrays. Our thermal management. Our flight software. Our ground software. Our CMG steering laws. Our precision pointing algorithms. Our payload electronics. Our sensor performance. Our image processing chain. Our ability to operate sustainably in VLEO. Our team.
We know exactly what to fix, and it’s straightforward: operate the CMGs at lower temperature. The system thermal design is already updated in the next build to maximize CMG life going forward.
Beyond the CMGs, there were a handful of learnings on the margins. We learned our secondary mirror structure could be stiffer — already in the updated design. We learned we could use more heater capacity in some payload zones — already fixed.
We learned from the things that worked, too. We’re well down the development path for next-gen flight software, avionics, and power distribution. Orbit determination and geolocation will be even better. Additional surface treatments will improve the drag coefficient further. Power generation will increase while maintaining the proven atomic oxygen resilience. The list goes on.
The path to exquisite imagery is clear. And that’s only one of many exciting capabilities unlocked by sustainable operations in VLEO.
Our next VLEO mission will incorporate these learnings and demonstrate new features that enable missions beyond imaging — we’ll share more details soon. In parallel, imaging remains a core focus: we’re continuing to build optical payloads for EO/IR missions as part of a broader VLEO roadmap.
The successes of Clarity-1 reinforced our core conviction: VLEO isn’t just a better orbit for imaging — it’s the next productive orbital layer.
The physics are unforgiving, but that’s exactly why it matters. Go lower and you unlock a step-change in performance: sharper sensing, faster links, lower latency, and a new level of responsiveness. The reason VLEO has been written off for decades isn’t lack of upside — it’s that most satellites simply can’t survive there long enough to matter.
Now we know they can.
Clarity proved the hard parts: sustainable VLEO operations, validated drag and lifetime models, atomic oxygen resilience, and a flight-proven high-performance bus. We’re not speculating about VLEO. We’re operating in it, learning in it, and capitalized to scale it.
...
Read the original on albedo.com »