10 interesting stories served every morning and every evening.
Last year, for the first time, wind and solar supplied more power than fossil fuels to the E.U., according to a new analysis.
The shift is largely due to the rapid expansion of solar energy, which is growing faster than any other source of electricity. Together, wind and solar generated 30 percent of E.U. power last year, while fossil fuels provided 29 percent, according to the analysis from Ember, a think tank based in London. Including hydro, renewables provided nearly half of all E.U. power in 2025.
The analysis finds that solar is making gains in every E.U. country, while coal is broadly in retreat. Last year, solar alone supplied more than 20 percent of power in Hungary, Cyprus, Greece, Spain, and the Netherlands. Meanwhile, in 19 European countries, coal accounted for less than 5 percent of power. In 2025, both Ireland and Finland joined the ranks of European countries that have shuttered their last remaining coal plants.
Warming, however, continues to challenge the shift to clean energy as drought saps hydropower. Last year, hydro output dropped slightly in the E.U., and natural gas power rose to compensate.
“The next priority for the E.U. should be to put a serious dent in reliance on expensive, imported gas,” said Ember analyst Beatrice Petrovich. “Gas not only makes the E.U. more vulnerable to energy blackmail, it’s also driving up prices.”
In parts of Europe, there are signs that increasingly cheap batteries are beginning to displace natural gas in the early evening, when power demand is high, but solar output is waning. Said Petrovich, “As this trend accelerates it could limit how much gas is needed in evening hours, therefore stabilizing prices.”
An E.U. Plan to Slash Micropollutants in Wastewater Is Under Attack
...
Read the original on e360.yale.edu »
The Department of Homeland Security said the man was armed with a gun and two magazines of ammunition and circulated a photo of the weapon. DHS said a Border Patrol agent fired in self-defense. The Minnesota Bureau of Criminal Apprehension (BCA), the state’s chief investigative agency, said it was not allowed access to the scene.
...
Read the original on www.startribune.com »
Windows PCs by default will back up their encryption keys to the cloud, and Microsoft isn’t afraid to share those with the FBI if requested.
Microsoft has confirmed in a statement to Forbes that the company will provide the FBI access to BitLocker encryption keys if a valid legal order is presented. These keys decrypt the data on a computer running Windows, giving law enforcement the means to break into a device and access its contents.
The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed to have “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds” in early 2025.
This was possible because the device in question had its BitLocker encryption key saved in the cloud. By default, Windows 11 forces the use of a Microsoft Account, and the OS will automatically tie your BitLocker encryption key to your online account so that users can easily recover their data in scenarios where they might get locked out. This can be disabled, letting you save the key locally instead, but the default behavior is to store the key in Microsoft’s cloud when setting up a PC with a Microsoft Account.
“While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide… how to manage their keys,” Microsoft spokesperson Charles Chamberlayne said in a statement to Forbes.
Microsoft told Forbes that it receives around 20 FBI requests for BitLocker encryption keys a year, but most cannot be met because the encryption key was never uploaded to the company’s cloud.
This is notable as other tech companies, such as Apple, have famously refused to provide law enforcement with access to encrypted data stored on their products. Apple has openly fought against the FBI in the past when it was asked to provide a backdoor into an iPhone. Other tech giants, such as Meta, will store encryption keys in the cloud, but use zero-knowledge architectures and encrypt the keys server-side so that only the user can access them.
It’s frankly shocking that the encryption keys that do get uploaded to Microsoft aren’t encrypted on the cloud side, too. That would prevent Microsoft from seeing the keys, but it seems that, as things currently stand, those keys are available in an unencrypted state, and it is a privacy nightmare for customers.
To see Microsoft so willingly hand over the keys to encrypted Windows PCs is concerning, and should make everybody using a modern Windows computer think twice before backing up their keys to the cloud. You can see which PCs have their BitLocker keys stored on Microsoft’s servers on the Microsoft Account website here, which will let you delete them if present.
...
Read the original on www.windowscentral.com »
Today we are excited to share a big milestone. BirdyChat is now the first chat app in Europe that can exchange messages with WhatsApp under the Digital Markets Act. This brings us closer to our mission of giving work conversations a proper home.
WhatsApp is currently rolling out interoperability support across Europe. As this rollout continues, the feature will become fully available to both BirdyChat and WhatsApp users in the coming months.
...
Read the original on www.birdy.chat »
There’s a kind of polite fiction at the heart of the software industry. It goes something like this:
Estimating how long software projects will take is very hard, but not impossible. A skilled engineering team can, with time and effort, learn how long it will take for them to deliver work, which will in turn allow their organization to make good business plans.
This is, of course, false. As every experienced software engineer knows, it is not possible to accurately estimate software projects. The tension between this polite fiction and its well-understood falseness causes a lot of strange activity in tech companies.
For instance, many engineering teams estimate work in t-shirt sizes instead of time, because it just feels too obviously silly to the engineers in question to give direct time estimates. Naturally, these t-shirt sizes are immediately translated into hours and days when the estimates make their way up the management chain.
Alternatively, software engineers who are genuinely trying to give good time estimates have ridiculous heuristics like “double your initial estimate and add 20%”. This is basically the same as giving up and saying “just estimate everything at a month”.
Should tech companies just stop estimating? One of my guiding principles is that when a tech company is doing something silly, they’re probably doing it for a good reason. In other words, practices that appear to not make sense are often serving some more basic, illegible role in the organization. So what is the actual purpose of estimation, and how can you do it well as a software engineer?
Before I get into that, I should justify my core assumption a little more. People have written a lot about this already, so I’ll keep it brief.
I’m also going to concede that sometimes you can accurately estimate software work, when that work is very well-understood and very small in scope. For instance, if I know it takes half an hour to deploy a service, and I’m being asked to update the text in a link, I can accurately estimate the work at something like 45 minutes: five minutes to push the change up, ten minutes to wait for CI, thirty minutes to deploy.
For most of us, the majority of software work is not like this. We work on poorly-understood systems and cannot predict exactly what must be done in advance. Most programming in large systems is research: identifying prior art, mapping out enough of the system to understand the effects of changes, and so on. Even for fairly small changes, we simply do not know what’s involved in making the change until we go and look.
The pro-estimation dogma says that these questions ought to be answered during the planning process, so that each individual piece of work being discussed is scoped small enough to be accurately estimated. I’m not impressed by this answer. It seems to me to be a throwback to the bad old days of software architecture, where one architect would map everything out in advance, so that individual programmers simply had to mechanically follow instructions. Nobody does that now, because it doesn’t work: programmers must be empowered to make architectural decisions, because they’re the ones who are actually in contact with the code. Even if it did work, that would simply shift the impossible-to-estimate part of the process backwards, into the planning meeting (where of course you can’t write or run code, which makes it near-impossible to accurately answer the kind of questions involved).
In short: software engineering projects are not dominated by the known work, but by the unknown work, which always takes 90% of the time. However, only the known work can be accurately estimated. It’s therefore impossible to accurately estimate software projects in advance.
Estimates do not help engineering teams deliver work more efficiently. Many of the most productive years of my career were spent on teams that did no estimation at all: we were either working on projects that had to be done no matter what, and so didn’t really need an estimate, or on projects that would deliver a constant drip of value as we went, so we could just keep going indefinitely.
In a very real sense, estimates aren’t even made by engineers at all. If an engineering team comes up with a long estimate for a project that some VP really wants, they will be pressured into lowering it (or some other, more compliant engineering team will be handed the work). If the estimate on an undesirable project - or a project that’s intended to “hold space” for future unplanned work - is too short, the team will often be encouraged to increase it, or their manager will just add a 30% buffer.
One exception to this is projects that are technically impossible, or just genuinely prohibitively difficult. If a manager consistently fails to pressure their teams into giving the “right” estimates, that can send a signal up that maybe the work can’t be done after all. Smart VPs and directors will try to avoid taking on technically impossible projects.
Another exception to this is areas of the organization that senior leadership doesn’t really care about. In a sleepy backwater, often the formal estimation process does actually get followed to the letter, because there’s no director or VP who wants to jump in and shape the estimates to their ends. This is one way that some parts of a tech company can have drastically different engineering cultures to other parts. I’ll let you imagine the consequences when the company is re-orged and these teams are pulled into the spotlight.
Estimates are political tools for non-engineers in the organization. They help managers, VPs, directors, and C-staff decide on which projects get funded and which projects get cancelled.
The standard way of thinking about estimates is that you start with a proposed piece of software work, and you then go and figure out how long it will take. This is entirely backwards. Instead, teams will often start with the estimate, and then go and figure out what kind of software work they can do to meet it.
Suppose you’re working on a LLM chatbot, and your director wants to implement “talk with a PDF”. If you have six months to do the work, you might implement a robust file upload system, some pipeline to chunk and embed the PDF content for semantic search, a way to extract PDF pages as image content to capture formatting and diagrams, and so on. If you have one day to do the work, you will naturally search for simpler approaches: for instance, converting the PDF to text client-side and sticking the entire thing in the LLM context, or offering a plain-text “grep the PDF” tool.
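The one-day approach above can be sketched in a few lines of Python. This is a hypothetical illustration, not anyone's actual implementation: `build_pdf_prompt` is an invented helper, the character budget stands in for a real token budget, and the text is assumed to have already been extracted from the PDF client-side.

```python
def build_pdf_prompt(pdf_text: str, question: str, max_chars: int = 40_000) -> str:
    """Build a single LLM prompt by inlining the (possibly truncated) PDF text.

    max_chars is a crude stand-in for a token budget; a real system would
    count tokens with the model's tokenizer instead of characters.
    """
    if len(pdf_text) > max_chars:
        # Keep the start of the document and mark the cut explicitly.
        pdf_text = pdf_text[:max_chars] + "\n[... document truncated ...]"
    return (
        "You are answering questions about the following document.\n\n"
        f"--- DOCUMENT START ---\n{pdf_text}\n--- DOCUMENT END ---\n\n"
        f"Question: {question}"
    )

prompt = build_pdf_prompt("Invoice #123: total due 42 EUR.", "What is the total?")
```

The six-month version replaces that one function with upload pipelines, chunking, and embeddings; the scope of the work follows from the estimate, not the other way around.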
This is true even at the level of individual lines of code. When you have weeks or months until your deadline, you might spend a lot of time thinking airily about how you could refactor the codebase to make your new feature fit in as elegantly as possible. When you have hours, you will typically be laser-focused on finding an approach that will actually work. There are always many different ways to solve software problems, so engineers have quite a lot of discretion about how the work gets done.
So how do I estimate, given all that?
I gather as much political context as possible before I even look at the code. How much pressure is on this project? Is it a casual ask, or do we have to find a way to do this? What kind of estimate is my management chain looking for? There’s a huge difference between “the CTO really wants this in one week” and “we were looking for work for your team and this seemed like it could fit”.
Ideally, I go to the code with an estimate already in hand. Instead of asking myself “how long would it take to do this”, where “this” could be any one of a hundred different software designs, I ask myself “which approaches could be done in one week?”.
I spend more time worrying about unknowns than knowns. As I said above, unknown work always dominates software projects. The more “dark forests” in the codebase this feature has to touch, the higher my estimate will be - or, more concretely, the tighter I need to constrain the set of approaches to the known work.
Finally, I go back to my manager with a risk assessment, not with a concrete estimate. I don’t ever say “this is a four-week project”. I say something like “I don’t think we’ll get this done in one week, because X Y Z would need to all go right, and at least one of those things is bound to take a lot more work than we expect.” Ideally, I go back to my manager with a series of plans, not just one:
* We tackle X Y Z directly, which might all go smoothly but if it blows out we’ll be here for a month
* We bypass Y and Z entirely, which would introduce these other risks but possibly allow us to hit the deadline
* We bring in help from another team who’s more familiar with X and Y, so we just have to focus on Z
In other words, I don’t “break down the work to determine how long it will take”. My management chain already knows how long they want it to take. My job is to figure out the set of software approaches that match that estimate.
Sometimes that set is empty: the project is just impossible, no matter how you slice it. In that case, my management chain needs to get together and figure out some way to alter the requirements. But if I always said “this is impossible”, my managers would find someone else to do their estimates. When I do that, I’m drawing on a well of trust that I build up by making pragmatic estimates the rest of the time.
Many engineers find this approach distasteful. One reason is that they don’t like estimating in conditions of uncertainty, so they insist on having all the unknown questions answered in advance. I have written a lot about this in Engineers who won’t commit and How I provide technical clarity to non-technical leaders, but suffice to say that I think it’s cowardly. If you refuse to estimate, you’re forcing someone less technical to estimate for you.
Some engineers think that their job is to constantly push back against engineering management, and that helping their manager find technical compromises is betraying some kind of sacred engineering trust. I wrote about this in Software engineers should be a little bit cynical. If you want to spend your career doing that, that’s fine, but I personally find it more rewarding to find ways to work with my managers (who have almost exclusively been nice people).
Other engineers might say that they rarely feel this kind of pressure from their directors or VPs to alter estimates, and that this is really just the sign of a dysfunctional engineering organization. Maybe! I can only speak for the engineering organizations I’ve worked in. But my suspicion is that these engineers are really just saying that they work “out of the spotlight”, where there’s not much pressure in general and teams can adopt whatever processes they want. There’s nothing wrong with that. But I don’t think it qualifies you to give helpful advice to engineers who do feel this kind of pressure.
The common view is that a manager proposes some technical project, the team gets together to figure out how long it would take to build, and then the manager makes staffing and planning decisions with that information. In fact, it’s the reverse: a manager comes to the team with an estimate already in hand (though they might not come out and admit it), and then the team must figure out what kind of technical project might be possible within that estimate.
This is because estimates are not by or for engineering teams. They are tools used for managers to negotiate with each other about planned work. Very occasionally, when a project is literally impossible, the estimate can serve as a way for the team to communicate that fact upwards. But that requires trust. A team that is always pushing back on estimates will not be believed when they do encounter a genuinely impossible proposal.
When I estimate, I extract the range my manager is looking for, and only then do I go through the code and figure out what can be done in that time. I never come back with a flat “two weeks” figure. Instead, I come back with a range of possibilities, each with their own risks, and let my manager make that tradeoff.
It is not possible to accurately estimate software work. Software projects spend most of their time grappling with unknown problems, which by definition can’t be estimated in advance. To estimate well, you must therefore basically ignore all the known aspects of the work, and instead try and make educated guesses about how many unknowns there are, and how scary each unknown is.
edit: I should thank one of my readers, Karthik, who emailed me to ask about estimates, thus revealing to me that I had many more opinions than I thought.
edit: This post got a bunch of comments on Hacker News. Some non-engineers made the point that well-paid professionals should be expected to estimate their work, even if the estimate is completely fictional. Sure, I agree, as long as we’re on the same page that it’s fictional!
A couple of engineers argued that estimation was a solved problem. I’m not convinced by their examples. I agree you can probably estimate “build a user flow in Svelte”, but it’s much harder to estimate “build a user flow in Svelte on top of an existing large codebase”. I should have been more clear in the post that I think that’s the hard part, for the normal reasons that it’s very hard to work in large codebases, which I write about endlessly on this blog.
edit: There are also some comments on Lobste.rs, including a good note that the capability of the team obviously has a huge impact on any estimates. In my experience, this is not commonly understood: companies expect estimates to be fungible between engineers or teams, when in fact some engineers and teams can deliver work ten times more quickly (and others cannot deliver work at all, no matter how much time they have).
Another commenter politely suggested I read Software Estimation: Demystifying the Black Art, which I’ve never heard of. I’ll put it on my list.
...
Read the original on www.seangoedecke.com »
Disclaimer: None of this is written by AI, I’m still a real person writing my own blog like it’s 1999
I finally figured out how to do Gigabit Ethernet over my existing phone wires.
I’ve mostly lived with powerline adapters over recent years. Some worked well, some did not (try a few and return what doesn’t work in your home). One I had for a while gave me a stable 30 Mbps, which was little but good enough for internet at the time. I care much more about stable low latency for gaming than about bandwidth.
Fast forward to my current situation, that powerline adapter regularly lost connection which was a major problem. I got some new ones with the latest and greatest G.hn 2400 standard. The final contender served around 180 Mbps to my office (with high variance 120 to 280 Mbps), or around 80 Mbps to the top floor. It’s good enough to watch YouTube/TV yet it’s far from impressive.
One peculiar thing from the UK: Internet providers don’t truly offer gigabit internet. They have a range of deals like 30 Mbps — 75 Mbps — 150 Mbps — 300 Mbps — 500 Mbps — 900 Mbps, each one costing a few more pounds per month than the last. This makes the UK simultaneously one of the cheapest and one of the most expensive countries to get Internet.
Long story short, new place, new hardware, new deals, the internet has been running at 500 Mbps for some time now.
Every 50 GB of Helldivers 2 update (because these idiots shipped the same content in duplicate 5 times) is a painful reminder that the setup is not operating at capacity.
Problem: How to get 500 Mbps to my room?
I’ve been looking for a way to reuse phone wires for a while, because British houses are full of phone sockets. There are 2 sockets in my office room.
I can’t stress enough how much we love our phone sockets. It’s not uncommon to have a one bed flat with 2 phone sockets in the living room and 2 phone sockets in the bedroom and a master socket in the technical room. It’s ridiculous.
A new house bought today could have 10 phone sockets and 0 Ethernet sockets. There is still no regulation that requires new builds to get Ethernet wiring (as far as I know).
There’s got to be a way to use the existing phone infrastructure.
I know the technology exists. It’s one of the rare cases where the technology exists and is mature, but nobody can be bothered to make products for it.
The standards that run powerline adapters (HomePlug AV200, AV500, G.hn 2400) can work with any pair of wires. It should work ten times better on dedicated phone wires instead of noisy power wires, if only manufacturers could be bothered to pull their fingers out of their arse and make the products that are needed.
After countless years of research, I finally found one German manufacturer that’s making what needs to be made https://www.gigacopper.net/wp/en/home-networking/
I was lazy so I ordered online in self-service (which is definitely the wrong way to go about it). It’s available on eBay DE and Amazon DE, and it’s possible to order from either with a UK account; make sure to enter a UK address for delivery (some items don’t allow it).
The better approach is almost certainly to speak to the seller to get a quote, with international shipping and the import invoice excluding VAT (to avoid paying VAT on VAT).
The package got the usual Royal Mail treatment:
* The package was shipped by DHL Germany
* The package was transferred to Royal Mail when entering the UK
* After some days, the DHL website said they tried to deliver but nobody home, this is bullshit
* The Royal Mail website said the package reached the depot and was awaiting delivery, this is bullshit
* In reality, the package was stuck at the border, as usual
* Google to find “website to pay import fee on parcel”
* Entered the DHL tracking number into the Royal Mail form for a Royal Mail tracking number
* The website said that the parcel had import fees to pay, this is correct
* Paid the fee online, 20% VAT + a few pounds of handling fees
* The package will be scheduled for delivery a few days later
* Royal Mail and DHL updated their status another two or three times with false information
* Royal Mail delivered a letter saying there was a package waiting on fees, even though they had already been paid
Basically, you need to follow the tracking regularly until the package is tagged as lost or failed delivery, which is the cue to pay import fees.
It’s the normal procedure to buy things from Europe since Brexit 2020. It’s actually quite shocking that Royal Mail still hasn’t updated their tracking system to be able to give a status “waiting on import fees to be paid online”. They had 6 years!
This is the gigacopper G4201TM: 1 RJ11 phone line, 1 RJ45 gigabit Ethernet port, 1 power
* It came with a German to UK power adapter (unexpected and useful)
* It came with a standard RJ11 cable (expected and useless)
* 3M removable hanging strip to stick to the wall, the device is very light
There is a gigacopper G4202TM, with an RJ45 to connect to the phone line instead of an RJ11 (not sure if it’s a newer model or just a variant, as that one has two gigabit Ethernet ports). Don’t be confused by an RJ45 port that is not an Ethernet port.
There is a gigacopper G4201C (1 port) and G4204C (4 port) for Ethernet over coaxial. Some countries have coax in every room for TV/satellite. This may be of interest to some readers.
Plugged it in and it works!
I discovered soon afterwards that I bought the wrong item. There is an InHome and a Client/Server variant of the product. Make sure to buy the InHome variant.
* The InHome variant supports up to 16 devices, communicating with any peer on the medium, with sub-millisecond latency.
* The client-server variant is preconfigured as a pair, splitting the bandwidth 70% download / 30% upload, with a few milliseconds of latency. I think it’s a use case for ISPs and long-range connections.
Thankfully the difference is only the firmware. I spoke to the vendor, who was very helpful and responsive. They sent me the firmware and the tools to patch it.
I have a fetish for low latency. This screenshot is oddly satisfying.
The web interface says 1713 Mbps on the physical layer, the debugging tool says PHONE 200MHz — Connected 1385 Mbps.
I wanted to verify whether the device can do a full Gigabit. Unfortunately I realized I don’t have any device that can test that.
Phones are wireless, and WiFi is too slow to test anything. Out of curiosity I checked: my phone did 100 Mbps to 400 Mbps right next to the router. I grabbed two laptops, only to realize that neither had an Ethernet port. I dug up an old laptop from storage that did have one, but it couldn’t boot: the CPU fan didn’t start, and the laptop refused to boot with a dead fan.
There is a hard lesson here: 1 Gbps ought to be enough for any home. Using the phone line is as good as having Ethernet wiring through the house if it can deliver a (shared) 1.7 Gbps link to multiple rooms.
Still, I really wanted to verify that the device can do a full Gbps, so I procured a USB-C to Ethernet adapter.
Full speed achieved, testing from a phone to a computer with iperf3.
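For readers without iperf3 to hand, the idea can be approximated in plain Python: push bytes through a TCP socket and time it. A loopback run only measures the machine itself, so this is illustrative and no substitute for running iperf3 across the actual link.

```python
import socket
import threading
import time

def measure_loopback_mbps(total_mb: int = 64, chunk: int = 65536) -> float:
    """Send total_mb megabytes over a local TCP socket and return Mbps."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def sink():
        # Drain everything the client sends until it closes the connection.
        conn, _ = server.accept()
        while conn.recv(chunk):
            pass
        conn.close()

    t = threading.Thread(target=sink, daemon=True)
    t.start()

    client = socket.create_connection(("127.0.0.1", port))
    payload = b"\x00" * chunk
    start = time.perf_counter()
    for _ in range(total_mb * 1024 * 1024 // chunk):
        client.sendall(payload)
    client.close()
    t.join()  # wait until the receiver has drained the stream
    elapsed = time.perf_counter() - start
    server.close()
    return (total_mb * 8) / elapsed  # megabits per second

print(f"loopback: {measure_loopback_mbps():.0f} Mbps")
```

Between two machines you would run the sink on one end and the sender on the other, which is exactly what `iperf3 -s` and `iperf3 -c <host>` do, with far better measurement.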
Some readers might wonder about the wiring.
I didn’t check the wiring before buying anything because it’s pointless. British sockets are always daisy chained in an incomprehensible maze.
Phone sockets need 2 wires and can be daisy chained. Ethernet sockets need 8 wires. They often use the same Cat5 cable because it’s the most widely available (an 8-wire cable; the 6 extra wires can remain unconnected).
It’s possible to swap the phone socket for an RJ45 socket, if you only have 2 sockets connected with the right cable. It’s not possible when sockets are daisy chained. (You could put a double or triple RJ45 socket with a switch to break a daisy chain, but it quickly becomes impractical in a British house with 5 to 10 sockets in an arbitrary layout.)
I opened one socket in the office room. There are two Cat5 cables daisy chained. There are 3 wires connected.
It’s probably daisy chained with the other socket in the room, or it’s daisy chained with the socket in the other room that’s closer. Who knows.
I opened the BT master socket in the technical room. It should have the cables coming from the other rooms. It should connect the internal phone wires with the external phone line.
There is one single Cat5 cable. There are 4 wires connected. It’s definitely not a master socket. WTF?!
It’s interesting that this socket has 4 wires connected but the socket in the office has 3 wires connected. The idiot who did the wiring was inconsistent. The gigacopper device can operate over 2 wires (200 MHz Phone SISO) or over 4 wires (100 MHz Phone MIMO). I can try the other modes if I finish the job.
The search for the master socket continues. The cables from the other floors should all be coming down somewhere around here. There is a blank plate next to it (right).
This might be the external phone line? A bunch of wires are crimped together, and the colours do not match. It’s a hell of a mess.
The only sure thing is that they are different cables, because they are different colours. They might be going to a junction box somewhere else. Probably behind a wall that’s impossible to access!
Conclusion: There is zero chance to get proper Ethernet wiring out of this mess.
The gigacopper device to do gigabit Ethernet over phone line is a miracle!
There is an enormous untapped market for gigabit Ethernet over phone sockets in the UK.
...
Read the original on thehftguy.com »
Small slights from a manager may seem like no big deal, but new research from Wharton reveals that even the mildest of mistreatment at work can affect more than just employee morale.
The study finds that when managers at a national retail chain failed to deliver birthday greetings on time, it resulted in a 50% increase in absenteeism and a reduction of more than two working hours per month. The lost productivity was a form of revenge, with slighted employees taking more paid sick time, arriving late, leaving early, and taking longer breaks.
“Insults are about a lack of respect, and that’s what this is really all about. There are huge and small lacks of respect, but they all leave a mark,” says Wharton management professor Peter Cappelli, who conducted the study with Liat Eldor and Michal Hodor, both assistant professors at Tel Aviv University’s Coller School of Management.
The study, “The Lower Boundary of Workplace Mistreatment: Do Small Slights Matter?”, is published in the journal Proceedings of the National Academy of Sciences. While there are a growing number of papers that examine the effects of severe workplace mistreatment such as sexual and physical harassment, the study is the first to measure the cause and effect of minor infractions.
Read more at Knowledge at Wharton.
...
Read the original on penntoday.upenn.edu »
When California neighborhoods increased their number of zero-emissions vehicles (ZEV) between 2019 and 2023, they also experienced a reduction in air pollution. For every 200 vehicles added, nitrogen dioxide (NO₂) levels dropped 1.1%. The results, obtained from a new analysis based on statewide satellite data, are among the first to confirm the environmental health benefits of ZEVs, which include fully electric and plug-in hybrid cars, in the real world. The study was funded in part by the National Institutes of Health and just published in The Lancet Planetary Health.
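As a rough back-of-envelope reading of that headline figure, one can compound the 1.1% drop per 200-vehicle increment. This is purely illustrative: the study reports an average association, and assuming the effect compounds per increment is our simplification, not the authors' claim.

```python
def estimated_no2_reduction(added_zevs: int, drop_per_200: float = 0.011) -> float:
    """Fraction by which NO2 falls, compounding the per-200-vehicle effect.

    Illustrative only: treats the reported 1.1%-per-200-vehicles association
    as if it compounded multiplicatively across increments.
    """
    blocks = added_zevs / 200
    return 1 - (1 - drop_per_200) ** blocks

# e.g. a neighborhood adding 1,000 ZEVs
print(f"{estimated_no2_reduction(1000):.1%}")  # prints "5.4%"
```

Under that simplification, a neighborhood adding 1,000 ZEVs would see NO₂ fall by roughly 5%.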
While the shift to electric vehicles is largely aimed at curbing climate change in the future, it is also expected to improve air quality and benefit public health in the near term. But few studies have tested that assumption with actual data, partly because ground-level air pollution monitors have limited spatial coverage. A 2023 study from the Keck School of Medicine of USC using these ground-level monitors suggested that ZEV adoption was linked to lower air pollution, but the results were not definitive.
Now, the same research team has confirmed the link with high-resolution satellite data, which can detect NO₂ in the atmosphere by measuring how the gas absorbs and reflects sunlight. The pollutant, released from burning fossil fuels, can trigger asthma attacks, cause bronchitis, and increase the risk of heart disease and stroke.
“This immediate impact on air pollution is really important because it also has an immediate impact on health. We know that traffic-related air pollution can harm respiratory and cardiovascular health over both the short and long term,” said Erika Garcia, PhD, MPH, assistant professor of population and public health sciences at the Keck School of Medicine and the study’s senior author.
The findings offer support for the continued adoption of electric vehicles. Over the study period, ZEV registrations increased from 2% to 5% of all light-duty vehicles (a category that includes cars, SUVs, pickup trucks and vans) across California, suggesting that the potential for improving air pollution and public health remains largely untapped.
“We’re not even fully there in terms of electrifying, but our research shows that California’s transition to electric vehicles is already making measurable differences in the air we breathe,” said the study’s lead author, Sandrah Eckel, PhD, associate professor of population and public health sciences at the Keck School of Medicine.
For the analysis, the researchers divided California into 1,692 neighborhoods, using a geographic unit similar to zip codes. They obtained publicly available data from the state’s Department of Motor Vehicles on the number of ZEVs registered in each neighborhood. ZEVs include full-battery electric cars, plug-in hybrids and fuel-cell cars, but not heavier-duty vehicles like delivery trucks and semis.
Next, the research team obtained data from the Tropospheric Monitoring Instrument (TROPOMI), a high-resolution satellite sensor that provides daily, global measurements of NO₂ and other pollutants. They used this data to calculate annual average NO₂ levels in each California neighborhood from 2019 to 2023.
Over the study period, a typical neighborhood gained 272 ZEVs, with most neighborhoods adding between 18 and 839. For every 200 new ZEVs registered, NO₂ levels dropped 1.1%, a measurable improvement in air quality.
“These findings show that cleaner air isn’t just a theory—it’s already happening in communities across California,” Eckel said.
To confirm that these results were reliable, the researchers conducted several additional analyses. They accounted for pandemic-related contributors to the NO₂ decline, for example by excluding the year 2020 and controlling for changing gas prices and work-from-home patterns. The researchers also confirmed that neighborhoods that added more gas-powered cars saw the expected rise in pollution. Finally, they replicated their results using data from ground-level monitors spanning 2012 to 2023.
“We tested our analysis in many different ways, and the results consistently support our main finding,” Garcia said.
These results show that TROPOMI satellite data—which covers nearly the entire planet—can reliably track changes in combustion-related air pollution, offering a new way to study the effects of the transition to electric vehicles and other environmental interventions.
Next, Garcia, Eckel and their team are comparing data on ZEV adoption with data on asthma-related emergency room visits and hospitalizations across California. The study could be one of the first to document real-world health improvements as California continues to embrace electric vehicles.
In addition to Garcia and Eckel, the study’s other authors are Futu Chen, Sam J. Silva and Jill Johnston from the Department of Population and Public Health Sciences, Keck School of Medicine of USC, University of Southern California; Daniel L. Goldberg from the Milken Institute School of Public Health, The George Washington University; Lawrence A. Palinkas from the Herbert Wertheim School of Public Health and Human Longevity Science, University of California, San Diego; and Alberto Campos and Wilma Franco from the Southeast Los Angeles Collaborative.
This work was supported by the National Institutes of Health/National Institute of Environmental Health Sciences [R01ES035137, P30ES007048]; the National Aeronautics and Space Administration Health and Air Quality Applied Sciences Team [80NSSC21K0511]; and the National Aeronautics and Space Administration Atmospheric Composition Modeling and Analysis Program [80NSSC23K1002].
...
Read the original on keck.usc.edu »
You can now view replies to this blog post made on Bluesky directly on this website. Check it out here!
I’ve always wanted to host a comment section on my site, but it’s difficult because the content is statically generated and hosted on a CDN. I could host comments on a separate VPS or cloud service. But maintaining a dynamic web service like this can be expensive and time-consuming — in general, I’m not interested in being an unpaid, part-time DevOps engineer.
Recently, however, I read a blog post by Cory Zue about how he embedded a comment section from Bluesky on his blog. I immediately understood the benefits of this approach: Bluesky handles all of the difficult work involved in running a social media platform, like account verification, hosting, storage, spam, and moderation. Meanwhile, because Bluesky is an open platform with a public API, it’s easy to directly embed comments on my own site.
There are other services that could be used for this purpose instead. Notably, I could embed replies from the social media platform formerly known as Twitter. Or I could use a platform like Disqus or even giscus, which hosts comments on GitHub Discussions. But I see Bluesky as a clearly superior choice among these options. For one, Bluesky is built on top of an open protocol, AT Proto, meaning it can’t easily be taken over by an authoritarian billionaire creep. Moreover, Bluesky is a full-fledged social media platform, which naturally makes it a better option for hosting a conversation than GitHub.
Zue published a standalone package called bluesky-comments that allows embedding comments in a React component as he did. But I decided to build this feature myself instead. Mainly this is because I wanted to make a few styling changes anyway to match the rest of my site. But I also wanted to leave the option open to adding more features in the future, which would be easier to do if I wrote the code myself. The entire implementation is small regardless, amounting to only ~200 LOC between the UI components and API functions.
Initially, I planned to allow people to directly post on Bluesky via my site. This would work by providing an OAuth flow that gives my site permission to post on Bluesky on behalf of the user. I actually did get the auth flow working, but building out a UI for posting and replying to existing comments is difficult to do well. Going down this path quickly leads to building what is essentially a custom Bluesky client, which I didn’t have the time or interest to do right now. Moreover, because the user needs to go through the auth flow and sign in to their Bluesky account, the process is not really much easier than posting directly on a linked Bluesky post.
Without the requirement of allowing others to directly post on my site, the implementation became much simpler. Essentially, my task was to specify a Bluesky post that corresponds to the article in the site’s metadata. Then, when the page loads I fetch the replies to that post from Bluesky, parse the response, and display the results in a simple comment section UI.
As explained in my last post, this site is built using React Server Components and Parcel. The content of my articles are written using MDX, an extension to Markdown that allows directly embedding JavaScript and JSX. In each post, I export a metadata object that I validate using a Zod schema. For instance, the metadata for this post looks like this:
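The original snippet isn’t reproduced here, but a minimal sketch of that metadata export might look as follows — every field name besides bskyPostId and all values are placeholders of my own, and the real site validates the object with a Zod schema rather than the hand-rolled type guard shown:

```typescript
// Hypothetical post metadata — bskyPostId holds the record key of the
// Bluesky post whose replies become the comment section.
export interface PostMetadata {
  title: string;
  bskyPostId: string;
}

export const metadata: PostMetadata = {
  title: "Bluesky-powered comments", // placeholder value
  bskyPostId: "POST_RECORD_KEY",     // placeholder; the real key is elided
};

// Stand-in for the site's Zod validation: a plain runtime type guard.
export function isPostMetadata(value: unknown): value is PostMetadata {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.title === "string" && typeof v.bskyPostId === "string";
}
```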
The value of bskyPostId references the Bluesky post from which I’ll pull replies to display in the comment section. Because my project is built in TypeScript, it was easy to integrate with the Bluesky TypeScript SDK (@atproto/api on NPM). Reading the Bluesky API documentation and Zue’s implementation led me to the getPostThread endpoint. Given an AT Protocol URI, this endpoint returns an object with data on the given post and its replies.
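A post’s AT Protocol URI has the form at://&lt;did&gt;/app.bsky.feed.post/&lt;record-key&gt;. A small sketch of constructing it — the helper name is mine, not from the post:

```typescript
// Build the at:// URI that identifies a post for the getPostThread endpoint.
// did is the author's decentralized identifier; rkey is the post's record key.
export function buildPostUri(did: string, rkey: string): string {
  return `at://${did}/app.bsky.feed.post/${rkey}`;
}

// Hypothetical usage with the SDK (assumed call shape):
//   const agent = new AtpAgent({ service: "https://public.api.bsky.app" });
//   const res = await agent.getPostThread({ uri: buildPostUri(did, rkey) });
```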
I could have interacted directly with the Bluesky API from my React component using fetch and useEffect. However, it can be a bit tricky to correctly handle loading and error states, even for a simple feature like this. Because of this, I decided to use the TanStack react-query package to manage the API request/response cycle. This library takes care of the messy work of handling errors, retries, and loading states while I simply provide it a function to fetch the post data.
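The lifecycle react-query manages can be pictured as a small state machine. This hand-rolled reducer is an illustration of those states, not react-query’s API or the post’s actual code:

```typescript
// The three request states a fetch moves through, reduced by events.
export type QueryState<T> =
  | { status: "loading" }
  | { status: "error"; error: string }
  | { status: "success"; data: T };

export type QueryEvent<T> =
  | { type: "fetched"; data: T }   // request resolved
  | { type: "failed"; error: string } // request rejected
  | { type: "retry" };             // back to loading for another attempt

export function reduceQuery<T>(
  state: QueryState<T>,
  event: QueryEvent<T>,
): QueryState<T> {
  switch (event.type) {
    case "fetched": return { status: "success", data: event.data };
    case "failed":  return { status: "error", error: event.error };
    case "retry":   return { status: "loading" };
  }
}
```

With react-query, none of this bookkeeping is written by hand: the library exposes the equivalent status on its query result object.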
Once I obtain the Bluesky response, the next task is parsing out the content and metadata for the replies. Bluesky supports a rich content structure in its posts for representing markup, references, and attachments. Building out a UI that fully respects this rich content would be difficult. Instead, I decided to keep it simple by just pulling out the text content from each reply.
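A sketch of that text extraction — the field names here mirror the nested-reply shape of the getPostThread response as I understand it, so treat them as assumptions rather than the post’s actual code:

```typescript
// Minimal slice of the thread shape returned by getPostThread.
export interface ThreadNode {
  post: {
    author: { handle: string };
    record: { text: string }; // rich content is ignored; only text is kept
  };
  replies?: ThreadNode[];
}

export interface Comment {
  handle: string;
  text: string;
  depth: number; // nesting level, useful for indenting threads in the UI
}

// Walk the reply tree depth-first, keeping only the author and plain text.
export function flattenReplies(nodes: ThreadNode[], depth = 0): Comment[] {
  const out: Comment[] = [];
  for (const node of nodes) {
    out.push({
      handle: node.post.author.handle,
      text: node.post.record.text,
      depth,
    });
    if (node.replies) out.push(...flattenReplies(node.replies, depth + 1));
  }
  return out;
}
```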
Even so, building a UI that properly displays threaded comments, particularly one that formats well on small mobile devices, can be tricky. For now, my approach was again to keep it simple. I indented each reply and added a left border to make reply threads easier to follow. Otherwise, I mostly copied design elements for the layout of the profile picture and post date from Bluesky.
Lastly, I added a UI component linking to the parent post on Bluesky, and encouraging people to add to the conversation there. With this, the read-only comment section implementation was complete. If there’s interest, I could publish my version of Bluesky comments as a standalone package. But several of the choices I made were relatively specific to my own site. Moreover, the implementation is simple enough that others could probably build their own version from reading the source code, just as I did using Zue’s version.
Let me know what you think by replying on Bluesky. Hopefully this can help increase engagement with my blog posts, but then again, my last article generated no replies, so maybe not 😭.
...
Read the original on micahcantor.com »
Imagine the internet suddenly stops working. Payment systems in your local food store go down. Healthcare systems in the regional hospital flatline. Your work software tools, and all the information they contain, disappear.
You reach out for information but struggle to communicate with family and friends, or to get the latest updates on what is happening, as social media platforms are all down. Just as someone can pull the plug on your computer, it’s possible to shut down the system it connects to.
This isn’t an outlandish scenario. Technical failures, cyber-attacks and natural disasters can all bring down key parts of the internet. And as the US government makes increasing demands of European leaders, it is possible to imagine Europe losing access to the digital infrastructure provided by US firms as part of the geopolitical bargaining process.
At the World Economic Forum in Davos, Switzerland, the EU’s president, Ursula von der Leyen, has highlighted the “structural imperative” for Europe to “build a new form of independence” — including in its technological capacity and security. And, in fact, moves are already being made across the continent to start regaining some independence from US technology.
A small number of US-headquartered big tech companies now control a large proportion of the world’s cloud computing infrastructure, that is, the global network of remote servers that store, manage and process all our apps and data. Amazon Web Services (AWS), Microsoft Azure and Google Cloud are reported to hold about 70% of the European market, while European cloud providers have only 15%.
My research supports the idea that relying on a few global providers increases vulnerability for Europe’s private and public sectors — including the risk of cloud computing disruption, whether caused by technical issues, geopolitical disputes or malicious activity.
Two recent examples — both the result of apparent technical failures — were the hours‑long AWS incident in October 2025, which disrupted thousands of services such as banking apps across the world, and the major Cloudflare incident two months later, which took LinkedIn, Zoom and other communication platforms offline.
The impact of a major power disruption on cloud computing services was also demonstrated when Spain, Portugal and some of south-west France endured a massive power cut in April 2025.
There are signs that Europe is starting to take the need for greater digital independence more seriously. In the Swedish coastal city of Helsingborg, for example, a one-year project is testing how various public services would function in the scenario of a digital blackout.
Would elderly people still receive their medical prescriptions? Can social services continue to provide care and benefits to all the city’s residents?
This pioneering project seeks to quantify the full range of human, technical and legal challenges that a collapse of technical services would create, and to understand what level of risk is acceptable in each sector. The aim is to build a model of crisis preparedness that can be shared with other municipalities and regions later this year.
Elsewhere in Europe, other forerunners are taking action to strengthen their digital sovereignty by weaning themselves off reliance on global big tech companies — in part through collaboration and adoption of open source software. This technology is treated as a digital public good that can be moved between different clouds and operated under sovereign conditions.
In northern Germany, the state of Schleswig-Holstein has made perhaps the clearest break with digital dependency. The state government has replaced most of its Microsoft-powered computer systems with open-source alternatives, cancelling nearly 70% of its licenses. Its target is to use big tech services only in exceptional cases by the end of the decade.
Across France, Germany, the Netherlands and Italy, governments are investing both nationally and transnationally in the development of digital open-source platforms and tools for chat, video and document management — akin to digital Lego bricks that administrations can host on their own terms.
In Sweden, a similar system for chat, video and online collaboration, developed by the National Insurance Agency, runs in domestic data centres rather than foreign clouds. It is being offered as a service for Swedish public authorities looking for sovereign digital alternatives.
For Europe — and any nation — to meaningfully address the risks posed by digital blackout and cloud collapse, digital infrastructure needs to be treated with the same seriousness as physical infrastructure such as ports, roads and power grids.
Control, maintenance and crisis preparedness of digital infrastructure should be seen as core public responsibilities, rather than something outsourced to global big tech firms that are open to foreign influence.
To encourage greater focus on digital resilience among its member states, the EU has developed a cloud sovereignty framework to guide procurement of cloud services — with the intention of keeping European data under European control. The upcoming Cloud and AI Development Act is expected to bring more focus and resources to this area.
Governments and private companies should be encouraged to demand security, openness and interoperability when seeking bids for the provision of their cloud services, not merely low prices. But as individuals, we can also make a difference with the choices we make.
Just as it’s advisable to ensure your own access to food, water and medicine in a time of crisis, be mindful of what services you use personally and professionally. Consider where your emails, personal photos and conversations are stored. Who can access and use your data, and under what conditions? How easily can everything be backed up, retrieved and transferred to another service?
No country, let alone continent, will ever be completely digitally independent, and nor should they be. But by pulling together, Europe can ensure its digital systems remain accessible even in a crisis — just as is expected from its physical infrastructure.
...
Read the original on theconversation.com »