10 interesting stories served every morning and every evening.




1 707 shares, 0 trendiness

In Europe, Wind and Solar Power Overtake Fossil Fuels

Last year, for the first time, wind and solar supplied more power than fossil fuels to the E.U., according to a new analysis.

The shift is largely due to the rapid expansion of solar energy, which is growing faster than any other source of electricity. Together, wind and solar generated 30 percent of E.U. power last year, while fossil fuels provided 29 percent, according to the analysis from Ember, a think tank based in London. Including hydro, renewables provided nearly half of all E.U. power in 2025.

The analysis finds that solar is making gains in every E.U. country, while coal is broadly in retreat. Last year, solar alone supplied more than 20 percent of power in Hungary, Cyprus, Greece, Spain, and the Netherlands. Meanwhile, in 19 European countries, coal accounted for less than 5 percent of power. In 2025, both Ireland and Finland joined the ranks of European countries that have shuttered their last remaining coal plants.

Warming, however, continues to challenge the shift to clean energy as drought saps hydropower. Last year, hydro output dropped slightly in the E.U., and natural gas power rose to compensate.

“The next priority for the E.U. should be to put a serious dent in reliance on expensive, imported gas,” said Ember analyst Beatrice Petrovich. “Gas not only makes the E.U. more vulnerable to energy blackmail, it’s also driving up prices.”

In parts of Europe, there are signs that increasingly cheap batteries are beginning to displace natural gas in the early evening, when power demand is high but solar output is waning. Said Petrovich, “As this trend accelerates, it could limit how much gas is needed in evening hours, therefore stabilizing prices.”

An E.U. Plan to Slash Micropollutants in Wastewater Is Under Attack

...

Read the original on e360.yale.edu »

2 535 shares, 27 trendiness

Minnesota reels after second fatal shooting by federal agents

The Department of Homeland Security said the man was armed with a gun and two magazines of ammunition and circulated a photo of the weapon. DHS said a Border Patrol agent fired in self-defense. The Minnesota Bureau of Criminal Apprehension (BCA), the state’s chief investigative agency, said it was not allowed access to the scene.

...

Read the original on www.startribune.com »

3 497 shares, 28 trendiness

Microsoft will assist the FBI in unlocking your Windows PC data if asked

Windows PCs by default will back up their encryption keys to the cloud, and Microsoft isn’t afraid to share those keys with the FBI if requested.


Microsoft has confirmed in a statement to Forbes that the company will provide the FBI access to BitLocker encryption keys when presented with a valid legal order. These keys make it possible to decrypt the data on a computer running Windows, giving law enforcement the means to break into a device and access its contents.

The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam in early 2025. Law enforcement believed the device held evidence that would help prove individuals handling the island’s Covid unemployment assistance program were “part of a plot to steal funds.”

This was possible because the device in question had its BitLocker encryption key saved in the cloud. By default, Windows 11 forces the use of a Microsoft Account, and the OS will automatically tie your BitLocker encryption key to your online account so that you can easily recover your data if you get locked out. This can be disabled, letting you save the key locally instead, but the default behavior is to store the key in Microsoft’s cloud when setting up a PC with a Microsoft Account.

“While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide… how to manage their keys,” Microsoft spokesperson Charles Chamberlayne said in a statement to Forbes.

Microsoft told Forbes that it receives around 20 requests a year from the FBI for BitLocker encryption keys, but that the majority cannot be fulfilled because the key in question was never uploaded to the company’s cloud.

This is notable as other tech companies, such as Apple, have famously refused to provide law enforcement with access to encrypted data stored on their products. Apple has openly fought against the FBI in the past when it was asked to provide a backdoor into an iPhone. Other tech giants, such as Meta, will store encryption keys in the cloud, but use zero-knowledge architectures and encrypt the keys server-side so that only the user can access them.
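The zero-knowledge idea is that the escrowed key is wrapped with a secret only the user holds before it ever reaches the provider. A toy sketch of that flow, for illustration only (the XOR keystream stands in for a proper AEAD cipher, and all names and values here are hypothetical, not Meta’s or Microsoft’s actual scheme):

```python
import hashlib
import os
import secrets

def wrap_key(recovery_key: bytes, passphrase: str, salt: bytes) -> bytes:
    # Derive a wrapping keystream from a secret only the user knows.
    stream = hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode(), salt, 200_000, dklen=len(recovery_key)
    )
    # Toy "encryption": XOR with the derived keystream (use a real AEAD in practice).
    return bytes(a ^ b for a, b in zip(recovery_key, stream))

def unwrap_key(wrapped: bytes, passphrase: str, salt: bytes) -> bytes:
    return wrap_key(wrapped, passphrase, salt)  # XOR is its own inverse

recovery_key = secrets.token_bytes(32)  # stands in for a disk-encryption recovery key
salt = os.urandom(16)
wrapped = wrap_key(recovery_key, "correct horse battery staple", salt)

# The server stores only (salt, wrapped); without the passphrase it learns nothing useful.
assert wrapped != recovery_key
assert unwrap_key(wrapped, "correct horse battery staple", salt) == recovery_key
```

Under a design like this, a legal order served on the provider yields only the wrapped blob; the provider cannot produce the plaintext key even if compelled.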

It’s frankly shocking that the encryption keys that do get uploaded to Microsoft aren’t encrypted on the cloud side, too. That would prevent Microsoft from seeing the keys, but as things currently stand, those keys are available in an unencrypted state, and that is a privacy nightmare for customers.

To see Microsoft so willingly hand over the keys to encrypted Windows PCs is concerning, and it should make everybody using a modern Windows computer think twice before backing up their keys to the cloud. You can see which PCs have their BitLocker keys stored on Microsoft’s servers on the Microsoft Account website, which will also let you delete them if present.


...

Read the original on www.windowscentral.com »

4 470 shares, 36 trendiness

BirdyChat Becomes Europe’s First WhatsApp-Interoperable Chat App

Today we are excited to share a big milestone. BirdyChat is now the first chat app in Europe that can exchange messages with WhatsApp under the Digital Markets Act. This brings us closer to our mission of giving work conversations a proper home.

WhatsApp is currently rolling out interoperability support across Europe. As this rollout continues, the feature will become fully available to both BirdyChat and WhatsApp users in the coming months.

...

Read the original on www.birdy.chat »

5 455 shares, 26 trendiness

How I estimate work as a staff software engineer

There’s a kind of polite fiction at the heart of the software industry. It goes something like this:

Estimating how long software projects will take is very hard, but not impossible. A skilled engineering team can, with time and effort, learn how long it will take for them to deliver work, which will in turn allow their organization to make good business plans.

This is, of course, false. As every experienced software engineer knows, it is not possible to accurately estimate software projects. The tension between this polite fiction and its well-understood falseness causes a lot of strange activity in tech companies.

For instance, many engineering teams estimate work in t-shirt sizes instead of time, because it just feels too obviously silly to the engineers in question to give direct time estimates. Naturally, these t-shirt sizes are immediately translated into hours and days when the estimates make their way up the management chain.

Alternatively, software engineers who are genuinely trying to give good time estimates have ridiculous heuristics like “double your initial estimate and add 20%”. This is basically the same as giving up and saying “just estimate everything at a month”.

Should tech companies just stop estimating? One of my guiding principles is that when a tech company is doing something silly, it’s probably doing it for a good reason. In other words, practices that appear to make no sense are often serving some more basic, illegible role in the organization. So what is the actual purpose of estimation, and how can you do it well as a software engineer?

Before I get into that, I should justify my core assumption a little more. People have written a lot about this already, so I’ll keep it brief.

I’m also going to concede that sometimes you can accurately estimate software work, when that work is very well-understood and very small in scope. For instance, if I know it takes half an hour to deploy a service, and I’m being asked to update the text in a link, I can accurately estimate the work at something like 45 minutes: five minutes to push the change up, ten minutes to wait for CI, thirty minutes to deploy.

For most of us, the majority of software work is not like this. We work on poorly-understood systems and cannot predict exactly what must be done in advance. Most programming in large systems is research: identifying prior art, mapping out enough of the system to understand the effects of changes, and so on. Even for fairly small changes, we simply do not know what’s involved in making the change until we go and look.

The pro-estimation dogma says that these questions ought to be answered during the planning process, so that each individual piece of work being discussed is scoped small enough to be accurately estimated. I’m not impressed by this answer. It seems to me to be a throwback to the bad old days of software architecture, where one architect would map everything out in advance, so that individual programmers simply had to mechanically follow instructions. Nobody does that now, because it doesn’t work: programmers must be empowered to make architectural decisions, because they’re the ones who are actually in contact with the code. Even if it did work, it would simply shift the impossible-to-estimate part of the process backwards, into the planning meeting (where of course you can’t write or run code, which makes it near-impossible to accurately answer the kind of questions involved).

In short: software engineering projects are not dominated by the known work, but by the unknown work, which always takes 90% of the time. However, only the known work can be accurately estimated. It’s therefore impossible to accurately estimate software projects in advance.

Estimates do not help engineering teams deliver work more efficiently. Many of the most productive years of my career were spent on teams that did no estimation at all: we were either working on projects that had to be done no matter what, and so didn’t really need an estimate, or on projects that would deliver a constant drip of value as we went, so we could just keep going indefinitely.

In a very real sense, estimates aren’t even made by engineers at all. If an engineering team comes up with a long estimate for a project that some VP really wants, they will be pressured into lowering it (or some other, more compliant engineering team will be handed the work). If the estimate on an undesirable project - or a project that’s intended to “hold space” for future unplanned work - is too short, the team will often be encouraged to increase it, or their manager will just add a 30% buffer.

One exception to this is projects that are technically impossible, or just genuinely prohibitively difficult. If a manager consistently fails to pressure their teams into giving the “right” estimates, that can send a signal up that maybe the work can’t be done after all. Smart VPs and directors will try to avoid taking on technically impossible projects.

Another exception is areas of the organization that senior leadership doesn’t really care about. In a sleepy backwater, the formal estimation process often does actually get followed to the letter, because there’s no director or VP who wants to jump in and shape the estimates to their ends. This is one way that some parts of a tech company can have drastically different engineering cultures from other parts. I’ll let you imagine the consequences when the company is re-orged and these teams are pulled into the spotlight.

Estimates are political tools for non-engineers in the organization. They help managers, VPs, directors, and C-staff decide which projects get funded and which get cancelled.

The standard way of thinking about estimates is that you start with a proposed piece of software work, and you then go and figure out how long it will take. This is entirely backwards. Instead, teams will often start with the estimate, and then go and figure out what kind of software work they can do to meet it.

Suppose you’re working on an LLM chatbot, and your director wants to implement “talk with a PDF”. If you have six months to do the work, you might implement a robust file upload system, some pipeline to chunk and embed the PDF content for semantic search, a way to extract PDF pages as image content to capture formatting and diagrams, and so on. If you have one day to do the work, you will naturally search for simpler approaches: for instance, converting the PDF to text client-side and sticking the entire thing in the LLM context, or offering a plain-text “grep the PDF” tool.

This is true even at the level of individual lines of code. When you have weeks or months until your deadline, you might spend a lot of time thinking airily about how you could refactor the codebase to make your new feature fit in as elegantly as possible. When you have hours, you will typically be laser-focused on finding an approach that will actually work. There are always many different ways to solve software problems, so engineers have quite a lot of discretion about how to get the work done.

So how do I estimate, given all that?

I gather as much political context as possible before I even look at the code. How much pressure is on this project? Is it a casual ask, or a “we have to find a way to do this”? What kind of estimate is my management chain looking for? There’s a huge difference between “the CTO really wants this in one week” and “we were looking for work for your team and this seemed like it could fit”.

Ideally, I go to the code with an estimate already in hand. Instead of asking myself “how long would it take to do this”, where “this” could be any one of a hundred different software designs, I ask myself “which approaches could be done in one week?”

I spend more time worrying about unknowns than knowns. As I said above, unknown work always dominates software projects. The more “dark forests” in the codebase this feature has to touch, the higher my estimate will be - or, more concretely, the tighter I need to constrain the set of approaches to the known work.

Finally, I go back to my manager with a risk assessment, not with a concrete estimate. I don’t ever say “this is a four-week project”. I say something like “I don’t think we’ll get this done in one week, because X, Y, and Z would all need to go right, and at least one of those things is bound to take a lot more work than we expect.” Ideally, I go back to my manager with a series of plans, not just one:

* We tackle X, Y, and Z directly, which might all go smoothly, but if it blows out we’ll be here for a month

* We bypass Y and Z entirely, which would introduce these other risks but possibly allow us to hit the deadline

* We bring in help from another team that’s more familiar with X and Y, so we just have to focus on Z

In other words, I don’t break down the work to determine “how long it will take”. My management chain already knows how long they want it to take. My job is to figure out the set of software approaches that match that estimate.

Sometimes that set is empty: the project is just impossible, no matter how you slice it. In that case, my management chain needs to get together and figure out some way to alter the requirements. But if I always said “this is impossible”, my managers would find someone else to do their estimates. When I do say it, I’m drawing on a well of trust that I build up by making pragmatic estimates the rest of the time.

Many engineers find this approach distasteful. One reason is that they don’t like estimating in conditions of uncertainty, so they insist on having all the unknown questions answered in advance. I have written a lot about this in Engineers who won’t commit and How I provide technical clarity to non-technical leaders, but suffice it to say that I think it’s cowardly. If you refuse to estimate, you’re forcing someone less technical to estimate for you.

Some engineers think that their job is to constantly push back against engineering management, and that helping their manager find technical compromises is betraying some kind of sacred engineering trust. I wrote about this in Software engineers should be a little bit cynical. If you want to spend your career doing that, that’s fine, but I personally find it more rewarding to find ways to work with my managers (who have almost exclusively been nice people).

Other engineers might say that they rarely feel this kind of pressure from their directors or VPs to alter estimates, and that this is really just the sign of a dysfunctional engineering organization. Maybe! I can only speak for the engineering organizations I’ve worked in. But my suspicion is that these engineers are really just saying that they work “out of the spotlight”, where there’s not much pressure in general and teams can adopt whatever processes they want. There’s nothing wrong with that. But I don’t think it qualifies you to give helpful advice to engineers who do feel this kind of pressure.

The common view is that a manager proposes some technical project, the team gets together to figure out how long it would take to build, and then the manager makes staffing and planning decisions with that information. In fact, it’s the reverse: a manager comes to the team with an estimate already in hand (though they might not come out and admit it), and then the team must figure out what kind of technical project might be possible within that estimate.

This is because estimates are not by or for engineering teams. They are tools managers use to negotiate with each other about planned work. Very occasionally, when a project is literally impossible, the estimate can serve as a way for the team to communicate that fact upwards. But that requires trust. A team that is always pushing back on estimates will not be believed when it does encounter a genuinely impossible proposal.

When I estimate, I extract the range my manager is looking for, and only then do I go through the code and figure out what can be done in that time. I never come back with a flat “two weeks” figure. Instead, I come back with a range of possibilities, each with its own risks, and let my manager make the tradeoff.

It is not possible to accurately estimate software work. Software projects spend most of their time grappling with unknown problems, which by definition can’t be estimated in advance. To estimate well, you must therefore basically ignore all the known aspects of the work, and instead try to make educated guesses about how many unknowns there are, and how scary each unknown is.

edit: I should thank one of my readers, Karthik, who emailed me to ask about estimates, thus revealing to me that I had many more opinions than I thought.

edit: This post got a bunch of comments on Hacker News. Some non-engineers made the point that well-paid professionals should be expected to estimate their work, even if the estimate is completely fictional. Sure, I agree, as long as we’re on the same page that it’s fictional!

A couple of engineers argued that estimation was a solved problem. I’m not convinced by their examples. I agree you can probably estimate “build a user flow in Svelte”, but it’s much harder to estimate “build a user flow in Svelte on top of an existing large codebase”. I should have been more clear in the post that I think that’s the hard part, for the normal reasons that it’s very hard to work in large codebases, which I write about endlessly on this blog.

edit: There are also some comments on Lobste.rs, including a good note that the capability of the team obviously has a huge impact on any estimates. In my experience, this is not commonly understood: companies expect estimates to be fungible between engineers or teams, when in fact some engineers and teams can deliver work ten times more quickly (and others cannot deliver work at all, no matter how much time they have).

Another commenter politely suggested I read Software Estimation: Demystifying the Black Art, which I’ve never heard of. I’ll put it on my list.

...

Read the original on www.seangoedecke.com »

6 440 shares, 20 trendiness

Doing Gigabit Ethernet Over My British Phone Wires

Disclaimer: None of this is written by AI. I’m still a real person writing my own blog like it’s 1999.

I finally figured out how to do Gigabit Ethernet over my existing phone wires.

I’ve mostly lived with powerline adapters over recent years. Some worked well, some did not (try a few and return the ones that don’t work in your home). One I had for a while gave me a stable 30 Mbps, which was little but good enough for internet at the time. I care very much about having stable low latency for gaming, more than bandwidth.

Fast forward to my current situation: that powerline adapter regularly lost connection, which was a major problem. I got some new ones with the latest and greatest G.hn 2400 standard. The final contender served around 180 Mbps to my office (with high variance, 120 to 280 Mbps), or around 80 Mbps to the top floor. It’s good enough to watch YouTube/TV, yet it’s far from impressive.

One peculiar thing about the UK: Internet providers don’t truly offer gigabit internet. They have a range of deals like 30 Mbps — 75 Mbps — 150 Mbps — 300 Mbps — 500 Mbps — 900 Mbps, each one costing a few more pounds per month than the last. This makes the UK simultaneously one of the cheapest and one of the most expensive countries to get Internet.

Long story short: new place, new hardware, new deals, and the internet has been running at 500 Mbps for some time now.

Every 50 GB Helldivers 2 update (because these idiots shipped the same content in duplicate 5 times) is a painful reminder that the setup is not operating at capacity.

Problem: How to get 500 Mbps to my room?

I’ve been looking for a way to reuse phone wires for a while, because British houses are full of phone sockets. There are 2 sockets in my office room.

I can’t stress enough how much we love our phone sockets. It’s not uncommon to have a one-bed flat with 2 phone sockets in the living room, 2 phone sockets in the bedroom, and a master socket in the technical room. It’s ridiculous.

A new house bought today could have 10 phone sockets and 0 Ethernet sockets. There is still no regulation that requires new builds to get Ethernet wiring (as far as I know).

There’s got to be a way to use the existing phone infrastructure.

I know the technology exists. It’s one of the rare cases where the technology exists and is mature, but nobody can be bothered to make products for it.

The standards that run powerline adapters (HomePlug AV200, AV500, G.hn 2400) can work with any pair of wires. It should work ten times better on dedicated phone wires instead of noisy power wires, if only manufacturers could be bothered to pull their fingers out of their arse and make the products that are needed.

After countless years of research, I finally found one German manufacturer that’s making what needs to be made: https://www.gigacopper.net/wp/en/home-networking/

I was lazy, so I ordered online in self-service (which is definitely the wrong way to go about it). It’s available on eBay DE and Amazon DE, and it’s possible to order from either with a UK account; make sure to enter a UK address for delivery (some items don’t allow it).

The better approach is almost certainly to speak to the seller to get a quote, with international shipping and the import invoice excluding VAT (to avoid paying VAT on VAT).

The package got the usual Royal Mail treatment:

* The package was shipped by DHL Germany

* The package was transferred to Royal Mail when entering the UK

* After some days, the DHL website said they tried to deliver but nobody was home; this is bullshit

* The Royal Mail website said the package had reached the depot and was awaiting delivery; this is bullshit

* In reality, the package was stuck at the border, as usual

* Googled to find the “website to pay import fee on parcel”

* Entered the DHL tracking number into the Royal Mail form to get a Royal Mail tracking number

* The website said that the parcel had import fees to pay; this is correct

* Paid the fee online: 20% VAT + a few pounds of handling fees

* The package was scheduled for delivery a few days later

* Royal Mail and DHL updated their statuses another two or three times with false information

* Royal Mail delivered a letter saying there was a package waiting on fees, though they were already paid

Basically, you need to follow the tracking regularly until the package is tagged as lost or failed delivery, which is the cue to pay the import fees.

It’s the normal procedure for buying things from Europe since Brexit in 2020. It’s actually quite shocking that Royal Mail still hasn’t updated their tracking system to be able to give a status of “waiting on import fees to be paid online”. They had 6 years!

This is the gigacopper G4201TM: 1 RJ11 phone line, 1 RJ45 gigabit Ethernet port, 1 power input.

* It came with a German-to-UK power adapter (unexpected and useful)

* It came with a standard RJ11 cable (expected and useless)

* It came with a 3M removable hanging strip to stick it to the wall; the device is very light

There is a gigacopper G4202TM, with an RJ45 to connect to the phone line instead of an RJ11 (not sure if it’s a newer model or just a variant, as that one has two gigabit Ethernet ports). Don’t be confused by an RJ45 port that is not an Ethernet port.

There are also the gigacopper G4201C (1 port) and G4204C (4 ports) for Ethernet over coaxial. Some countries have coax in every room for TV/satellite. This may be of interest to some readers.

Plugged it in and it works!

I discovered soon afterwards that I had bought the wrong item. There is an InHome and a Client/Server variant of the product. Make sure to buy the InHome variant.

* The InHome variant can have up to 16 devices, communicating with any peer on the medium, with sub-millisecond latency.

* The Client/Server variant is preconfigured as a pair, splitting the bandwidth 70% download / 30% upload, with a few milliseconds of latency. I think it’s a use case for ISPs and long-range connections.

Thankfully the difference is only the firmware. I spoke to the vendor, who was very helpful and responsive. They sent me the firmware and the tools to patch it.

I have a fetish for low latency. This screenshot is oddly satisfying.

The web interface says 1713 Mbps on the physical layer; the debugging tool says PHONE 200MHz — Connected 1385 Mbps.

I wanted to verify whether the device can do a full gigabit. Unfortunately, I realized I don’t have any device that can test that.

Phones are wireless, which is too slow to test anything. I checked out of curiosity: my phone did 100 Mbps to 400 Mbps right next to the router. I grabbed two laptops, only to realize they didn’t have any Ethernet ports. I dug up an old laptop from storage with an Ethernet port. It couldn’t boot: the CPU fan didn’t start, and the laptop refused to boot with a dead fan.

There is a hard lesson here: 1 Gbps ought to be enough for any home. Using the phone line is as good as having Ethernet wiring through the house if it can deliver a (shared) 1.7 Gbps link to multiple rooms.

Still, I really wanted to verify that the device can do a full Gbps, so I procured a USB-C to Ethernet adapter.

Full speed achieved, testing from a phone to a computer with iperf3.
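For readers who haven’t used it, an iperf3 test is two commands: run `iperf3 -s` on one machine and `iperf3 -c <server> --json` on the other. The `--json` flag emits a machine-readable report, which a few lines of Python can summarize. The JSON below is a made-up example of the fields a TCP test reports, not my actual measurement:

```python
import json

# Hypothetical iperf3 --json output, trimmed to the fields we read.
sample_report = json.loads("""
{
  "end": {
    "sum_sent":     {"bits_per_second": 941000000.0, "retransmits": 12},
    "sum_received": {"bits_per_second": 936000000.0}
  }
}
""")

def summarize(report: dict) -> str:
    # Received throughput is the figure that matters for "did the link do a gigabit?"
    mbps = report["end"]["sum_received"]["bits_per_second"] / 1e6
    retr = report["end"]["sum_sent"].get("retransmits", 0)
    return f"{mbps:.0f} Mbps received, {retr} retransmits"

print(summarize(sample_report))  # 936 Mbps received, 12 retransmits
```

Watching the retransmit count alongside throughput is also a quick sanity check on link quality, which matters more than raw bandwidth for gaming.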

Some readers might wonder about the wiring.

I didn’t check the wiring before buying anything because it’s pointless. British sockets are always daisy-chained in an incomprehensible maze.

Phone sockets need 2 wires and can be daisy-chained. Ethernet sockets need 8 wires. They often use the same Cat5 cable because it’s the most widely available (an 8-wire cable; the 6 extra wires can remain unconnected).

It’s possible to swap a phone socket for an RJ45 socket if you only have 2 sockets connected with the right cable. It’s not possible when sockets are daisy-chained. (You could put a double or triple RJ45 socket with a switch to break a daisy chain, but it quickly becomes impractical in a British house with 5 to 10 sockets in an arbitrary layout.)

I opened one socket in the office room. There are two Cat5 cables daisy-chained. There are 3 wires connected.

It’s probably daisy-chained with the other socket in the room, or with the socket in the other room that’s closer. Who knows.

I opened the BT master socket in the technical room. It should have the cables coming from the other rooms. It should connect the internal phone wires with the external phone line.

There is one single Cat5 cable. There are 4 wires connected. It’s definitely not a master socket. WTF?!

It’s interesting that this socket has 4 wires connected while the socket in the office has 3 wires connected. The idiot who did the wiring was inconsistent. The gigacopper device can operate over 2 wires (200 MHz Phone SISO) or over 4 wires (100 MHz Phone MIMO). I can try the other modes if I finish the job.

The search for the master socket continues. The cables from the other floors should all be coming down somewhere around here. There is a blank plate next to it (right).

This might be the external phone line? A bunch of wires are crimped together, and the colours do not match. It’s a hell of a mess.

The only sure thing is that they are different cables, because they are different colours. They might be going to a junction box somewhere else. Probably behind a wall that’s impossible to access!

Conclusion: There is zero chance to get proper Ethernet wiring out of this mess.

The gi­ga­cop­per de­vice to do gi­ga­bit Ethernet over phone line is a mir­a­cle!

There is an enor­mous un­tapped mar­ket for gi­ga­bit Ethernet over phone sock­ets in the UK.

...

Read the original on thehftguy.com »

7 303 shares, 16 trendiness

When employees feel slighted, they work less

Small slights from a man­ager may seem like no big deal, but new re­search from Wharton re­veals that even the mildest of mis­treat­ment at work can af­fect more than just em­ployee morale.

The study finds that when man­agers at a na­tional re­tail chain failed to de­liver birth­day greet­ings on time, it re­sulted in a 50% in­crease in ab­sen­teeism and a re­duc­tion of more than two work­ing hours per month. The lost pro­duc­tiv­ity was a form of re­venge, with slighted em­ploy­ees tak­ing more paid sick time, ar­riv­ing late, leav­ing early, and tak­ing longer breaks.

“Insults are about a lack of respect, and that's what this is really all about. There are huge and small lacks of respect, but they all leave a mark,” says Wharton management professor Peter Cappelli, who conducted the study with Liat Eldor and Michal Hodor, both assistant professors at Tel Aviv University's Coller School of Management.

The study, “The Lower Boundary of Workplace Mistreatment: Do Small Slights Matter?”, is published in the journal Proceedings of the National Academy of Sciences. While there are a growing number of papers that examine the effects of severe workplace mistreatment such as sexual and physical harassment, the study is the first to measure the cause and effect of minor infractions.

Read more at Knowledge at Wharton.

...

Read the original on penntoday.upenn.edu »

8 251 shares, 49 trendiness

Adoption of electric vehicles tied to real-world reductions in air pollution, study finds

When California neigh­bor­hoods in­creased their num­ber of zero-emis­sions ve­hi­cles (ZEV) be­tween 2019 and 2023, they also ex­pe­ri­enced a re­duc­tion in air pol­lu­tion. For every 200 ve­hi­cles added, ni­tro­gen diox­ide (NO₂) lev­els dropped 1.1%. The re­sults, ob­tained from a new analy­sis based on statewide satel­lite data, are among the first to con­firm the en­vi­ron­men­tal health ben­e­fits of ZEVs, which in­clude fully elec­tric and plug-in hy­brid cars, in the real world. The study was funded in part by the National Institutes of Health and just pub­lished in The Lancet Planetary Health.

While the shift to elec­tric ve­hi­cles is largely aimed at curb­ing cli­mate change in the fu­ture, it is also ex­pected to im­prove air qual­ity and ben­e­fit pub­lic health in the near term. But few stud­ies have tested that as­sump­tion with ac­tual data, partly be­cause ground-level air pol­lu­tion mon­i­tors have lim­ited spa­tial cov­er­age. A 2023 study from the Keck School of Medicine of USC us­ing these ground-level mon­i­tors sug­gested that ZEV adop­tion was linked to lower air pol­lu­tion, but the re­sults were not de­fin­i­tive.

Now, the same re­search team has con­firmed the link with high-res­o­lu­tion satel­lite data, which can de­tect NO₂ in the at­mos­phere by mea­sur­ing how the gas ab­sorbs and re­flects sun­light. The pol­lu­tant, re­leased from burn­ing fos­sil fu­els, can trig­ger asthma at­tacks, cause bron­chi­tis, and in­crease the risk of heart dis­ease and stroke.

“This immediate impact on air pollution is really important because it also has an immediate impact on health. We know that traffic-related air pollution can harm respiratory and cardiovascular health over both the short and long term,” said Erika Garcia, PhD, MPH, assistant professor of population and public health sciences at the Keck School of Medicine and the study's senior author.

The find­ings of­fer sup­port for the con­tin­ued adop­tion of elec­tric ve­hi­cles. Over the study pe­riod, ZEV reg­is­tra­tions in­creased from 2% to 5% of all light-duty ve­hi­cles (a cat­e­gory that in­cludes cars, SUVs, pickup trucks and vans) across California, sug­gest­ing that the po­ten­tial for im­prov­ing air pol­lu­tion and pub­lic health re­mains largely un­tapped.

“We're not even fully there in terms of electrifying, but our research shows that California's transition to electric vehicles is already making measurable differences in the air we breathe,” said the study's lead author, Sandrah Eckel, PhD, associate professor of population and public health sciences at the Keck School of Medicine.

For the analysis, the researchers divided California into 1,692 neighborhoods, using a geographic unit similar to zip codes. They obtained publicly available data from the state's Department of Motor Vehicles on the number of ZEVs registered in each neighborhood. ZEVs include full battery-electric cars, plug-in hybrids and fuel-cell cars, but not heavier-duty vehicles like delivery trucks and semi trucks.

Next, the re­search team ob­tained data from the Tropospheric Monitoring Instrument (TROPOMI), a high-res­o­lu­tion satel­lite sen­sor that pro­vides daily, global mea­sure­ments of NO₂ and other pol­lu­tants. They used this data to cal­cu­late an­nual av­er­age NO₂ lev­els in each California neigh­bor­hood from 2019 to 2023.

Over the study pe­riod, a typ­i­cal neigh­bor­hood gained 272 ZEVs, with most neigh­bor­hoods adding be­tween 18 and 839. For every 200 new ZEVs reg­is­tered, NO₂ lev­els dropped 1.1%, a mea­sur­able im­prove­ment in air qual­ity.
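The two headline numbers combine into a quick back-of-envelope estimate. A minimal sketch, where compounding the 1.1% drop per 200-vehicle increment is my modeling assumption, not something the article states:

```typescript
// Illustrative model of the study's headline figure: NO2 fell
// about 1.1% for every 200 additional ZEVs in a neighborhood.
// Treating each 200-vehicle increment as compounding is an
// assumption made here for the sketch.
function no2ReductionPercent(addedZevs: number): number {
  const perIncrement = 0.011; // 1.1% drop per 200 ZEVs
  const increments = addedZevs / 200;
  return (1 - Math.pow(1 - perIncrement, increments)) * 100;
}

// For the typical neighborhood's 272 added ZEVs, this implies an
// NO2 reduction on the order of 1.5% under that assumption.
```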

“These findings show that cleaner air isn't just a theory—it's already happening in communities across California,” Eckel said.

To confirm that these results were reliable, the researchers conducted several additional analyses. They accounted for pandemic-related contributors to NO₂ decline, for example by excluding the year 2020 and by controlling for changing gas prices and work-from-home patterns. They also confirmed that neighborhoods that added more gas-powered cars saw the expected rise in pollution. Finally, they replicated their results using updated data from ground-level monitors spanning 2012 to 2023.

“We tested our analysis in many different ways, and the results consistently support our main finding,” Garcia said.

These re­sults show that TROPOMI satel­lite data—which cov­ers nearly the en­tire planet—can re­li­ably track changes in com­bus­tion-re­lated air pol­lu­tion, of­fer­ing a new way to study the ef­fects of the tran­si­tion to elec­tric ve­hi­cles and other en­vi­ron­men­tal in­ter­ven­tions.

Next, Garcia, Eckel and their team are com­par­ing data on ZEV adop­tion with data on asthma-re­lated emer­gency room vis­its and hos­pi­tal­iza­tions across California. The study could be one of the first to doc­u­ment real-world health im­prove­ments as California con­tin­ues to em­brace elec­tric ve­hi­cles.

In ad­di­tion to Garcia and Eckel, the study’s other au­thors are Futu Chen, Sam J. Silva and Jill Johnston from the Department of Population and Public Health Sciences, Keck School of Medicine of USC, University of Southern California; Daniel L. Goldberg from the Milken Institute School of Public Health, The George Washington University; Lawrence A. Palinkas from the Herbert Wertheim School of Public Health and Human Longevity Science, University of California, San Diego; and Alberto Campos and Wilma Franco from the Southeast Los Angeles Collaborative.

This work was sup­ported by the National Institutes of Health/National Institute of Environmental Health Sciences [R01ES035137, P30ES007048]; the National Aeronautics and Space Administration Health and Air Quality Applied Sciences Team [80NSSC21K0511]; and the National Aeronautics and Space Administration Atmospheric Composition Modeling and Analysis Program [80NSSC23K1002].

...

Read the original on keck.usc.edu »

9 241 shares, 14 trendiness

I added a Bluesky comment section to my blog

You can now view replies to this blog post made on Bluesky di­rectly on this web­site. Check it out here!

I’ve al­ways wanted to host a com­ment sec­tion on my site, but it’s dif­fi­cult be­cause the con­tent is sta­t­i­cally gen­er­ated and hosted on a CDN. I could host com­ments on a sep­a­rate VPS or cloud ser­vice. But main­tain­ing a dy­namic web ser­vice like this can be ex­pen­sive and time-con­sum­ing — in gen­eral, I’m not in­ter­ested in be­ing an un­paid, part-time DevOps en­gi­neer.

Recently, however, I read a blog post by Cory Zue about how he embedded a comment section from Bluesky on his blog, and I immediately understood the benefits. With this approach, Bluesky handles all of the difficult work involved in running a social platform: account verification, hosting, storage, spam, and moderation. Meanwhile, because Bluesky is an open platform with a public API, it's easy to embed comments directly on my own site.

There are other services that could be used for this purpose instead. Notably, I could embed replies from the social media formerly known as Twitter. Or I could use a platform like Disqus or even giscus, which hosts comments on GitHub Discussions. But I see Bluesky as a clearly superior choice among these options. For one, Bluesky is built on an open protocol, AT Proto, meaning it can't easily be taken over by an authoritarian billionaire creep. Moreover, Bluesky is a full-fledged social media platform, which naturally makes it a better option for hosting a conversation than GitHub.

Zue pub­lished a stand­alone pack­age called bluesky-com­ments that al­lows em­bed­ding com­ments in a React com­po­nent as he did. But I de­cided to build this fea­ture my­self in­stead. Mainly this is be­cause I wanted to make a few styling changes any­way to match the rest of my site. But I also wanted to leave the op­tion open to adding more fea­tures in the fu­ture, which would be eas­ier to do if I wrote the code my­self. The en­tire im­ple­men­ta­tion is small re­gard­less, amount­ing to only ~200 LOC be­tween the UI com­po­nents and API func­tions.

Initially, I planned to allow people to post directly on Bluesky via my site. This would work by providing an OAuth flow that gives my site permission to post on Bluesky on behalf of the user. I actually did get the auth flow working, but building out a UI for posting and replying to existing comments is difficult to do well. Going down this path quickly leads to building what is essentially a custom Bluesky client, which I didn't have the time or interest in doing right now. Moreover, because the user needs to go through the auth flow and sign in to their Bluesky account, the process is not really much easier than posting directly on a linked Bluesky post.

Without the re­quire­ment of al­low­ing oth­ers to di­rectly post on my site, the im­ple­men­ta­tion be­came much sim­pler. Essentially, my task was to spec­ify a Bluesky post that cor­re­sponds to the ar­ti­cle in the site’s meta­data. Then, when the page loads I fetch the replies to that post from Bluesky, parse the re­sponse, and dis­play the re­sults in a sim­ple com­ment sec­tion UI.

As ex­plained in my last post, this site is built us­ing React Server Components and Parcel. The con­tent of my ar­ti­cles are writ­ten us­ing MDX, an ex­ten­sion to Markdown that al­lows di­rectly em­bed­ding JavaScript and JSX. In each post, I ex­port a meta­data ob­ject that I val­i­date us­ing a Zod schema. For in­stance, the meta­data for this post looks like this:
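A minimal sketch of such a metadata export; apart from bskyPostId, which the next paragraph describes, the field names and values are illustrative placeholders rather than the post's actual metadata:

```typescript
// Sketch of the metadata object exported from an MDX post.
// Only the bskyPostId field is confirmed by the post; the other
// fields and all values are placeholders for illustration.
export const metadata = {
  title: "I added a Bluesky comment section to my blog",
  bskyPostId: "3lexamplerkey", // placeholder record key of the linked Bluesky post
};
```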

The value of bsky­PostId ref­er­ences the Bluesky post from which I’ll pull replies to dis­play in the com­ment sec­tion. Because my pro­ject is built in TypeScript, it was easy to in­te­grate with the Bluesky TypeScript SDK (@bluesky/api on NPM). Reading the Bluesky API doc­u­men­ta­tion and Zue’s im­ple­men­ta­tion led me to the get­Post­Thread end­point. Given an AT Protocol URI, this end­point re­turns an ob­ject with data on the given post and its replies.
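A sketch of calling getPostThread against Bluesky's public AppView with plain fetch rather than the SDK, assuming the post is identified by the author's handle (or DID) plus the bskyPostId record key; the helper names are mine:

```typescript
// Build the public getPostThread request URL for a post identified
// by the author's handle (or DID) and its record key.
function postThreadUrl(author: string, postId: string): string {
  const atUri = `at://${author}/app.bsky.feed.post/${postId}`;
  return (
    "https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread" +
    `?uri=${encodeURIComponent(atUri)}&depth=10`
  );
}

// Fetch the thread and pull out the replies; the replies field is
// absent on posts with no comments yet, hence the empty fallback.
async function fetchReplies(author: string, postId: string) {
  const res = await fetch(postThreadUrl(author, postId));
  if (!res.ok) throw new Error(`getPostThread failed: ${res.status}`);
  const data = await res.json();
  return data.thread?.replies ?? [];
}
```

The endpoint needs no authentication for public posts, which is what makes a purely static, read-only comment section possible.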

I could have interacted directly with the Bluesky API from my React component using fetch and useEffect. However, it can be a bit tricky to correctly handle loading and error states, even for a simple feature like this. Because of this, I decided to use the TanStack react-query package to manage the API request/response cycle. This library takes care of the messy work of handling errors, retries, and loading states while I simply provide it a function to fetch the post data.

Once I ob­tain the Bluesky re­sponse, the next task is pars­ing out the con­tent and meta­data for the replies. Bluesky sup­ports a rich con­tent struc­ture in its posts for rep­re­sent­ing markup, ref­er­ences, and at­tach­ments. Building out a UI that fully re­spects this rich con­tent would be dif­fi­cult. Instead, I de­cided to keep it sim­ple by just pulling out the text con­tent from each re­ply.

Even so, building a UI that properly displays threaded comments, particularly one that is formatted well on small mobile devices, can be tricky. For now, my approach was to again keep it simple. I indented each reply and added a left border to make it easier to follow reply threads. Otherwise, I mostly copied design elements for the layout of the profile picture and post date from Bluesky.

Lastly, I added a UI com­po­nent link­ing to the par­ent post on Bluesky, and en­cour­ag­ing peo­ple to add to the con­ver­sa­tion there. With this, the read-only com­ment sec­tion im­ple­men­ta­tion was com­plete. If there’s in­ter­est, I could pub­lish my ver­sion of Bluesky com­ments as a stand­alone pack­age. But sev­eral of the choices I made were rel­a­tively spe­cific to my own site. Moreover, the im­ple­men­ta­tion is sim­ple enough that oth­ers could prob­a­bly build their own ver­sion from read­ing the source code, just as I did us­ing Zue’s ver­sion.

Let me know what you think by re­ply­ing on Bluesky. Hopefully this can help in­crease en­gage­ment with my blog posts, but then again, my last ar­ti­cle gen­er­ated no replies, so maybe not 😭.

Join the con­ver­sa­tion by re­ply­ing on Bluesky…

...

Read the original on micahcantor.com »

10 198 shares, 22 trendiness

Europe wants to end its dangerous reliance on US internet technology

Imagine the in­ter­net sud­denly stops work­ing. Payment sys­tems in your lo­cal food store go down. Healthcare sys­tems in the re­gional hos­pi­tal flat­line. Your work soft­ware tools, and all the in­for­ma­tion they con­tain, dis­ap­pear.

You reach out for in­for­ma­tion but strug­gle to com­mu­ni­cate with fam­ily and friends, or to get the lat­est up­dates on what is hap­pen­ing, as so­cial me­dia plat­forms are all down. Just as some­one can pull the plug on your com­puter, it’s pos­si­ble to shut down the sys­tem it con­nects to.

This is­n’t an out­landish sce­nario. Technical fail­ures, cy­ber-at­tacks and nat­ural dis­as­ters can all bring down key parts of the in­ter­net. And as the US gov­ern­ment makes in­creas­ing de­mands of European lead­ers, it is pos­si­ble to imag­ine Europe los­ing ac­cess to the dig­i­tal in­fra­struc­ture pro­vided by US firms as part of the geopo­lit­i­cal bar­gain­ing process.

At the World Economic Forum in Davos, Switzerland, the EU's president, Ursula von der Leyen, has highlighted the “structural imperative” for Europe to build “a new form of independence” — including in its technological capacity and security. And, in fact, moves are already being made across the continent to start regaining some independence from US technology.

A small number of US-headquartered big tech companies now control a large proportion of the world's cloud computing infrastructure: the global network of remote servers that store, manage and process all our apps and data. Amazon Web Services (AWS), Microsoft Azure and Google Cloud are reported to hold about 70% of the European market, while European cloud providers have only 15%.

My research supports the idea that relying on a few global providers increases vulnerability for Europe's private and public sectors — including the risk of cloud computing disruption, whether caused by technical issues, geopolitical disputes or malicious activity.

Two re­cent ex­am­ples — both the re­sult of ap­par­ent tech­ni­cal fail­ures — were the hours‑long AWS in­ci­dent in October 2025, which dis­rupted thou­sands of ser­vices such as bank­ing apps across the world, and the ma­jor Cloudflare in­ci­dent two months later, which took LinkedIn, Zoom and other com­mu­ni­ca­tion plat­forms of­fline.

The im­pact of a ma­jor power dis­rup­tion on cloud com­put­ing ser­vices was also demon­strated when Spain, Portugal and some of south-west France en­dured a mas­sive power cut in April 2025.

There are signs that Europe is start­ing to take the need for greater dig­i­tal in­de­pen­dence more se­ri­ously. In the Swedish coastal city of Helsingborg, for ex­am­ple, a one-year pro­ject is test­ing how var­i­ous pub­lic ser­vices would func­tion in the sce­nario of a dig­i­tal black­out.

Would el­derly peo­ple still re­ceive their med­ical pre­scrip­tions? Can so­cial ser­vices con­tinue to pro­vide care and ben­e­fits to all the city’s res­i­dents?

This pi­o­neer­ing pro­ject seeks to quan­tify the full range of hu­man, tech­ni­cal and le­gal chal­lenges that a col­lapse of tech­ni­cal ser­vices would cre­ate, and to un­der­stand what level of risk is ac­cept­able in each sec­tor. The aim is to build a model of cri­sis pre­pared­ness that can be shared with other mu­nic­i­pal­i­ties and re­gions later this year.

Elsewhere in Europe, other fore­run­ners are tak­ing ac­tion to strengthen their dig­i­tal sov­er­eignty by wean­ing them­selves off re­liance on global big tech com­pa­nies — in part through col­lab­o­ra­tion and adop­tion of open source soft­ware. This tech­nol­ogy is treated as a dig­i­tal pub­lic good that can be moved be­tween dif­fer­ent clouds and op­er­ated un­der sov­er­eign con­di­tions.

In north­ern Germany, the state of Schleswig-Holstein has made per­haps the clear­est break with dig­i­tal de­pen­dency. The state gov­ern­ment has re­placed most of its Microsoft-powered com­puter sys­tems with open-source al­ter­na­tives, can­celling nearly 70% of its li­censes. Its tar­get is to use big tech ser­vices only in ex­cep­tional cases by the end of the decade.

Across France, Germany, the Netherlands and Italy, gov­ern­ments are in­vest­ing both na­tion­ally and transna­tion­ally in the de­vel­op­ment of dig­i­tal open-source plat­forms and tools for chat, video and doc­u­ment man­age­ment — akin to dig­i­tal Lego bricks that ad­min­is­tra­tions can host on their own terms.

In Sweden, a sim­i­lar sys­tem for chat, video and on­line col­lab­o­ra­tion, de­vel­oped by the National Insurance Agency, runs in do­mes­tic data cen­tres rather than for­eign clouds. It is be­ing of­fered as a ser­vice for Swedish pub­lic au­thor­i­ties look­ing for sov­er­eign dig­i­tal al­ter­na­tives.

For Europe — and any na­tion — to mean­ing­fully ad­dress the risks posed by dig­i­tal black­out and cloud col­lapse, dig­i­tal in­fra­struc­ture needs to be treated with the same se­ri­ous­ness as phys­i­cal in­fra­struc­ture such as ports, roads and power grids.

Control, maintenance and crisis preparedness of digital infrastructure should be seen as core public responsibilities, rather than something to be outsourced to global big tech firms that are open to foreign influence.

To en­cour­age greater fo­cus on dig­i­tal re­silience among its mem­ber states, the EU has de­vel­oped a cloud sov­er­eignty frame­work to guide pro­cure­ment of cloud ser­vices — with the in­ten­tion of keep­ing European data un­der European con­trol. The up­com­ing Cloud and AI Development Act is ex­pected to bring more fo­cus and re­sources to this area.

Governments and pri­vate com­pa­nies should be en­cour­aged to de­mand se­cu­rity, open­ness and in­ter­op­er­abil­ity when seek­ing bids for pro­vi­sion of their cloud ser­vices — not merely low prices. But in the same way, as in­di­vid­u­als, we can all make a dif­fer­ence with the choices we make.

Just as it’s ad­vis­able to en­sure your own ac­cess to food, wa­ter and med­i­cine in a time of cri­sis, be mind­ful of what ser­vices you use per­son­ally and pro­fes­sion­ally. Consider where your emails, per­sonal pho­tos and con­ver­sa­tions are stored. Who can ac­cess and use your data, and un­der what con­di­tions? How eas­ily can every­thing be backed up, re­trieved and trans­ferred to an­other ser­vice?

No coun­try, let alone con­ti­nent, will ever be com­pletely dig­i­tally in­de­pen­dent, and nor should they be. But by pulling to­gether, Europe can en­sure its dig­i­tal sys­tems re­main ac­ces­si­ble even in a cri­sis — just as is ex­pected from its phys­i­cal in­fra­struc­ture.

...

Read the original on theconversation.com »
