10 interesting stories served every morning and every evening.




1 738 shares, 33 trendiness

Euro firms must ditch Uncle Sam's clouds and go EU-native

Opinion I’m an eighth-gen­er­a­tion American, and let me tell you, I would­n’t trust my data, se­crets, or ser­vices to a US com­pany these days for love or money. Under our cur­rent gov­ern­ment, we’re sim­ply not trust­wor­thy.

In the Trump‑redux era of 2026, European en­ter­prises are fi­nally tak­ing data se­ri­ously, and that means pack­ing up from Redmond-by-Seattle and mov­ing their most sen­si­tive work­loads home. This is­n’t just com­pli­ance the­ater; it’s a straight‑up na­tional eco­nomic se­cu­rity play.

Europe’s digital sovereignty paranoia, long waved off as regulatory chatter, is now feeding directly into procurement decisions. Gartner told The Reg last year that IT spending in Europe is set to grow by 11 percent in 2026, hitting $1.4 trillion, with a big chunk rolling into “sovereign cloud” options and on-prem/edge architectures.

The kicker? Fully 61 per­cent of European CIOs and tech lead­ers say they want to in­crease their use of lo­cal cloud providers. More than half say geopol­i­tics will pre­vent them from lean­ing fur­ther on US‑based hy­per­scalers.

The American hypercloud vendors have figured this out. AWS recently made its European Sovereign Cloud available. This AWS cloud, Amazon claims, is “entirely located within the EU, and physically and logically separate from other AWS Regions.” On top of that, EU residents will “independently operate it,” and it will be backed by “strong technical controls, sovereign assurances, and legal protections designed to meet the needs of European governments and enterprises for sensitive data.”

Many EU-based com­pa­nies aren’t pleased with this Euro-washing of American hy­per­cloud ser­vices. The Cloud Infrastructure Service Providers in Europe (CISPE) trade as­so­ci­a­tion ac­cuses the EU Cloud Sovereignty Framework of be­ing set up to fa­vor the in­cum­bent (American) hy­per­cloud providers.

You don’t need a DEA warrant or a Justice Department subpoena to see the trend: Europe’s 90-plus-percent dependency on US cloud infrastructure, as former European Commission advisor Cristina Caffarra put it, is a single-shock-event security nightmare waiting to rupture the EU’s digital stability.

Seriously. What will you do if Washington de­cides to un­plug you? Say Trump gets up on the wrong side of the bed and de­cides to in­vade Greenland. There goes NATO, and in all the saber-rat­tling lead­ing up to the 10th Mountain Division be­ing shipped to Nuuk, he or­ders American com­pa­nies to cut their ser­vices to all EU coun­tries and the UK.

With the way things are go­ing, they’re not go­ing to say no. I mean, CEOs Tim Cook of Apple, Eric Yuan of Zoom, Lisa Su of AMD, and — pay at­ten­tion — Amazon’s Andy Jassy all went obe­di­ently to watch a fea­ture-length White House screen­ing of Melania, the uni­ver­sally-loathed, 104‑minute Amazon‑produced doc­u­men­tary about First Lady Melania Trump.

Sure, that’s a silly example, but for American companies to do business today, they’re kowtowing to Trump. Or, take a far more serious example: when Minnesota company CEOs called for “de-escalation” in the state, there was not one word about ICE or the government’s role in the bloodshed. It was the corporate equivalent of the mealy-mouthed “thoughts and prayers” American right-wingers always offer after a US school shooting.

Some companies have already figured out which way the wind is blowing. Airbus, the European aerospace titan, has put out a €50 million, decade-long tender to migrate its mission-critical applications to a “sovereign European cloud.” Airbus wants its whole stack — data at rest, data in transit, logging, IAM, and security-monitoring infrastructure — all rooted in EU law and overseen by EU operators. As Catherine Jestin, Airbus’s executive vice president of digital, told The Register: “We want to ensure this information remains under European control.”

Who can blame them? Thanks to the American CLOUD Act and related US surveillance statutes, US-headquartered providers must hand over European data regardless of where the bytes sit. Exhibit A: Microsoft has already conceded that it cannot guarantee data independence from US law enforcement. Airbus is betting that “data residency on paper” from AWS-styled “EU sections” is not enough. Real sovereignty demands EU-owned and EU-run operations with full contractual and legal firewalls. Sure, your data may live in Frankfurt, but your fate still rests in Seattle, Redmond, or Mountain View if an American company owns your cloud provider.

Besides, do you re­ally want some Trump ap­pa­ratchik get­ting their hands on your data? I mean, this is a gov­ern­ment where Madhu Gottumukkala, the act­ing di­rec­tor of the US Cybersecurity and Infrastructure Security Agency, up­loaded sen­si­tive data into ChatGPT!

In response, Brussels is pushing an open source-led exit from hyperscaler lock-in. Ministries are standardizing on Nextcloud-style collaboration stacks instead of Microsoft 365, and Euro-native clouds are being funded via the European Cloud Alliance. Some countries, like France, are already shoving Zoom, Teams, and other US videoconferencing platforms out the door in favor of a local service.

If you’re run­ning an EU‑based firm in 2026, the take­away is­n’t that AWS‑in‑Frankfurt is evil; it’s that for cer­tain work­loads, es­pe­cially na­tional se­cu­rity, in­dus­trial IP, or high‑pro­file con­sumer data fran­chises, EU‑native cloud and ser­vices are no longer a nice‑to‑have but a busi­ness con­ti­nu­ity plan re­quire­ment.

It’s time to get se­ri­ous about dig­i­tal sov­er­eignty. The clock is tick­ing, and there’s no telling when Trump will go off. ®

...

Read the original on www.theregister.com »

2 639 shares, 38 trendiness

Mobile carriers can get your GPS location

In iOS 26.3, Apple introduced a new privacy feature which limits the “precise location” data made available to cellular networks via cell towers. The feature is only available to devices with Apple’s in-house modem introduced in 2025. The announcement says:

Cellular net­works can de­ter­mine your lo­ca­tion based on which cell tow­ers your de­vice con­nects to.

This is well-known. I have served on a jury where the pros­e­cu­tion ob­tained lo­ca­tion data from cell tow­ers. Since cell tow­ers are sparse (especially be­fore 5G), the ac­cu­racy is in the range of tens to hun­dreds of me­tres.

But this is not the whole truth, be­cause cel­lu­lar stan­dards have built-in pro­to­cols that make your de­vice silently send GNSS (i.e. GPS, GLONASS, Galileo, BeiDou) lo­ca­tion to the car­rier. This would have the same pre­ci­sion as what you see in your Map apps, in sin­gle-digit me­tres.

In 2G and 3G this is called the Radio Resource LCS Protocol (RRLP).

So the network simply asks “tell me your GPS coordinates if you know them” and the phone will respond.

In 4G and 5G this is called LTE Positioning Protocol (LPP)

RRLP, RRC, and LPP are na­tively con­trol-plane po­si­tion­ing pro­to­cols. This means that they are trans­ported in the in­ner work­ings of cel­lu­lar net­works and are prac­ti­cally in­vis­i­ble to end users.

It’s worth not­ing that GNSS lo­ca­tion is never meant to leave your de­vice. GNSS co­or­di­nates are cal­cu­lated en­tirely pas­sively, your de­vice does­n’t need to send a sin­gle bit of in­for­ma­tion. Using GNSS is like find­ing out where you are by read­ing a road sign: you don’t have to tell any­one else you read a road sign, any­one can read a road sign, and the peo­ple who put up road signs don’t know who read which road sign when.

These ca­pa­bil­i­ties are not se­crets but some­how they have mostly slid un­der the radar of the pub­lic con­scious­ness. They have been used in the wild for a long time, such as by the DEA in the US in 2006:

[T]he DEA agents pro­cured a court or­der (but not a search war­rant) to ob­tain GPS co­or­di­nates from the couri­er’s phone via a ping, or sig­nal re­quest­ing those co­or­di­nates, sent by the phone com­pany to the phone.

And by Shin Bet in Israel, which tracks every­one every­where all the time:

The GSS Tool was based on cen­tral­ized cel­lu­lar track­ing op­er­ated by Israel’s General Security Services (GSS). The tech­nol­ogy was based on a frame­work that tracks all the cel­lu­lar phones run­ning in Israel through the cel­lu­lar com­pa­nies’ data cen­ters. According to news sources, it rou­tinely col­lects in­for­ma­tion from cel­lu­lar com­pa­nies and iden­ti­fies the lo­ca­tion of all phones through cel­lu­lar an­tenna tri­an­gu­la­tion and GPS data.

Notably, the Israeli gov­ern­ment started us­ing the data for con­tact trac­ing in March 2020, only a few weeks af­ter the first Israeli COVID-19 case. An in­di­vid­ual would be sent an SMS mes­sage in­form­ing them of close con­tact with a COVID pa­tient and re­quired to quar­an­tine. This is good ev­i­dence that the lo­ca­tion data Israeli car­ri­ers are col­lect­ing are far more pre­cise than what cell tow­ers alone can achieve.

A ma­jor caveat is that I don’t know if RRLP and LPP are the ex­act tech­niques, and the only tech­niques, used by DEA, Shin Bet, and pos­si­bly oth­ers to col­lect GNSS data; there could be other pro­to­cols or back­doors we’re not privy to.

Another un­known is whether these pro­to­cols can be ex­ploited re­motely by a for­eign car­rier. Saudi Arabia has abused SS7 to spy on peo­ple in the US, but as far as I know this only lo­cates a de­vice to the cov­er­age area of a Mobile Switching Center, which is less pre­cise than cell tower data. Nonetheless, given the abysmal cul­ture, com­pe­tency, and in­tegrity in the tele­com in­dus­try, I would not be shocked if it’s pos­si­ble for a state ac­tor to ob­tain the pre­cise GNSS co­or­di­nates of any­one on earth us­ing a phone num­ber/​IMEI.

Apple made a good step in iOS 26.3 to limit at least one vec­tor of mass sur­veil­lance, en­abled by hav­ing full con­trol of the mo­dem sil­i­con and firmware. They must now al­low users to dis­able GNSS lo­ca­tion re­sponses to mo­bile car­ri­ers, and no­tify the user when such at­tempts are made to their de­vice.

...

Read the original on an.dywa.ng »

3 602 shares, 38 trendiness

Finland looks to end "uncontrolled human experiment" with Australia-style ban on social media

Children un­der the age of 15 might be delet­ing their apps if the gov­ern­men­t’s plans are passed into law.

Prime Minister Petteri Orpo (NCP), the Finnish pub­lic health au­thor­ity THL and two-thirds of Finns are in favour of ban­ning or re­strict­ing the use of so­cial me­dia by un­der-15s.


Lunch break at the Finnish International School of Tampere (FISTA) is a bois­ter­ous time.

The yard is filled with chil­dren — rang­ing from grades 1 to 9, or ages 6 to 16 — run­ning around, shout­ing, play­ing foot­ball, shoot­ing bas­ket­ball hoops, do­ing what kids do.

And there’s not a sin­gle screen in sight.

FISTA has taken ad­van­tage of the law change, brought in last August, which al­lows schools to re­strict or com­pletely ban the use of mo­bile phones dur­ing school hours. At FISTA, this means no phones at all un­less specif­i­cally used for learn­ing in the class­room.

“We’ve seen that cutting down on the possibilities for students to use their phones, during the breaks for instance, has spurred a lot of creativity,” FISTA vice principal Antti Koivisto notes.

“They’re more active, doing more physical things like playing games outdoors or taking part in the organised break activities or just socialising with each other.”

With the smart­phone re­stric­tion in schools widely con­sid­ered to have been a suc­cess, Finland’s gov­ern­ment has now set its sights on so­cial me­dia plat­forms.

Prime Minister Petteri Orpo (NCP) said ear­lier this month that he sup­ports ban­ning the use of so­cial me­dia by chil­dren un­der the age of 15.

“I am deeply concerned about the lack of physical activity among children and young people, and the fact that it is increasing,” Orpo said at the time.

And there is a grow­ing groundswell of sup­port for Finland in­tro­duc­ing such a ban. Two-thirds of re­spon­dents to a sur­vey pub­lished ear­lier this week said they back a ban on so­cial me­dia for un­der-15s. This is a near 10 per­cent­age point jump com­pared to a sim­i­lar sur­vey car­ried out just last sum­mer.

The concerns over social media, and in particular the effects on children, have been well-documented — but Finnish researcher Silja Kosola’s recent description of the phenomenon as an “uncontrolled human experiment” has grabbed people’s attention once again.

Kosola, an as­so­ci­ate pro­fes­sor in ado­les­cent med­i­cine, has re­searched the im­pact of so­cial me­dia on young peo­ple, and tells Yle News that the con­se­quences are not very well un­der­stood.

“We see a rise in self-harm and especially eating disorders. We see a big separation in the values of young girls and boys, which is also a big problem in society,” Kosola explains.

In the video be­low, Silja Kosola ex­plains the detri­men­tal ef­fects that ex­ces­sive use of so­cial me­dia can have on young peo­ple.

She fur­ther notes that cer­tain as­pects of Finnish cul­ture — such as the in­de­pen­dence and free­dom granted to chil­dren from a young age — have un­wit­tingly ex­ac­er­bated the ill ef­fects of so­cial me­dia use.

“We have given smartphones to younger people more than anywhere else in the world. Just a couple of years ago, about 95 percent of first graders had their own smartphone, and that hasn’t happened anywhere else,” she says.

Since 10 December last year, chil­dren un­der the age of 16 in Australia have been banned from us­ing so­cial me­dia plat­forms such as TikTok, Snapchat, Facebook, Instagram and YouTube.

Prime Minister Anthony Albanese be­gan draft­ing the leg­is­la­tion af­ter he re­ceived a heart­felt let­ter from a griev­ing mother who lost her 12-year-old daugh­ter to sui­cide.

Although Albanese has never revealed the details of the letter, he told public broadcaster ABC that it was “obvious social media had played a key role” in the young girl’s death.

The leg­is­la­tion aims to shift the bur­den away from par­ents and chil­dren and onto the so­cial me­dia com­pa­nies, who face fines of up to 49.5 mil­lion Australian dol­lars (29 mil­lion eu­ros) if they con­sis­tently fail to keep kids off their plat­forms.

Clare Armstrong, ABC’s chief digital political correspondent, told Yle News that the initial reaction to the roll-out has been “some confusion but no little relief”.

“The government often talks about this law as being a tool to help parents and other institutions enforce and start conversations about tech and social media in ways that, before, they couldn’t,” she says.

Although it is still early days, as the ban has only been in force for about six weeks, Armstrong adds that the early in­di­ca­tors have been good.

ABC jour­nal­ist Clare Armstrong ex­plains in the video be­low how chil­dren in Australia have been spend­ing their time since the so­cial me­dia ban was in­tro­duced.

However, she adds a note of cau­tion to any coun­tries — such as Finland — look­ing to em­u­late the Australian model, not­ing that com­mu­ni­ca­tion is key.

“Because you can write a very good law, but if the public doesn’t understand it, and if it can’t be enforced at that household level easily, then it’s bound to fail,” Armstrong says.

Seona Candy, an Australian liv­ing in Helsinki for over eight years, has been keenly fol­low­ing the events in her home­land since the so­cial me­dia ban came into ef­fect in December.

She has heard anecdotally that if kids find themselves blocked from one platform, “they just set up an account on another, ones that maybe their parents don’t even know exist”.

“And this is then much, much harder, because those platforms don’t have parental controls, so they don’t have those things already designed into them that the more mainstream platforms do,” Candy says.

Because of this issue, and others she has heard about, she warns against Finland introducing like-for-like legislation based around Australia’s “reactive, knee-jerk” law change.

“I think the Finnish government should really invest in digital education, and digital literacy, and teach kids about digital safety. Finland is world-famous for education, and for media literacy. Play to your strengths, right?”

The All Points North pod­cast asked if Finland should in­tro­duce a sim­i­lar ban on so­cial me­dia as in Australia. You can lis­ten to the episode via this em­bed­ded player, on Yle Areena, via Apple, Spotify or wher­ever you get your pod­casts.

...

Read the original on yle.fi »

4 279 shares, 18 trendiness

Swift is a more convenient Rust

Rust is one of the most loved languages out there, is fast, and has an amazing community. Rust invented the concept of ownership as a solution to memory management issues without resorting to something slower like garbage collection or reference counting. But when you don’t need to be quite as low level, it gives you utilities such as Rc, Arc and Cow to do reference counting and “clone-on-write” in your code. And when you need to go lower-level still, you can use the unsafe system and access raw C pointers.

Rust also has a bunch of awe­some fea­tures from func­tional lan­guages like tagged enums, match ex­pres­sions, first class func­tions and a pow­er­ful type sys­tem with gener­ics.

Rust has an LLVM-based com­piler which lets it com­pile to na­tive code and WASM.

I’ve also been do­ing a bit of Swift pro­gram­ming for a cou­ple of years now. And the more I learn Rust, the more I see a re­flec­tion of Swift. (I know that Swift stole a lot of ideas from Rust, I’m talk­ing about my own per­spec­tive here).

Swift, too, has awe­some fea­tures from func­tional lan­guages like tagged enums, match ex­pres­sions and first-class func­tions. It too has a very pow­er­ful type sys­tem with gener­ics.

Swift too gives you com­plete type-safety with­out a garbage col­lec­tor. By de­fault, every­thing is a value type with copy-on-write” se­man­tics. But when you need ex­tra speed you can opt into an own­er­ship sys­tem and move” val­ues to avoid copy­ing. And if you need to go even lower level, you can use the un­safe sys­tem and ac­cess raw C point­ers.

Swift has an LLVM-based com­piler which lets it com­pile to na­tive code and WASM.

You’re probably feeling like you just read the same paragraphs twice. This is no accident. Swift is extremely similar to Rust and has most of the same feature-set. But there is a very big difference in perspective. If you consider the default memory model, this will start to make a lot of sense.

Rust is a low-level sys­tems lan­guage at heart, but it gives you the tools to go higher level. Swift starts at a high level and gives you the abil­ity to go low-level.

The most obvious example of this is the memory management model. Swift uses value types by default with copy-on-write semantics. This is the equivalent of using Cow<> for all your values in Rust. But defaults matter. Rust makes it easy to use “moved” and “borrowed” values but requires extra ceremony to use Cow<> values, as you need to “unwrap” them with .to_mut() to actually mutate the value within. Swift makes these copy-on-write values easy to use and instead requires extra ceremony to use borrowing and moving. Rust is faster by default, Swift is simpler and easier by default.
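As a rough illustration of that difference in defaults, here is a sketch of my own (not from the original article; Point and magnitude are made-up names, and the consuming modifier requires Swift 5.9 or later):

struct Point {
    var x: Double
    var y: Double
}

var a = Point(x: 1, y: 2)
var b = a          // value semantics: `b` is an independent copy of `a`
b.x = 99           // `a.x` is still 1

// Opting into ownership (Swift 5.9+): `consuming` hands ownership of the
// argument to the callee, which lets the compiler avoid an extra copy.
func magnitude(of p: consuming Point) -> Double {
    (p.x * p.x + p.y * p.y).squareRoot()
}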

Swift’s syn­tax is a mas­ter­class in tak­ing awe­some func­tional lan­guage con­cepts and hid­ing them in C-like syn­tax to trick the de­vel­op­ers into ac­cept­ing them.

Consider match statements. In Rust, a match expression pattern-matches over an enum’s variants and evaluates to a value, with the compiler checking that every case is covered.

Here’s how that same kind of code is written in Swift:
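The original post’s code listings did not survive extraction here, so the following is an illustrative stand-in of my own (Shape and area are made-up names). It shows Swift’s exhaustive, non-falling-through pattern matching over an enum with associated values; since Swift 5.9 the same switch can also be used directly as an expression.

enum Shape {
    case circle(radius: Double)
    case rectangle(width: Double, height: Double)
}

func area(of shape: Shape) -> Double {
    // No `default` needed: the compiler checks that every case is handled.
    switch shape {
    case .circle(let radius):
        return Double.pi * radius * radius
    case .rectangle(let width, let height):
        return width * height
    }
}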

Swift doesn’t have a match statement or expression. It has a switch statement that developers are already familiar with. Except this switch statement is actually not a switch statement at all. It’s an expression. It doesn’t “fallthrough”. It does pattern matching. It’s just a match expression with a different name and syntax.

In fact, Swift treats enums as more than just types and lets you put methods directly on them:
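Again the original listing is missing, so here is a small made-up example (TrafficLight is my own name) of a method declared directly on an enum:

enum TrafficLight {
    case red, yellow, green

    // A method defined directly on the enum.
    func next() -> TrafficLight {
        switch self {
        case .red:    return .green
        case .green:  return .yellow
        case .yellow: return .red
        }
    }
}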

Rust doesn’t have null, but it does have None. Swift has nil, but it’s really just a None in hiding. Instead of an Option, Swift lets you use T?, but the compiler still forces you to check that the value is not nil before you can use it.

You get the same safety with more con­ve­nience since you can do this in Swift with an op­tional type:

let val: T?
if let val {
    // val is now of type `T`.
}

Also, you’re not forced to wrap every value with a Some(val) be­fore re­turn­ing it. The Swift com­piler takes care of that for you. A T will trans­par­ently be con­verted into a T? when needed.
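For instance, in a sketch of my own (firstElement is a made-up name), a plain value is promoted to the optional return type automatically:

func firstElement(of items: [Int]) -> Int? {
    if items.isEmpty { return nil }
    return items[0]   // a plain Int is implicitly converted to Int? here
}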

Rust does­n’t have try-catch. Instead it has a Result type which con­tains the suc­cess and er­ror types.

Swift does­n’t have a try-catch ei­ther, but it does have do-catch and you have to use try be­fore call­ing a func­tion that could throw. Again, this is just de­cep­tion for those de­vel­op­ers com­ing from C-like lan­guages. Swift’s er­ror han­dling works ex­actly like Rust’s be­hind the scenes, but it is hid­den in a clever, fa­mil­iar syn­tax.

func usesErrorThrowingFunction() throws {
    let x = try thisFnCanThrow()
}

func handlesErrors() {
    do {
        let x = try thisFnCanThrow()
    } catch let err {
        // handle the `err` here.
    }
}

This is very similar to how Rust lets you use ? at the end of statements to automatically forward errors, but you don’t have to wrap your success values in Ok().

There are many com­mon prob­lems that Rust’s com­piler will catch at com­pile time and even sug­gest so­lu­tions for you. The ex­am­ple that por­trays this well is self-ref­er­enc­ing enums.

Consider an enum that represents a tree. Since it is a recursive type, Rust will force you to use something like Box<> for referencing a type within itself.

This makes the problem explicit and forces you to deal with it directly. Swift is a little more automatic.
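The tree enum that the following note refers to was lost in extraction; a minimal Swift version of it presumably looks something like this:

indirect enum Tree {
    case leaf(Int)
    case node(Tree, Tree)
}

// No Box<> or Rc<> needed; the compiler inserts the indirection for you.
let tree: Tree = .node(.leaf(1), .node(.leaf(2), .leaf(3)))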

Note that you still have to annotate this enum with the indirect keyword to indicate that it is recursive. But once you’ve done that, Swift’s compiler takes care of the rest. You don’t have to think about Box<> or Rc<>. The values just work normally.

Swift was designed to replace Objective-C and needed to be able to interface with existing code. So, it has made a lot of pragmatic choices that make it a much less “pure” and “minimalist” language. Swift is a pretty big language compared to Rust and has many more features built in. However, Swift is designed with “progressive disclosure” in mind, which means that just as soon as you think you’ve learned the language, a little more of the iceberg pops out of the water.

Here are just some of the lan­guage fea­tures:

Swift is a far eas­ier lan­guage to get started and pro­duc­tive with. The syn­tax is more fa­mil­iar and a lot more is done for you au­to­mat­i­cally. But this re­ally just makes Swift a higher-level lan­guage and it comes with the same trade­offs.

By de­fault, a Rust pro­gram is much faster than a Swift pro­gram. This is be­cause Rust is fast by de­fault, and lets you be slow, while Swift is easy by de­fault and lets you be fast.

Based on this, I would say both lan­guages have their uses. Rust is bet­ter for sys­tems and em­bed­ded pro­gram­ming. It’s bet­ter for writ­ing com­pil­ers and browser en­gines (Servo) and it’s bet­ter for writ­ing en­tire op­er­at­ing sys­tems.

Swift is bet­ter for writ­ing UI and servers and some parts of com­pil­ers and op­er­at­ing sys­tems. Over time I ex­pect to see the over­lap get big­ger.

There is a per­cep­tion that Swift is only a good lan­guage for Apple plat­forms. While this was once true, this is no longer the case and Swift is be­com­ing in­creas­ingly a good cross-plat­form lan­guage. Hell, Swift even com­piles to wasm, and the forks made by the swift-wasm team were merged back into Swift core ear­lier this year.

Swift on Windows is being used by The Browser Company to share code and bring the Arc browser to Windows. Swift on Linux has long been supported by Apple themselves in order to push “Swift on Server”. Apple is directly sponsoring the Swift on Server conference.

This year Embedded Swift was also an­nounced which is al­ready be­ing used on small de­vices like the Panic Playdate.

Swift web­site has been high­light­ing many of these pro­jects:

The Browser Company says that “Interoperability is Swift’s super power.”

And the Swift project has been trying to make working with Swift a great experience outside of Xcode, with projects like an open source LSP and funding for the VSCode extension.

Compile times are (like Rust) quite bad. There is some amount of fea­ture creep and the lan­guage is larger than it should be. Not all syn­tax feels fa­mil­iar. The pack­age ecosys­tem is­n’t nearly as rich as Rust.

But the “Swift is only for Apple platforms” line is an old and tired cliche at this point. Swift is already a cross-platform, ABI-stable language with no GC, automatic reference counting, and the option to opt into ownership for even more performance. Swift packages increasingly work on Linux. Foundation was ported to Swift and open sourced. It’s still early days for Swift as a good, more convenient Rust alternative for cross-platform development, but it is here now. It’s no longer a future to wait for.

...

Read the original on nmn.sh »

5 220 shares, 8 trendiness

Automatic programming

On my YouTube channel, for some time now I have been referring to the process of writing software using AI assistance (soon to become just “the process of writing software”, I believe) with the term “Automatic Programming”.

In case you did­n’t no­tice, au­to­matic pro­gram­ming pro­duces vastly dif­fer­ent re­sults with the same LLMs de­pend­ing on the hu­man that is guid­ing the process with their in­tu­ition, de­sign, con­tin­u­ous steer­ing and idea of soft­ware.

Please, stop saying “Claude vibe coded this software for me”. Vibe coding is the process of generating software using AI without being part of the process at all. You describe what you want in very general terms, and the LLM will produce whatever happens to be the first idea/design/code it would spontaneously generate, given the training, the specific sampling that happened to dominate in that run, and so forth. The vibe coder will, at most, report things not working or not in line with what they expected.

When the process is actual software production where you know what is going on, remember: it is the software *you* are producing. Moreover, remember that the pre-training data, while not the only part from which the LLM learns (RL has its big weight), was produced by humans, so we are not appropriating something else. We can pretend “AI generated code is ours”; we have the right to do so. Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, as if we are now linked in a collective mind, in a certain way.

That said, if vibe coding is the process of producing software without much understanding of what is going on (which has a place, and democratizes software production, so it is totally ok with me), automatic programming is the process of producing software that attempts to be high quality and to strictly follow the producer’s vision of the software (this vision is multi-level: it can go from deciding, at a higher level, exactly how certain things should be done, down to stepping in and telling the AI how to write a certain function), with the help of AI assistance. Also a fundamental part of the process is, of course, *what* to do.

I’m a pro­gram­mer, and I use au­to­matic pro­gram­ming. The code I gen­er­ate in this way is mine. My code, my out­put, my pro­duc­tion. I, and you, can be proud.

If you are not completely convinced, think of Redis. There is not much technical novelty in Redis; especially at its start, it was just a sum of basic data structures and networking code that every competent systems programmer could write. So why did it become a very useful piece of software? Because of the ideas and visions it contained.

Programming is now au­to­matic, vi­sion is not (yet).


...

Read the original on antirez.com »

6 215 shares, 10 trendiness

We have ipinfo at home or how to geolocate IPs in your CLI using latency

TLDR: I made a CLI tool that can re­solve an IP ad­dress to a coun­try, US state and even a city. https://​github.com/​ji­maek/​ge­olo­ca­tion-tool

It works well and con­firms ip­in­fo’s find­ings.

Recently, I read how ip­info fi­nally proved what most tech­ni­cal peo­ple as­sumed: VPN providers don’t ac­tu­ally main­tain a crazy amount of in­fra­struc­ture in hun­dreds of coun­tries. They sim­ply fake the IP ge­olo­ca­tion by in­ten­tion­ally pro­vid­ing wrong lo­ca­tion data to ARIN, RIPE, and Geo DB providers via ge­ofeeds.

They achieved their re­sults us­ing a novel ap­proach com­pared to other geo IP providers. Based on their blog and HackerNews com­ments, they built a large probe net­work and used it to trace and ping every (or most) IP ad­dresses on the in­ter­net.

This la­tency and hop data, most likely along with ad­vanced al­go­rithms and data cross-ref­er­ence, pro­vides a re­li­able way of cor­rectly de­tect­ing the phys­i­cal ge­olo­ca­tion of an IP ad­dress, with­out re­ly­ing on faked data avail­able in pub­lic sources.

This is a very in­ter­est­ing ap­proach that makes to­tal sense, and I’m sure their clients ap­pre­ci­ate it and heav­ily rely on it.

While I can’t ping every sin­gle IP ad­dress on the in­ter­net from hun­dreds of lo­ca­tions just yet, I can do it to a lim­ited sub­set us­ing Globalping. So I de­cided to try it out and see if I can repli­cate their re­sults and build a small tool to al­low any­one to do the same.

Globalping is an open-source, com­mu­nity-pow­ered pro­ject that al­lows users to self-host con­tainer-based probes. These probes then be­come part of our pub­lic net­work, which al­lows any­one to use them to run net­work test­ing tools such as ping and tracer­oute.

At the mo­ment, the net­work has more than 3000 probes, which in the­ory should be plenty to ge­olo­cate al­most any IP ad­dress down to a coun­try and even a US state level.

To au­to­mate and sim­plify this process, I made a lit­tle CLI tool us­ing the glob­alp­ing-ts li­brary. My orig­i­nal idea was sim­ple:

Ping it a few times per con­ti­nent to se­lect the con­ti­nent

Then ping the IP from many dif­fer­ent probes on that con­ti­nent

Group and sort the results; the country with the lowest latency should be the correct one (see the sketch after this list)

And as a bonus, re­peat the same process for USA states if the win­ning coun­try was the US
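The real tool is written in TypeScript against the globalping-ts library, so the following is only an illustrative sketch of the grouping-and-sorting step (in Swift, with made-up ProbeResult and rankCountries names, not the tool’s actual code): group per-probe results by country and rank countries by their lowest observed latency.

struct ProbeResult {
    let country: String
    let latencyMs: Double
}

// Rank candidate countries by their best (lowest) observed latency.
func rankCountries(_ results: [ProbeResult]) -> [(country: String, bestLatencyMs: Double)] {
    let best = Dictionary(grouping: results, by: { $0.country })
        .mapValues { $0.map(\.latencyMs).min() ?? .infinity }
    return best
        .sorted { $0.value < $1.value }
        .map { (country: $0.key, bestLatencyMs: $0.value) }
}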

Essentially, what I had to do was simply create a few measurements and pass the location I needed using Globalping’s “magic” field, which would automatically figure out what I was looking for and select a few pseudo-random probes that fit the location and limit.

Now ini­tially, I used ping with 2 pack­ets to run all mea­sure­ments as quickly as pos­si­ble, but I quickly re­al­ized it was­n’t a good idea as most net­works block ICMP traf­fic. Next, I tried switch­ing to TCP-based ping, which re­quired try­ing a few pop­u­lar ports to get it to work. I quickly re­al­ized this was too com­pli­cated and un­re­li­able and switched to tracer­oute.

It worked perfectly. Even though traceroute uses ICMP by default, it did not matter to me whether the target IP’s network allowed ICMP or not; I simply analyzed the latency of the last available hop. Even if you block ICMP, your upstream most likely allows it, and in most cases it’s located in the same country.

Of course, this means the resulting data is not 100% perfect. A better approach would be to analyze each IP using different methods, including TCP and UDP-based traceroute on different ports, and expand to the last few hops instead of just one. Maybe even try to figure out the location of the registered ASNs and use a weighting system in combination with public whois info in order to “vote” for the right location based on different inputs. Probably even mark low-certainty IPs to be retested with double the amount of probes. (end of rant)

But that’s some­thing for a com­mer­cial provider to fig­ure out, which it seems they did.

For continent detection, I decided to use just 5 probes per continent; the results were extremely accurate. Although for IPs just “on the border” of continents it might be ineffective; a higher number of probes would generate better results there. For this use case, it was good enough.

My home IP in cen­tral Europe was too easy to de­tect:

Phase 1: Detecting con­ti­nent…

North America: 137.18 ms

Europe: 32.39 ms

Asia: 174.54 ms

South America: 215.08 ms

Oceania: 244.15 ms

Africa: 156.83 ms

In phase 2, all we need to do is run a sin­gle mea­sure­ment with the win­ning con­ti­nent as the lo­ca­tion and a higher limit. Initially, I started with 250 probes with great ac­cu­racy.

Eventually, I de­cided to drop down to 50 as the de­fault. Based on my tests, the re­sults con­tin­ued to look re­ally good, and it would al­low the tool to be run even with­out au­then­ti­ca­tion, as the Globalping API al­lows 250 tests per hour per IP and 50 probes per mea­sure­ment.

Although I rec­om­mend reg­is­ter­ing for a free ac­count at https://​dash.glob­alp­ing.io/ and au­then­ti­cat­ing with a to­ken to get up to 500 tests per hour and run more tests.

Note: If you need more tests than that, you can ei­ther host a probe to gen­er­ate pas­sive cred­its to be used as tests, or do­nate via GitHub Sponsors. We will au­to­mat­i­cally de­tect it and credit your ac­count.

Phase 2: Detecting coun­try…

Measuring from 50 probes…

[████████████████████████████████████████] 100.0% 50/50 - Best: PL (7.29 ms)

Top 3 Locations:

1. Poland, EU 7.29 ms

2. Germany, EU 13.42 ms

3. Lithuania, EU 17.65 ms

SUMMARY

Location: Poland, EU

Minimum Latency: 7.29 ms

Confidence: Medium

Great, now we have a ba­sic IP-to-country re­solver that only takes a few sec­onds to pro­vide a re­sponse, and I did­n’t even have to un­der­stand or write any com­pli­cated math. Although I’m sure some­one smarter could use a for­mula to ge­olo­cate IPs with even fewer probes and higher ac­cu­racy.

For phase 3, we want to re­solve the US to a spe­cific state or ter­ri­tory, just like ip­info did, and luck­ily they even pro­vided a few sam­ple IPs and lo­ca­tions to bench­mark against dur­ing test­ing.

Again, this was as sim­ple as cre­at­ing a new mea­sure­ment with the USA as the lo­ca­tion. I used 50 probes as the de­fault limit and tested the NordVPN IP ad­ver­tised as Bahamas but re­solved to Miami by ip­info.

Phase 3: Detecting US state…

Measuring from 50 probes…

[████████████████████████████████████████] 100.0% 50/50 - Best: FL (0.45 ms)

Top 3 Locations:

1. Florida, USA 0.45 ms

2. South Carolina, USA 12.23 ms

3. Georgia, USA 15.01 ms

SUMMARY

Location: Florida, United States

Minimum Latency: 0.45 ms

Confidence: Very High

The tool agrees, Florida is the cor­rect lo­ca­tion. But how ac­cu­rate can this sys­tem be? Can we ex­pand it to show the city too?

Let’s make a new phase, which again, will sim­ply set the re­sult­ing coun­try or state as the lo­ca­tion and ex­tract the city of the probe with the low­est la­tency. Here, since there are too many pos­si­ble cities and towns per state and coun­try, I ex­pect the ac­cu­racy to be low and only point to the clos­est ma­jor hub. But in the­ory, this should be more than enough for use cases like rout­ing or per­for­mance de­bug­ging.

And here we go, the same result ipinfo got:

Phase 4: Detecting city…

Measuring from 36 probes…

[████████████████████████████████████████] 100.0% 36/36 - Best: Miami (0.00 ms)

Top 3 Locations:

1. Miami, Florida, USA 0.00 ms

2. West Palm Beach, Florida, USA 4.36 ms

3. Tampa, Florida, USA 5.85 ms

SUMMARY

Location: Miami, Florida, United States

Minimum Latency: 0.00 ms

Confidence: Very High

The current results are good but could be better. The main problem is with how the magic field works: when setting, for example, ‘Europe’ as the location, it tries to spread the tests across all European probes but does not guarantee that every single country is going to be included.

This re­sults in in­con­sis­ten­cies where a probe in the same coun­try as the tar­get IP was not se­lected, and so the tool as­sumes the IP is lo­cated in a dif­fer­ent neigh­bour­ing coun­try.

To fix this and make the re­sults more con­sis­tent, you would need to change the se­lec­tion logic and man­u­ally set every coun­try per con­ti­nent and US state. By pass­ing the full list of coun­tries/​states to the Globalping API, you en­sure that at least one probe in that lo­ca­tion is go­ing to be se­lected. Additionally, you fully con­trol the num­ber of probes per lo­ca­tion, which is very im­por­tant to con­trol the ac­cu­racy.

For ex­am­ple, North America tech­ni­cally con­tains 43 coun­tries and ter­ri­to­ries. This means you can’t just set a limit of one probe per coun­try, it is not enough to prop­erly un­der­stand the la­tency to the tar­get IP from the dis­pro­por­tion­ately larger USA. A bet­ter limit would be around 200 probes for the USA, 20 for Canada, and 10 for Mexico.

But the goal of this tool was to use a min­i­mum amount of probes to al­low unau­then­ti­cated users to test it out. The cur­rent ap­proach works great, it is sim­ple to im­ple­ment and it is very easy to con­trol the ac­cu­racy by sim­ply set­ting a higher limit of probes.

Overall, la­tency-based ge­olo­ca­tion de­tec­tion seems to be a great way to ver­ify the lo­ca­tion of any IP as long as you have enough van­tage points. It will most likely fall apart in re­gions with min­i­mal or no cov­er­age.

The tool itself is open source, and the GitHub README shows how to run it.

You can also use the --limit parameter to use more probes per phase. But be careful, as it applies the set value to all phases and this will very quickly eat through your limit. Check the full docs on GitHub.

Pull re­quests with im­prove­ments are wel­come!

Feel free to email me at d@globalping.io if you need some free credits to play around with.

And of course con­sider host­ing a probe, it’s as sim­ple as run­ning a con­tainer https://​github.com/​js­de­livr/​glob­alp­ing-probe

...

Read the original on blog.globalping.io »

7 210 shares, 22 trendiness

Scientist who helped eradicate smallpox dies at age 89

A leader in the global fight against smallpox and a champion of vaccine science, William Foege died last Saturday.

[Photo caption: The late physicians and health administrators William Foege (middle), J. Donald Millar (left) and J. Michael Lane (right), all of whom served in the Global Smallpox Eradication Program, in 1980.]

William Foege, a leader in the global fight to eliminate smallpox, has died. Foege passed away on Saturday at the age of 89, according to the Task Force for Global Health, a public health organization he co-founded.

Foege headed the U.S. Centers for Disease Control and Prevention’s Smallpox Eradication Program in the 1970s. Before the disease was officially eradicated in 1980, it killed around one in three people who were infected. According to the CDC, there have been no new smallpox cases since 1977.

“If you look at the simple metric of who has saved the most lives, he is right up there with the pantheon,” former CDC director Tom Frieden told the Associated Press. “Smallpox eradication has prevented hundreds of millions of deaths.”

Foege went on to lead the CDC and served as a senior medical adviser and senior fellow at the Bill & Melinda Gates Foundation. In 2012 then president Barack Obama awarded him the Presidential Medal of Freedom.

Foege was a vocal proponent of vaccines for public health, writing with epidemiologist Larry Brilliant in Scientific American in 2013 that the effort to eliminate polio “has never been closer” to success. “By working together,” they wrote, “we will soon relegate polio—alongside smallpox—to the history books.” Polio remains a “candidate for eradication,” according to the World Health Assembly.

And in 2025 Foege, alongside several other former CDC directors, spoke out against the policies of the current secretary of health and human services, Robert F. Kennedy, Jr. In a New York Times op-ed, they wrote that the top health official’s tenure was “unlike anything we had ever seen at the agency.”

In a statement, Task Force for Global Health CEO Patrick O’Carroll remembered Foege as an “inspirational” figure, both for early-career public health workers and veterans of the field. “Whenever he spoke, his vision and compassion would reawaken the optimism that prompted us to choose this field, and re-energize our efforts to make this world a better place,” O’Carroll said.

...

Read the original on www.scientificamerican.com »

8 195 shares, 10 trendiness

zpoint/CPython-Internals: Dive into CPython internals, trying to illustrate every detail of CPython implementation

* Watch this repo if you need to be notified when there's an update

This repos­i­tory is my notes/​blog for cpython source code

Trying to il­lus­trate every de­tail of cpython im­ple­men­ta­tion

# based on ver­sion 3.8.0a0

cd cpython

git reset --hard ab54b9a130c88f708077c2ef6c4963b632c132b3

The following contents are suitable for those who have python programming experience and are interested in the internals of the python interpreter; for those who need beginner or advanced material, please refer to awesome-python-books

I will only rec­om­mend what I’ve read

All kinds of con­tri­bu­tions are wel­come

* submit a pull request if you want to share any knowledge you know

...

Read the original on github.com »

9 172 shares, 7 trendiness

US authorities reportedly investigate claims that Meta can read encrypted WhatsApp messages

US au­thor­i­ties have re­port­edly in­ves­ti­gated claims that Meta can read users’ en­crypted chats on the WhatsApp mes­sag­ing plat­form, which it owns.

The reports follow a lawsuit filed last week, which claimed Meta can access “virtually all of WhatsApp users’ purportedly ‘private’ communications”.

Meta has denied the allegation, reported by Bloomberg, calling the lawsuit’s claim “categorically false and absurd”. It suggested the claim was a tactic to support the NSO Group, an Israeli firm that develops spyware used against activists and journalists, and which recently lost a lawsuit brought by WhatsApp.

The firm that filed last week’s lawsuit against Meta, Quinn Emanuel Urquhart & Sullivan, attributes the allegation to unnamed “courageous” whistleblowers from Australia, Brazil, India, Mexico and South Africa.

Quinn Emanuel is, in a sep­a­rate case, help­ing to rep­re­sent the NSO Group in its ap­peal against a judg­ment from a US fed­eral court last year, which or­dered it to pay $167m to WhatsApp for vi­o­lat­ing its terms of ser­vice in its de­ploy­ment of Pegasus spy­ware against more than 1,400 users.

“We’re pursuing sanctions against Quinn Emanuel for filing a meritless lawsuit that was designed purely to grab headlines,” said Carl Woog, a Meta spokesperson, in a statement. “This is the same firm that is trying to help NSO overturn an injunction that barred their operations for targeting journalists and government officials with spyware.”

Adam Wolfson, a partner at Quinn Emanuel, said: “Our colleagues’ defence of NSO on appeal has nothing to do with the facts disclosed to us and which form the basis of the lawsuit we brought for worldwide WhatsApp users.

“We look forward to moving forward with those claims and note WhatsApp’s denials have all been carefully worded in a way that stops short of denying the central allegation in the complaint — that Meta has the ability to read WhatsApp messages, regardless of its claims about end-to-end encryption.”

Steven Murdoch, professor of security engineering at UCL, said the lawsuit was “a bit strange”. “It seems to be going mostly on whistleblowers, and we don’t know much about them or their credibility,” he said. “I would be very surprised if what they are claiming is actually true.”

If WhatsApp were, indeed, reading users’ messages, this was likely to have been discovered by staff and would end the business, he said. “It’s very hard to keep secrets inside a company. If there was something as scandalous as this going on, I think it’s very likely that it would have leaked out from someone within WhatsApp.”

The Bloomberg article cites reports and interviews from officials within the US Department of Commerce in claiming that the US has investigated whether Meta could read WhatsApp messages. However, a spokesperson for the department called these assertions “unsubstantiated”.

WhatsApp bills it­self as an end-to-end en­crypted plat­form, which means that mes­sages can be read only by their sender and re­cip­i­ent, and are not de­coded by a server in the mid­dle.

This con­trasts with some other mes­sag­ing apps, such as Telegram, which en­crypt mes­sages be­tween a sender and its own servers, pre­vent­ing third par­ties from read­ing the mes­sages, but al­low­ing them — in the­ory — to be de­coded and read by Telegram it­self.

A senior executive in the technology sector told the Guardian that WhatsApp’s vaunted privacy “leaves much to be desired”, given the platform’s willingness to collect metadata on its users, such as their profile information, their contact lists, and who they speak to and when.

However, “the idea that WhatsApp can selectively and retroactively access the content of [end-to-end encrypted] individual chats is a mathematical impossibility”, he said.

Woog, of Meta, said: “We’re pursuing sanctions against Quinn Emanuel for filing a meritless lawsuit that was designed purely to grab headlines. WhatsApp’s encryption remains secure and we’ll continue to stand up against those trying to deny people’s right to private communication.”

...

Read the original on www.theguardian.com »

10 168 shares, 8 trendiness

Guix System First Impressions as a Nix User


Feel free to skip this sec­tion if you don’t re­ally care about back­sto­ries. I just fig­ured it makes sense to re­cap how and why one might start hav­ing an in­ter­est in de­clar­a­tive dis­tros be­fore tack­ling the main topic.

I’ve been a Linux-only user for about ten years now and, like many others, I too embarked on the arduous journey of distro-hopping. I started with Mint and when that felt too slow, I switched to Ubuntu. When Ubuntu felt too handholdy, I switched to Arch, which proved to be my main driver for well over five or so years. And when I couldn’t resist the Siren’s call, I moved on to Gentoo, thinking “surely harder is better”. Which resulted in severe burnout in a few months, so I capitulated and switched to Fedora, which was very stable and honestly an all-around excellent system. But once more, my interest was piqued, and (before today’s adventure) I finally switched to NixOS.

I’ve al­ways had a pass­ing in­ter­est to­wards Nix ever since I’ve first heard about it, but un­til fairly re­cently, I al­ways dis­missed it as a tool for DevOps guys. The syn­tax was weird, the need for re­pro­ducible en­vi­ron­ments seem­ingly ir­rel­e­vant, and stuff like the oft-rec­om­mended Nix Pills seemed any­thing but new­bie-friendly.

So then why would some­one like me, who’s so adamant about not need­ing Nix even­tu­ally choose to go all-in? I guess it was at first less about Nix be­ing bet­ter and just the rest be­ing worse.

Of the two big rea­sons for the switch, one was that I re­al­ized that hav­ing per-di­rec­tory en­vi­ron­ments for your pro­jects is ac­tu­ally a very handy thing to do when you like to toy around with many tech­nolo­gies. I used to gen­er­ate my other blog us­ing Jekyll and, no mat­ter which dis­tro I used, it was al­ways a pain in the neck to have a good Ruby en­vi­ron­ment set up. bundler in­stall did­n’t re­ally want to work with­out priv­i­leges and I was­n’t re­ally a fan of un­leash­ing sudo on it, but usu­ally that was the only way I could get things to work.

With Nix, how­ever, it was a mat­ter of just de­scrib­ing a few pack­ages in a shell and boom, Ruby in one folder, no Ruby (and thus no mess) every­where else. I was hooked! I started adding shell.nix files to all my lit­tle pro­jects, hell, I started plan­ning pro­jects by first adding a shell.nix with all the de­pen­den­cies I would rea­son­ably need.

The other rea­son, which ul­ti­mately ce­mented that I need to com­mit, was that I was get­ting tired of my in­stalled pack­ages slowly drift­ing out of con­trol. Sure, every pack­age man­ager has some method of list­ing what’s in­stalled, but these are usu­ally cum­ber­some and com­pletely ephemeral (in the sense that any list­ing be­comes in­valid the mo­ment you change any­thing).

With NixOS, the equa­tion is flipped on its head: No longer did I query the sys­tem to tell me what’s in­stalled and what’s not, it was now the sys­tem that worked based on files that I edit. The dif­fer­ence sounds small on pa­per, but for me it was an ex­tremely lib­er­at­ing feel­ing to know that I could edit my sys­tem con­fig­u­ra­tion in a ver­sion­able, ex­plicit, and cen­tral­ized way.

But NixOS isn’t the only declarative distro out there. In fact, GNU forked Nix fairly early and made their own spin called Guix, whose big innovation is that, instead of using the unwieldy Nix language, it uses Scheme. Specifically Guile Scheme, GNU’s sanctioned configuration language. I’ve been following Guix for a bit, but it never felt quite ready to me, with stuff like KDE being only barely supported and a lot of hardware not working out of the box.

However, now that (after three years) Guix an­nounced its 1.5.0 re­lease with a lot of stuff sta­bi­lized and KDE fi­nally a first-party cit­i­zen, I fig­ured now is the best time to give it a fresh shot. This post cap­tures my ex­pe­ri­ences from in­stal­la­tion to the first 3-4 days.

Plug your USB in, dd the file onto the drive, re­boot, noth­ing un­usual. If you’ve ever in­stalled a Linux sys­tem, it’s more of the same.

After selecting the pendrive in my BIOS settings, the monitor began to glow in a deep, radiant blue as the Guix System logo appeared on my screen… only to suddenly switch to a menacing red: my CPU’s integrated GPU is not supported by free firmware. A helpful popup gave me a gentle nudge about picking free hardware next time (buddy, have you seen the PC part prices these days?) and off I went into the installer proper.

The installer itself is refreshingly barebones, and I mean this in a positive way. It asks all the necessary questions and provides a nice basic configuration file, all done in a retro Ncurses-based TUI. I was really happy to see that, unlike during my last attempt at using Guix System in the early 2020s, KDE Plasma is now a first-party choice during installation. I never really vibed too much with GNOME and the other options didn’t appeal either, so the choice was obvious.

Now, I’m not sure if I just picked the worst pos­si­ble time or if the Guix servers were fac­ing un­usual load or what­ever may have hap­pened, but af­ter such a breeze of a setup, the mo­ment I pressed in­stall, my PC be­came un­us­able for the next 2.5 hours. Which is un­ac­cept­able for an in­stal­la­tion process these days in my opin­ion. I am lucky enough to live in a house­hold with fiber-op­tic in­ter­net, that merely shrugs at band­width of up to a gi­ga­byte per sec­ond and yet nearly all pack­ages down­loaded with a whop­ping 50 kilo­bytes per sec­ond, mean­ing even small-ish 5-10 megabyte pack­ages took long min­utes to down­load.

A re­boot later my is­sues only got worse.

I was as­sum­ing I’d get SDDM af­ter hav­ing cho­sen KDE Plasma, but (what a later, closer read of the man­ual made me re­al­ize is the ex­pected out­come for a de­fault con­fig) it was GDM that loaded in. I en­tered my name and pass­word, and I was greeted with the fa­mil­iar Plasma 6 spin­ner. The first hint that some­thing might be off was that it loaded a bit longer than usual, but I was not go­ing to get mad at wait­ing 10 sec­onds in­stead of 3. After all, I did just wait mag­ni­tudes longer to get here.

With prac­ti­cally noth­ing in­stalled be­yond the very ba­sics, I clicked on Konsole, hop­ing to start prod­ding around my con­fig and add some of my day to day apps. To my hor­ror, it opened in the top left cor­ner, with­out a ti­tle­bar and with­out any bor­ders. What’s more, no mat­ter what I did, I could­n’t move it. It also did­n’t show up on the menu bar, de­spite the ap­pli­ca­tion launcher still be­ing com­pletely us­able. At this point I was fairly ex­hausted by these an­tics, but I fig­ured,

Well, it’s a brand new re­lease, per­haps this just snuck in. Let’s give up­dat­ing a shot and see if that helps.

So I is­sued guix pull… The down­load whizzed by with speed quite un­ex­pected af­ter what I ex­pe­ri­enced with the in­staller… Only to crash into the brick wall that’s in­dex­ing. Okay, what­ever, an­other 10-12 min­utes down the drain, at least now I have newest ver­sion.

Except I didn't. Because, unlike Nix, the guix executable is not an omnipresent, unique thing that anyone and everyone on your PC uses. Not only does every user have their own instance, but if you don't issue a certain set of commands, you won't start using the new version despite having updated it.

To Guix's credit, the CLI does scream at you to update your environment or else you'll keep using the old version, but I still find this system very disorienting compared to Nix. I'm certain experienced Guixheads are long past being tripped up by this sort of stuff and might even struggle to remember that there was a time they had to do these special steps too, but as a new user it felt a bit rough, especially considering this is Guix System, i.e. the system whose whole purpose is to integrate Guix as deeply as it can.

Back to the issue at hand. I issued sudo -s and guix pull-ed again. Once more, 10-12 minutes passed indexing. But at least I could finally call guix system reconfigure /etc/config.scm. Interestingly, things were much faster this time around; I saw speeds of up to 30-50 Mbps. Before long the system was updated to the newest commit and I rebooted with high hopes.

High hopes that were immediately dashed when Plasma loaded in the same messed-up way. At this point I started to suspect this might be an issue with the GPU driver, so I enabled the LXQt desktop environment and rebooted once more. Thankfully that one worked like a charm, and I was able to boot up both Emacs (editing Scheme with GNU Nano is a pain I do not wish on anyone) and LibreWolf (Firefox's de-Mozilla-d variant).

Not having found anything too useful in the docs, I decided to make my problem someone else's, so I fired up ERC and connected to Libera.chat's #guix channel. After around half an hour of waiting, a user by the name of Rutherther stepped up and offered some help. We were able to figure out that Nouveau wasn't able to drive my GPU (an RTX 5070), so his recommendation was that I try booting with nomodeset. I did, but it sadly didn't help much either.

At this point I was out of ideas. Ideas for solving this with pure Guix System, that is. There was still one option I had wanted to avoid as long as I could, but alas, it seemed like the only option that still had a realistic chance of working.

Enter Nonguix, the Mr. Hyde to Guix's Dr. Jekyll, the shady guy who offers you a hit and says the first time's free, the… Erm, in a nutshell, it's the repository of non-free application and driver packages for Guix System, basically. Interestingly enough, by Guix's own findings, about 64% of users use the Nonguix channel, which is perhaps not literally "everyone", but it does paint a picture that there is still stuff out there you simply cannot replace with FOSS software yet.

Enabling the repo wasn't exactly difficult. You just paste the short excerpt from above (also found in the README) into your ~/.config/guix/channels.scm and /etc/guix/channels.scm files, guix pull, let it index to its heart's content again, and then you have access to all that is nasty (yet occasionally useful) in the world.
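For reference, the channel declaration in question looks roughly like this; take the exact URL, introduction commit, and signing fingerprint from the Nonguix README rather than from me, as I'm only reproducing the shape here:

    ;; ~/.config/guix/channels.scm and /etc/guix/channels.scm
    (cons* (channel
            (name 'nonguix)
            (url "https://gitlab.com/nonguix/nonguix") ;check the README for the current home
            (introduction
             (make-channel-introduction
              "<introduction commit from the README>"
              (openpgp-fingerprint
               "<maintainer fingerprint from the README>"))))
           %default-channels)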

I figured perhaps if Linux-libre and its free firmware couldn't deal with my GPU, then surely Linux proper with its binary blobs could. Hell, for good measure I threw in the NVIDIA transform, which is supposed to automagically translate all dependencies to use the proprietary drivers.

Turns out my eagerness was a mistake. Not only did the process take yet another half hour (if not more; I stopped counting), but upon reboot all I was met with was a kernel panic about the driver not being able to cope with the GPU it found, and a massive spew of fsck logs.

With no better ideas in mind, I took out my pendrive again and burned Nonguix's own pre-built ISO onto it using my partner's PC. While it ultimately did get me a working system, this version has three unfortunate hindrances:

It was built in 2022, far before Guix's migration to Codeberg, meaning it still attempts to pull content from the unfathomably slow GNU Savannah mirror. I had to manually override my channels.scm to point at the Codeberg repo instead (a sketch follows this list), but with no easy means of finding its "channel introduction", I had to pass --disable-authentication to Guix when updating my system. A bit scary, but I trust the Codeberg repo.

Because of its age, I got a lot of somewhat intimidating errors about hardware not being recognized and other stuff I couldn't even decipher, but ultimately the system booted to the installer without issue.

For some reason, while the installer itself does include Nonguix stuff, it does not include the repo in the resulting channels files, nor the project's substitute server. The README has a warning about this, but if you happen to miss it, you could accidentally install a non-Nonguix Guix System (say that three times fast).
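For the first of these, the override boils down to something like the following (a sketch, assuming the post-migration repository lives at codeberg.org/guix/guix, so double-check the URL before trusting it):

    ;; channels.scm pointing the main channel at Codeberg instead of Savannah.
    ;; The Nonguix channel from earlier goes alongside this entry.
    (list (channel
           (name 'guix)
           (url "https://codeberg.org/guix/guix.git")
           (branch "master")))

followed by guix pull --disable-authentication, since without a known channel introduction Guix has no way to authenticate the commits.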

None of these were particularly hard to fix, however, and soon enough I was back where I started. That is to say, in a nomodeset X11 session, except this time running i3, as LXQt wasn't an available option on an installer this old. There was certainly a bit of a hacker-ish vibe to messing with code files in an environment like that, but honestly, I was looking forward much more to finally having a usable desktop.

Having learned from my hastiness, this time I was smarter. I only enabled the full kernel and firmware blobs, without going anywhere near the NVIDIA transform. I issued another guix system reconfigure and, after having time for another tea session, my update was finally finished.
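For the curious, the relevant part of the config is just the handful of fields the Nonguix README tells you to set; roughly:

    ;; /etc/config.scm: full Linux kernel plus firmware blobs, per the Nonguix README.
    (use-modules (gnu)
                 (nongnu packages linux)
                 (nongnu system linux-initrd))

    (operating-system
      (kernel linux)                    ;mainline kernel instead of linux-libre
      (initrd microcode-initrd)         ;CPU microcode updates
      (firmware (list linux-firmware))  ;the full firmware collection
      ;; ... the rest stays as generated by the installer ...
      )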

Obviously there is little point in throwing Guix System on my PC and declaring success. I wanted to be able to at least reproduce the kind of workflow I'm used to on NixOS. For that, I need the following:

* A browser: preferably Firefox, as I'm not a huge fan of Chrome / Chromium,

* Dev environments: for Rust, Zig, Scheme, and TypeScript (with the option for more, if possible),

* Emacs: I do almost all my text editing in it these days, falling back to Neovim for quick tasks,

* Steam: for the very rare occasions I want to game,

* NVIDIA drivers: I prefer to offload day-to-day usage to my CPU's integrated GPU, as it cuts my energy usage in half.

Of these, it was obvious that two would be relatively hard, and one more thing I wanted, Discord, outright "impossible". The hard two are Steam and the drivers (both are non-free and thus not in Guix's default repos), while Discord isn't packaged even in the non-free repo. But I was ready to compromise a little bit, since I'm requesting stuff that's explicitly against Guix's goals.

Figure 6: My desktop running Emacs and Wezterm (the latter packaged by me).

While there have been occasional bumps and hitches along the way, I must say I'm very impressed with Guix System so far. Let's go through the list in order:

* Browser: So far I'm really enjoying LibreWolf. It feels a lot snappier than Firefox, and I'm really baffled by how much speed I was apparently missing out on.

* E-mails: I installed Icedove, which is basically just Thunderbird without Mozilla branding. It works as expected.

* Office suite: LibreOffice is available as expected. Not much to say about it. I guess it's interesting that Guix isn't following the usual -still / -fresh packaging scheme, but I don't really mind not having a cutting-edge version of an office suite :)

* Dev environments: I've only briefly toyed with development environments so far, but for simple use cases it might be even easier than shell.nix: you don't need any sort of ceremony, just a manifest.scm file with a (specifications->manifest …) form inside, and you have a dev env ready to go (see the sketch after this list).

* Emacs: Installed just fine. I had to install emacs-vterm to make Vterm work, but all that took was the very simple process of adding the library to my home configuration and then referencing it in my Emacs config as per this Reddit post.

* Discord: I decided to just use Discord's browser version, which works just as well (if not better). It's trading a tiny bit of convenience in return for not having to figure out how to manually add a package for it from some random third-party source. From what I've read elsewhere, Flatpak is also an option, but I prefer having just one package manager at a time.

* Steam: Installed shockingly easily; I really have to give props to the Nonguix team. I tested Portal 2 with the Nouveau driver, and it is a little disheartening to see a 15-year-old game lag, but I understand people's hands are tied when it comes to the free drivers. After I managed to install the proprietary drivers, I was able to play even Portal RTX, which is something I never managed to get working on NixOS.

* NVIDIA drivers: This time I actually read the docs properly, and it didn't take long to realize that the initial problem that made my previous install unbootable was, of course, found between the chair and the keyboard. This time, after making sure I enabled the open drivers and kernel mode-setting, I crossed my fingers, issued a reconfigure, and it worked beautifully!
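Since I mentioned manifests above, here's roughly what such a manifest.scm looks like; the package names are illustrative, so check them with guix search before copying:

    ;; manifest.scm -- entered with `guix shell -m manifest.scm`
    (specifications->manifest
     '("rust"     ;package names here are examples; verify with `guix search`
       "zig"
       "guile"
       "node"))

guix shell then drops you into a throwaway environment with just those packages available, much like nix-shell does with a shell.nix.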

In a nutshell I'm very positively surprised by Guix System. After struggling so much with it years ago, this time everything just clicked after a much shorter battle. So much so that I'm happy to make it my daily driver for the foreseeable future. Beyond the slightly slower execution speed, I'm getting a comparable experience to NixOS, with all the usual pros a declarative environment brings and without having to put up with Nixlang.

My only recurring issues so far are the occasional slow download speeds, and that I have to start my kernel with nomodeset because otherwise the graphical environment crashes without my being able to switch to a TTY. It's a bummer, but honestly, I'm not too bothered by it so far. I trust a driver update will fix it soon enough and, if not, it's not exactly difficult to throw a kernel parameter into your config.
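For what it's worth, throwing in that parameter is a one-liner inside the operating-system form, along these lines (keeping the default arguments alongside it):

    ;; In /etc/config.scm, inside the operating-system declaration:
    (kernel-arguments (cons "nomodeset" %default-kernel-arguments))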

I'm hoping to do a follow-up post about packaging in Guix, because I've been dipping my toes into it by trying to package Wezterm, and the journey there was similarly arduous to installing the system itself.

Till then, thank you for reading and see you next time!

The stuff you see below is all I managed to write down mid-process. Some of it I threw into the file from Nano, some from half-broken X11 sessions. Because of this, it's not exactly well-edited, but I hope it provides a glimpse into my mind at the time.

...

Read the original on nemin.hu »
