10 interesting stories served every morning and every evening.




1 748 shares, 32 trendiness

Mobile carriers can get your GPS location

In iOS 26.3, Apple introduced a new privacy feature which limits “precise location” data made available to cellular networks via cell towers. The feature is only available to devices with Apple’s in-house modem introduced in 2025. The announcement says:

Cellular net­works can de­ter­mine your lo­ca­tion based on which cell tow­ers your de­vice con­nects to.

This is well-known. I have served on a jury where the pros­e­cu­tion ob­tained lo­ca­tion data from cell tow­ers. Since cell tow­ers are sparse (especially be­fore 5G), the ac­cu­racy is in the range of tens to hun­dreds of me­tres.

But this is not the whole truth, be­cause cel­lu­lar stan­dards have built-in pro­to­cols that make your de­vice silently send GNSS (i.e. GPS, GLONASS, Galileo, BeiDou) lo­ca­tion to the car­rier. This would have the same pre­ci­sion as what you see in your Map apps, in sin­gle-digit me­tres.

In 2G and 3G this is called Radio Resources LCS Protocol (RRLP)

So the network simply asks “tell me your GPS coordinates if you know them” and the phone will respond.

In 4G and 5G this is called LTE Positioning Protocol (LPP)

RRLP, RRC (its counterpart in 3G UMTS), and LPP are natively control-plane positioning protocols. This means that they are transported in the inner workings of cellular networks and are practically invisible to end users.
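Conceptually, the exchange is just a request/response pair. A toy sketch (the message and field names below are invented for illustration; real LPP messages are ASN.1-encoded structures carried in the control plane):

```python
# Toy model of a control-plane positioning exchange such as LPP.
# All names are illustrative; this is not the real protocol encoding.

def handle_location_request(device, request):
    """What a baseband might do when the network requests a position."""
    if request["method"] == "agnss" and device["gnss_fix"] is not None:
        lat, lon = device["gnss_fix"]
        # Precise GNSS coordinates leave the device here.
        return {"type": "ProvideLocationInformation", "lat": lat, "lon": lon}
    # Otherwise fall back to coarser, network-visible data.
    return {"type": "ProvideLocationInformation", "cell_id": device["cell_id"]}

device = {"gnss_fix": (60.1699, 24.9384), "cell_id": 0x1A2B}
request = {"type": "RequestLocationInformation", "method": "agnss"}
print(handle_location_request(device, request))
```

The point of the sketch: the device answers whatever the network asks, and the user never sees the exchange.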

It’s worth noting that GNSS location is never meant to leave your device. GNSS coordinates are calculated entirely passively; your device doesn’t need to send a single bit of information. Using GNSS is like finding out where you are by reading a road sign: you don’t have to tell anyone else you read a road sign, anyone can read a road sign, and the people who put up road signs don’t know who read which road sign when.

These ca­pa­bil­i­ties are not se­crets but some­how they have mostly slid un­der the radar of the pub­lic con­scious­ness. They have been used in the wild for a long time, such as by the DEA in the US in 2006:

[T]he DEA agents pro­cured a court or­der (but not a search war­rant) to ob­tain GPS co­or­di­nates from the couri­er’s phone via a ping, or sig­nal re­quest­ing those co­or­di­nates, sent by the phone com­pany to the phone.

And by Shin Bet in Israel, which tracks every­one every­where all the time:

The GSS Tool was based on cen­tral­ized cel­lu­lar track­ing op­er­ated by Israel’s General Security Services (GSS). The tech­nol­ogy was based on a frame­work that tracks all the cel­lu­lar phones run­ning in Israel through the cel­lu­lar com­pa­nies’ data cen­ters. According to news sources, it rou­tinely col­lects in­for­ma­tion from cel­lu­lar com­pa­nies and iden­ti­fies the lo­ca­tion of all phones through cel­lu­lar an­tenna tri­an­gu­la­tion and GPS data.

Notably, the Israeli gov­ern­ment started us­ing the data for con­tact trac­ing in March 2020, only a few weeks af­ter the first Israeli COVID-19 case. An in­di­vid­ual would be sent an SMS mes­sage in­form­ing them of close con­tact with a COVID pa­tient and re­quired to quar­an­tine. This is good ev­i­dence that the lo­ca­tion data Israeli car­ri­ers are col­lect­ing are far more pre­cise than what cell tow­ers alone can achieve.

A ma­jor caveat is that I don’t know if RRLP and LPP are the ex­act tech­niques, and the only tech­niques, used by DEA, Shin Bet, and pos­si­bly oth­ers to col­lect GNSS data; there could be other pro­to­cols or back­doors we’re not privy to.

Another un­known is whether these pro­to­cols can be ex­ploited re­motely by a for­eign car­rier. Saudi Arabia has abused SS7 to spy on peo­ple in the US, but as far as I know this only lo­cates a de­vice to the cov­er­age area of a Mobile Switching Center, which is less pre­cise than cell tower data. Nonetheless, given the abysmal cul­ture, com­pe­tency, and in­tegrity in the tele­com in­dus­try, I would not be shocked if it’s pos­si­ble for a state ac­tor to ob­tain the pre­cise GNSS co­or­di­nates of any­one on earth us­ing a phone num­ber/​IMEI.

Apple made a good step in iOS 26.3 to limit at least one vector of mass surveillance, enabled by having full control of the modem silicon and firmware. They should now also allow users to disable GNSS location responses to mobile carriers entirely, and notify the user when such requests are made to their device.

...

Read the original on an.dywa.ng »

2 680 shares, 30 trendiness

Finland looks to end "uncontrolled human experiment" with Australia-style ban on social media

Children un­der the age of 15 might be delet­ing their apps if the gov­ern­men­t’s plans are passed into law.

Prime Minister Petteri Orpo (NCP), the Finnish pub­lic health au­thor­ity THL and two-thirds of Finns are in favour of ban­ning or re­strict­ing the use of so­cial me­dia by un­der-15s.


Lunch break at the Finnish International School of Tampere (FISTA) is a bois­ter­ous time.

The yard is filled with chil­dren — rang­ing from grades 1 to 9, or ages 6 to 16 — run­ning around, shout­ing, play­ing foot­ball, shoot­ing bas­ket­ball hoops, do­ing what kids do.

And there’s not a sin­gle screen in sight.

FISTA has taken ad­van­tage of the law change, brought in last August, which al­lows schools to re­strict or com­pletely ban the use of mo­bile phones dur­ing school hours. At FISTA, this means no phones at all un­less specif­i­cally used for learn­ing in the class­room.

“We’ve seen that cutting down on the possibilities for students to use their phones, during the breaks for instance, has spurred a lot of creativity,” FISTA vice principal Antti Koivisto notes.

“They’re more active, doing more physical things like playing games outdoors or taking part in the organised break activities or just socialising with each other.”

With the smart­phone re­stric­tion in schools widely con­sid­ered to have been a suc­cess, Finland’s gov­ern­ment has now set its sights on so­cial me­dia plat­forms.

Prime Minister Petteri Orpo (NCP) said ear­lier this month that he sup­ports ban­ning the use of so­cial me­dia by chil­dren un­der the age of 15.

“I am deeply concerned about the lack of physical activity among children and young people, and the fact that it is increasing,” Orpo said at the time.

And there is a grow­ing groundswell of sup­port for Finland in­tro­duc­ing such a ban. Two-thirds of re­spon­dents to a sur­vey pub­lished ear­lier this week said they back a ban on so­cial me­dia for un­der-15s. This is a near 10 per­cent­age point jump com­pared to a sim­i­lar sur­vey car­ried out just last sum­mer.

The concerns over social media, and in particular the effects on children, have been well-documented — but Finnish researcher Silja Kosola’s recent description of the phenomenon as an “uncontrolled human experiment” has grabbed people’s attention once again.

Kosola, an as­so­ci­ate pro­fes­sor in ado­les­cent med­i­cine, has re­searched the im­pact of so­cial me­dia on young peo­ple, and tells Yle News that the con­se­quences are not very well un­der­stood.

“We see a rise in self-harm and especially eating disorders. We see a big separation in the values of young girls and boys, which is also a big problem in society,” Kosola explains.

In the video be­low, Silja Kosola ex­plains the detri­men­tal ef­fects that ex­ces­sive use of so­cial me­dia can have on young peo­ple.

She fur­ther notes that cer­tain as­pects of Finnish cul­ture — such as the in­de­pen­dence and free­dom granted to chil­dren from a young age — have un­wit­tingly ex­ac­er­bated the ill ef­fects of so­cial me­dia use.

“We have given smartphones to younger people more than anywhere else in the world. Just a couple of years ago, about 95 percent of first graders had their own smartphone, and that hasn’t happened anywhere else,” she says.

Since 10 December last year, chil­dren un­der the age of 16 in Australia have been banned from us­ing so­cial me­dia plat­forms such as TikTok, Snapchat, Facebook, Instagram and YouTube.

Prime Minister Anthony Albanese be­gan draft­ing the leg­is­la­tion af­ter he re­ceived a heart­felt let­ter from a griev­ing mother who lost her 12-year-old daugh­ter to sui­cide.

Although Albanese has never revealed the details of the letter, he told public broadcaster ABC that it was “obvious social media had played a key role” in the young girl’s death.

The leg­is­la­tion aims to shift the bur­den away from par­ents and chil­dren and onto the so­cial me­dia com­pa­nies, who face fines of up to 49.5 mil­lion Australian dol­lars (29 mil­lion eu­ros) if they con­sis­tently fail to keep kids off their plat­forms.

Clare Armstrong, ABC’s chief digital political correspondent, told Yle News that the initial reaction to the roll-out has been “some confusion but no little relief”.

“The government often talks about this law as being a tool to help parents and other institutions enforce and start conversations about tech and social media in ways that before, they couldn’t,” she says.

Although it is still early days, as the ban has only been in force for about six weeks, Armstrong adds that the early in­di­ca­tors have been good.

ABC jour­nal­ist Clare Armstrong ex­plains in the video be­low how chil­dren in Australia have been spend­ing their time since the so­cial me­dia ban was in­tro­duced.

However, she adds a note of cau­tion to any coun­tries — such as Finland — look­ing to em­u­late the Australian model, not­ing that com­mu­ni­ca­tion is key.

“Because you can write a very good law, but if the public doesn’t understand it, and if it can’t be enforced at that household level easily, then it’s bound to fail,” Armstrong says.

Seona Candy, an Australian liv­ing in Helsinki for over eight years, has been keenly fol­low­ing the events in her home­land since the so­cial me­dia ban came into ef­fect in December.

She has heard anecdotally that if kids find themselves blocked from one platform, they just set up an account on another, “ones that maybe their parents don’t even know exist”.

“And this is then much, much harder, because those platforms don’t have parental controls, so they don’t have those things already designed into them that the more mainstream platforms do,” Candy says.

Because of this issue, and others she has heard about, she warns against Finland introducing like-for-like legislation based around Australia’s “reactive, knee-jerk” law change.

“I think the Finnish government should really invest in digital education, and digital literacy, and teach kids about digital safety. Finland is world-famous for education, and for media literacy. Play to your strengths, right?”

The All Points North pod­cast asked if Finland should in­tro­duce a sim­i­lar ban on so­cial me­dia as in Australia. You can lis­ten to the episode via this em­bed­ded player, on Yle Areena, via Apple, Spotify or wher­ever you get your pod­casts.

...

Read the original on yle.fi »

3 369 shares, 66 trendiness

Open Source Zero Trust Networking

...

Read the original on netbird.io »

4 305 shares, 13 trendiness

Swift is a more convenient Rust

Rust is one of the most loved languages out there, is fast, and has an amazing community. Rust invented the concept of ownership as a solution to memory management issues without resorting to something slower like garbage collection or reference counting. But, when you don’t need to be quite as low-level, it gives you utilities such as Rc, Arc and Cow to do reference counting and “clone-on-write” in your code. And, when you need to go lower-level still, you can use the unsafe system and access raw C pointers.

Rust also has a bunch of awe­some fea­tures from func­tional lan­guages like tagged enums, match ex­pres­sions, first class func­tions and a pow­er­ful type sys­tem with gener­ics.

Rust has an LLVM-based com­piler which lets it com­pile to na­tive code and WASM.

I’ve also been do­ing a bit of Swift pro­gram­ming for a cou­ple of years now. And the more I learn Rust, the more I see a re­flec­tion of Swift. (I know that Swift stole a lot of ideas from Rust, I’m talk­ing about my own per­spec­tive here).

Swift, too, has awe­some fea­tures from func­tional lan­guages like tagged enums, match ex­pres­sions and first-class func­tions. It too has a very pow­er­ful type sys­tem with gener­ics.

Swift too gives you complete type-safety without a garbage collector. By default, everything is a value type with “copy-on-write” semantics. But when you need extra speed you can opt into an ownership system and “move” values to avoid copying. And if you need to go even lower level, you can use the unsafe system and access raw C pointers.

Swift has an LLVM-based com­piler which lets it com­pile to na­tive code and WASM.

You’re probably feeling like you just read the same paragraphs twice. This is no accident. Swift is extremely similar to Rust and has most of the same feature-set. But there is a very big difference in perspective. If you consider the default memory model, this will start to make a lot of sense.

Rust is a low-level sys­tems lan­guage at heart, but it gives you the tools to go higher level. Swift starts at a high level and gives you the abil­ity to go low-level.

The most obvious example of this is the memory management model. Swift uses value types by default with copy-on-write semantics. This is the equivalent of using Cow<> for all your values in Rust. But defaults matter. Rust makes it easy to use “moved” and “borrowed” values but requires extra ceremony to use Cow<> values, as you need to “unwrap” them with .to_mut() to actually mutate the value within. Swift makes these copy-on-write values easy to use and instead requires extra ceremony to use borrowing and moving. Rust is faster by default; Swift is simpler and easier by default.
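For illustration, a minimal sketch of that Cow ceremony (the `shout` helper and its logic are made up for this example):

```rust
use std::borrow::Cow;

// Return the input unchanged when possible; allocate only when we must.
fn shout(input: &str) -> Cow<'_, str> {
    if input.chars().all(|c| !c.is_lowercase()) {
        Cow::Borrowed(input) // no allocation, just a borrow
    } else {
        Cow::Owned(input.to_uppercase()) // clone-on-write kicks in
    }
}

fn main() {
    // The "ceremony": a Cow must be unwrapped with to_mut() before mutation.
    let mut s: Cow<str> = Cow::Borrowed("hello");
    s.to_mut().push('!');
    assert_eq!(s, "hello!");
    assert!(matches!(shout("HI!"), Cow::Borrowed(_)));
    assert_eq!(shout("hi"), "HI");
}
```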

Swift’s syn­tax is a mas­ter­class in tak­ing awe­some func­tional lan­guage con­cepts and hid­ing them in C-like syn­tax to trick the de­vel­op­ers into ac­cept­ing them.

Consider match state­ments. This is what a match state­ment looks like in Rust:
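For example, dispatching on a shape enum (the names here are illustrative, not from the original post):

```rust
enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
}

// `match` is an expression: each arm produces the function's return value.
fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
    }
}

fn main() {
    let r = Shape::Rectangle { width: 3.0, height: 4.0 };
    println!("{}", area(&r)); // prints 12
}
```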

Here’s how that same code would be writ­ten in Swift:
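A sketch of the Swift version, using the same illustrative shape-area example:

```swift
enum Shape {
    case circle(radius: Double)
    case rectangle(width: Double, height: Double)
}

// The `switch` must be exhaustive and pattern-matches on associated values.
func area(_ shape: Shape) -> Double {
    switch shape {
    case .circle(let radius):
        return Double.pi * radius * radius
    case .rectangle(let width, let height):
        return width * height
    }
}

print(area(.rectangle(width: 3, height: 4))) // prints 12.0
```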

Swift doesn’t have a match statement or expression. It has a switch statement that developers are already familiar with. Except this switch statement is actually not a switch statement at all. It’s an expression. It doesn’t “fall through”. It does pattern matching. It’s just a match expression with a different name and syntax.

In fact, Swift treats enums as more than just types and lets you put meth­ods di­rectly on it:
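For example (an illustrative enum, not from the original post):

```swift
enum Direction {
    case north, south, east, west

    // Methods can live directly on the enum type.
    func opposite() -> Direction {
        switch self {
        case .north: return .south
        case .south: return .north
        case .east: return .west
        case .west: return .east
        }
    }
}

let back = Direction.north.opposite() // .south
```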

Rust doesn’t have null, but it does have None. Swift has a nil, but it’s really just a None in hiding. Instead of an Option, Swift lets you use T?, but the compiler still forces you to check that the value is not nil before you can use it.

You get the same safety with more con­ve­nience since you can do this in Swift with an op­tional type:

let val: T?
if let val {
    // val is now of type `T`.
}

Also, you’re not forced to wrap every value with a Some(val) be­fore re­turn­ing it. The Swift com­piler takes care of that for you. A T will trans­par­ently be con­verted into a T? when needed.
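Both points in a small sketch (the `firstEven` helper is hypothetical):

```swift
// Returns a plain Int inside the loop, yet the declared type is Int?;
// the compiler wraps the value into the optional for you.
func firstEven(_ xs: [Int]) -> Int? {
    for x in xs where x % 2 == 0 {
        return x // no Some(...) wrapping needed
    }
    return nil
}

if let v = firstEven([1, 3, 4]) {
    print(v) // prints 4
}
```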

Rust does­n’t have try-catch. Instead it has a Result type which con­tains the suc­cess and er­ror types.

Swift does­n’t have a try-catch ei­ther, but it does have do-catch and you have to use try be­fore call­ing a func­tion that could throw. Again, this is just de­cep­tion for those de­vel­op­ers com­ing from C-like lan­guages. Swift’s er­ror han­dling works ex­actly like Rust’s be­hind the scenes, but it is hid­den in a clever, fa­mil­iar syn­tax.

func usesErrorThrowingFunction() throws {
    let x = try thisFnCanThrow()
}

func handlesErrors() {
    do {
        let x = try thisFnCanThrow()
    } catch let err {
        // handle the `err` here.
    }
}

This is very similar to how Rust lets you use ? at the end of statements to automatically forward errors, but you don’t have to wrap your success values in Ok().

There are many com­mon prob­lems that Rust’s com­piler will catch at com­pile time and even sug­gest so­lu­tions for you. The ex­am­ple that por­trays this well is self-ref­er­enc­ing enums.

Consider an enum that represents a tree. Since it is a recursive type, Rust will force you to use something like Box<> for referencing a type within itself.
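A sketch of what that looks like (illustrative names):

```rust
// Without Box, `Tree` would have infinite size; boxing the recursive
// arms gives the compiler a known, fixed-size layout.
enum Tree {
    Leaf(i32),
    Node(Box<Tree>, Box<Tree>),
}

fn sum(t: &Tree) -> i32 {
    match t {
        Tree::Leaf(v) => *v,
        Tree::Node(l, r) => sum(l) + sum(r),
    }
}

fn main() {
    let t = Tree::Node(Box::new(Tree::Leaf(1)), Box::new(Tree::Leaf(2)));
    println!("{}", sum(&t)); // prints 3
}
```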

This makes the problem explicit and forces you to deal with it directly. Swift is a little more automatic.
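The Swift equivalent of the same illustrative tree:

```swift
// `indirect` asks the compiler to insert the indirection (heap
// allocation) for the recursive cases automatically.
indirect enum Tree {
    case leaf(Int)
    case node(Tree, Tree)
}

func sum(_ t: Tree) -> Int {
    switch t {
    case .leaf(let v): return v
    case .node(let l, let r): return sum(l) + sum(r)
    }
}

print(sum(.node(.leaf(1), .leaf(2)))) // prints 3
```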

Note that you still have to annotate this enum with the indirect keyword to indicate that it is recursive. But once you’ve done that, Swift’s compiler takes care of the rest. You don’t have to think about Box<> or Rc<>. The values just work normally.

Swift was designed to replace Objective-C and needed to be able to interface with existing code. So, it has made a lot of pragmatic choices that make it a much less “pure” and “minimalist” language. Swift is a pretty big language compared to Rust and has many more features built in. However, Swift is designed with “progressive disclosure” in mind, which means that just as soon as you think you’ve learned the language, a little more of the iceberg pops out of the water.

Here are just some of the lan­guage fea­tures:

Swift is a far eas­ier lan­guage to get started and pro­duc­tive with. The syn­tax is more fa­mil­iar and a lot more is done for you au­to­mat­i­cally. But this re­ally just makes Swift a higher-level lan­guage and it comes with the same trade­offs.

By de­fault, a Rust pro­gram is much faster than a Swift pro­gram. This is be­cause Rust is fast by de­fault, and lets you be slow, while Swift is easy by de­fault and lets you be fast.

Based on this, I would say both lan­guages have their uses. Rust is bet­ter for sys­tems and em­bed­ded pro­gram­ming. It’s bet­ter for writ­ing com­pil­ers and browser en­gines (Servo) and it’s bet­ter for writ­ing en­tire op­er­at­ing sys­tems.

Swift is bet­ter for writ­ing UI and servers and some parts of com­pil­ers and op­er­at­ing sys­tems. Over time I ex­pect to see the over­lap get big­ger.

There is a per­cep­tion that Swift is only a good lan­guage for Apple plat­forms. While this was once true, this is no longer the case and Swift is be­com­ing in­creas­ingly a good cross-plat­form lan­guage. Hell, Swift even com­piles to wasm, and the forks made by the swift-wasm team were merged back into Swift core ear­lier this year.

Swift on Windows is being used by The Browser Company to share code and bring the Arc browser to Windows. Swift on Linux has long been supported by Apple themselves in order to push “Swift on Server”. Apple is directly sponsoring the Swift on Server conference.

This year Embedded Swift was also an­nounced which is al­ready be­ing used on small de­vices like the Panic Playdate.

The Swift website has been highlighting many of these projects:

The Browser Company says that “Interoperability is Swift’s superpower”.

And the Swift project has been trying to make working with Swift a great experience outside of Xcode, with projects like an open source LSP and funding the VSCode extension.

Compile times are (like Rust) quite bad. There is some amount of fea­ture creep and the lan­guage is larger than it should be. Not all syn­tax feels fa­mil­iar. The pack­age ecosys­tem is­n’t nearly as rich as Rust.

But “Swift is only for Apple platforms” is an old and tired cliché at this point. Swift is already a cross-platform, ABI-stable language with no GC, automatic reference counting, and the option to opt into ownership for even more performance. Swift packages increasingly work on Linux. Foundation was ported to Swift and open-sourced. It’s still early days for Swift as a good, more convenient Rust alternative for cross-platform development, but it is here now. It’s no longer a future to wait for.

...

Read the original on nmn.sh »

5 261 shares, 16 trendiness

Scientist who helped eradicate smallpox dies at age 89

A leader in the global fight against smallpox and a champion of vaccine science, William Foege died last Saturday.

[Photo caption: The late physicians and health administrators William Foege (middle), J. Donald Millar (left) and J. Michael Lane (right), all of whom served in the Global Smallpox Eradication Program, in 1980.]

William Foege, a leader in the global fight to eliminate smallpox, has died. Foege passed away on Saturday at the age of 89, according to the Task Force for Global Health, a public health organization he co-founded.

Foege headed the U.S. Centers for Disease Control and Prevention’s Smallpox Eradication Program in the 1970s. Before the disease was officially eradicated in 1980, it killed around one in three people who were infected. According to the CDC, there have been no new smallpox cases since 1977.

“If you look at the simple metric of who has saved the most lives, he is right up there with the pantheon,” said former CDC director Tom Frieden to the Associated Press. “Smallpox eradication has prevented hundreds of millions of deaths.”

Foege went on to lead the CDC and served as a senior medical adviser and senior fellow at the Bill & Melinda Gates Foundation. In 2012 then-president Barack Obama awarded him the Presidential Medal of Freedom.

Foege was a vocal proponent of vaccines for public health, writing with epidemiologist Larry Brilliant in Scientific American in 2013 that the effort to eliminate polio “has never been closer” to success. “By working together,” they wrote, “we will soon relegate polio—alongside smallpox—to the history books.” Polio remains “a candidate for eradication,” according to the World Health Assembly.

And in 2025 Foege, alongside several other former CDC directors, spoke out against the policies of the current secretary of health and human services, Robert F. Kennedy, Jr. In a New York Times op-ed, they wrote that the top health official’s tenure was “unlike anything we had ever seen at the agency.”

In a statement, Task Force for Global Health CEO Patrick O’Carroll remembered Foege as an “inspirational” figure, both for early-career public health workers and veterans of the field. “Whenever he spoke, his vision and compassion would reawaken the optimism that prompted us to choose this field, and re-energize our efforts to make this world a better place,” O’Carroll said.

...

Read the original on www.scientificamerican.com »

6 218 shares, 14 trendiness

In Praise of --dry-run

For the last few months, I have been developing a new reporting application. Early on, I decided to add a --dry-run option to the run command. This turned out to be quite useful — I have used it many times a day while developing and testing the application.

The ap­pli­ca­tion will gen­er­ate a set of re­ports every week­day. It has a loop that checks pe­ri­od­i­cally if it is time to gen­er­ate new re­ports. If so, it will read data from a data­base, ap­ply some logic to cre­ate the re­ports, zip the re­ports, up­load them to an sftp server, check for er­ror re­sponses on the sftp server, parse the er­ror re­sponses, and send out no­ti­fi­ca­tion mails. The files (the gen­er­ated re­ports, and the down­loaded feed­back files) are moved to dif­fer­ent di­rec­to­ries de­pend­ing on the step in the process. A sim­ple and straight­for­ward ap­pli­ca­tion.

Early in the development process, when testing the incomplete application, I remembered that Subversion (the version control system after CVS, before Git) had a --dry-run option. Other Linux commands have this option too. If a command is run with the argument --dry-run, the output will print what will happen when the command is run, but no changes will be made. This lets the user see what will happen if the command is run without the --dry-run argument.

I remembered how helpful that was, so I decided to add it to my command as well. When I run the command with --dry-run, it prints out the steps that will be taken in each phase: which reports will be generated (and which will not be), which files will be zipped and moved, which files will be uploaded to the sftp server, and which files will be downloaded from it (it logs on and lists the files).

Looking back at the project, I realized that I ended up using the --dry-run option pretty much every day.

I am surprised how useful I found it to be. I often used it as a check before getting started. Since I know --dry-run will not change anything, it is safe to run without thinking. I can immediately see that everything is accessible, that the configuration is correct, and that the state is as expected. It is a quick and easy sanity check.

I also used it quite a bit when testing the complete system. For example, if I changed a date in the report state file (the date for the last successful report of a given type), I could immediately see from the output whether it would now be generated or not. Without --dry-run, the actual report would also be generated, which takes some time. So I can test the behavior, and receive very quick feedback.

The downside is that the dryRun flag pollutes the code a bit. In all the major phases, I need to check if the flag is set, and only print the action that will be taken without actually doing it. However, this doesn’t go very deep. For example, none of the code that actually generates the report needs to check it. I only need to check whether that code should be invoked in the first place.
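The shape of the check is simple; a minimal sketch (the function and report names are hypothetical, not from the author’s application):

```python
def generate_report(name: str) -> None:
    ...  # the expensive part: query the database, render, zip


def upload_reports() -> None:
    ...  # sftp upload


def run(dry_run: bool = False) -> list[str]:
    """Run the pipeline, or only describe it when dry_run is set."""
    actions = []
    for report in ["daily_sales", "inventory"]:
        actions.append(f"generate {report}")
        if not dry_run:
            generate_report(report)  # skipped entirely on dry runs
    actions.append("upload reports to sftp")
    if not dry_run:
        upload_reports()
    for action in actions:
        print(("DRY-RUN: " if dry_run else "") + action)
    return actions
```

Note that only the top-level phases check the flag; the report-generation code itself never sees it, which matches the observation that the pollution doesn’t go very deep.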

The type of application I have been writing is ideal for --dry-run. It is invoked by a command, and it may create some changes, for example generating new reports. More reactive applications (that wait for messages before acting) don’t seem to be a good fit.

I added --dry-run on a whim early on in the project. I was surprised at how useful I found it to be. Adding it early was also good, since I got the benefit of it while developing more functionality.

The --dry-run flag is not for every situation, but when it fits, it can be quite useful.

...

Read the original on henrikwarne.com »

7 216 shares, 18 trendiness

list animals until failure

You have lim­ited time, but get more time for each an­i­mal listed. When the timer runs out, that’s game over.

No over­lap­ping terms.

For example, if you list “bear” and “polar bear”, you get no point (or time bonus) for the latter. But you can still get a point for a second kind of bear. Order doesn’t matter.

...

Read the original on rose.systems »

8 209 shares, 8 trendiness

zpoint/CPython-Internals: Dive into CPython internals, trying to illustrate every detail of CPython implementation

* Watch this repo if you need to be notified when there’s an update

This repos­i­tory is my notes/​blog for cpython source code

Trying to il­lus­trate every de­tail of cpython im­ple­men­ta­tion

# based on version 3.8.0a0
cd cpython
git reset --hard ab54b9a130c88f708077c2ef6c4963b632c132b3

The following contents are suitable for those who have python programming experience and are interested in the internals of the python interpreter; for those who need beginner or advanced material, please refer to awesome-python-books

I will only rec­om­mend what I’ve read

All kinds of con­tri­bu­tions are wel­come

* submit a pull request

* if you want to share any knowledge you know

...

Read the original on github.com »

9 194 shares, 7 trendiness

Inside Nvidia's 10-year effort to make the Shield TV the most updated Android device ever


“Selfishly, a little bit, we built Shield for ourselves.”

The Shield TV has that clas­sic Nvidia aes­thetic.


It took Android de­vice­mak­ers a very long time to com­mit to long-term up­date sup­port. Samsung and Google have only re­cently de­cided to of­fer seven years of up­dates for their flag­ship Android de­vices, but a decade ago, you were lucky to get more than one or two up­dates on even the most ex­pen­sive Android phones and tablets. How is it, then, that an Android-powered set-top box from 2015 is still go­ing strong?

Nvidia re­leased the first Shield Android TV in 2015, and ac­cord­ing to the com­pa­ny’s se­nior VP of hard­ware en­gi­neer­ing, Andrew Bell, sup­port­ing these de­vices has been a la­bor of love. And the team at Nvidia still loves the Shield. Bell as­sures us that Nvidia has never given up, even when it looked like sup­port for the Shield was wan­ing, and it does­n’t plan to stop any time soon.

Gaming has been central to Nvidia since its start, and that focus gave rise to the Shield. “Pretty much everybody who worked at Nvidia in the early days really wanted to make a game console,” said Bell, who has worked at the company for 25 years.

However, Nvidia did­n’t have what it needed back then. Before gam­ing, crypto, and AI turned it into the multi-tril­lion-dol­lar pow­er­house it is to­day, Nvidia had a startup men­tal­ity and the bud­get to match. When Shield de­vices be­gan per­co­lat­ing in the com­pa­ny’s labs, it was seen as an im­por­tant way to gain ex­pe­ri­ence with full-stack” sys­tems and all the com­pli­ca­tions that arise when man­ag­ing them.

“To build a game console was pretty complicated because, of course, you have to have a GPU, which we know how to make,” Bell explained. “But in addition to that, you need a CPU, an OS, games, and you need a UI.”

Through ac­qui­si­tions and part­ner­ships, the pieces of Nvidia’s fa­bled game con­sole slowly fell into place. The pur­chase of PortalPlayer in 2007 brought the CPU tech­nol­ogy that would be­come the Tegra Arm chips, and the com­pa­ny’s surg­ing suc­cess in GPUs gave it the part­ner­ships it needed to get games. But the UI was still miss­ing—that did­n’t change un­til Google ex­panded Android to the TV in 2014. The com­pa­ny’s first Android mo­bile ef­forts were al­ready out there in the form of the Shield Portable and Shield Tablet, but the TV-connected box is what Nvidia re­ally wanted.

“Selfishly, a little bit, we built Shield for ourselves,” Bell told Ars Technica. “We actually wanted a really good TV streamer that was high-quality and high-performance, and not necessarily in the Apple ecosystem. We built some prototypes, and we got so excited about it. [CEO Jensen Huang] was like, ‘Why don’t we bring it out and sell it to people?’”

The first Shield box in 2015 had a heavy gam­ing fo­cus, with a raft of both lo­cal and cloud-based (GeForce Now) games. The base model in­cluded only a game con­troller, with the re­mote con­trol sold sep­a­rately. According to Bell, Nvidia even­tu­ally rec­og­nized that the gam­ing an­gle was­n’t as pop­u­lar as it had hoped. The 2017 and 2019 Shield re­freshes were more fo­cused on the stream­ing ex­pe­ri­ence.

“Eventually, we kind of said, ‘Maybe the soul is that it’s a streamer for gamers,’” said Bell. “We understand gamers from GeForce, and we understand they care about quality and performance. A lot of these third-party devices like tablets, they’re going cheap. Set-top boxes, they’re going cheap. But we were the only company that was like, ‘Let’s go after people who really want a premium experience.’”

And pre­mium it is, of­fer­ing au­dio and video sup­port far be­yond what you find in other TV boxes, even years af­ter re­lease. The Shield TV started at $200 in 2015, and that’s still what you’ll pay for the Pro model to this day. However, Bell notes that pas­sion was the dri­ving force be­hind bring­ing the Shield TV to mar­ket. The team did­n’t know if it would make money, and in­deed, the com­pany lost money on every unit sold dur­ing the orig­i­nal pro­duc­tion run. The 2017 and 2019 re­freshes were about ad­dress­ing that while also em­pha­siz­ing the Shield’s stream­ing me­dia chops.

Update sup­port for Internet-connected de­vices is vi­tal—whether they’re phones, tablets, set-top boxes, or some­thing else. When up­dates cease, gad­gets fall out of sync with plat­form fea­tures, lead­ing to new bugs (which will never be fixed) and se­cu­rity holes that can af­fect safety and func­tion­al­ity. The sup­port guar­an­tee at­tached to a de­vice is ba­si­cally its ex­pi­ra­tion date.

“We were all frustrated as buyers of phones and tablets that you buy a device, you get one or two updates, and that’s it!” said Bell. “Early on when we were building Shield TV, we decided we were going to make it for a long time. Jensen and I had a discussion, and it was, ‘How long do we want to support this thing?’ And Jensen said, ‘For as long as we shall live.’”

In 2025, Nvidia wrapped up its tenth year of sup­port­ing the Shield plat­form. Even those orig­i­nal 2015 boxes are still be­ing main­tained with bug fixes and the oc­ca­sional new fea­ture. They’ve gone all the way from Android 5.0 to Android 11 in that time. No Android de­vice—not a sin­gle phone, tablet, watch, or stream­ing box—has got­ten any­where close to this level of sup­port.

The best ex­am­ple of Nvidia’s pas­sion for sup­port is, be­lieve it or not, a two-year gap in up­dates.

Across the dozens of Shield TV up­dates, there have been a few times when fans feared Nvidia was done with the box. Most no­tably, there were no pub­lic up­dates for the Shield TV in 2023 or 2024, but over-the-air up­dates re­sumed in 2025.

“On the outside, it looked like we went quiet, but it’s actually one of our bigger development efforts,” explained Bell.

The ori­gins of that ef­fort, sur­pris­ingly, stretch back years to the launch of the Nintendo Switch. The Shield runs Nvidia’s cus­tom Tegra X1 Arm chip, the same proces­sor Nintendo chose to power the orig­i­nal Switch in 2017. Soon af­ter re­lease, mod­ders dis­cov­ered a chip flaw that could by­pass Nintendo’s se­cu­rity mea­sures, en­abling home­brew (and piracy). An up­dated Tegra X1 chip (also used in the 2019 Shield re­fresh) fixed that for Nintendo, but Nvidia’s 2015 and 2017 Shield boxes ran the same ex­ploitable ver­sion.

Initially, Nvidia was able to roll out pe­ri­odic patches to pro­tect against the vul­ner­a­bil­ity, but by 2023, the Shield needed some­thing more. Around that time, own­ers of 2015 and 2017 Shield boxes had no­ticed that DRM-protected 4K con­tent of­ten failed to play—that was thanks to the same bug that af­fected the Switch years ear­lier.

With a newer, non-vulnerable product on the market, many companies might have just accepted that the older product would lose functionality, but Nvidia’s passion for Shield remained. Bell consulted Huang, whom he calls Shield customer No. 1, about the meaning of his “as long as we shall live” pledge, and the team was approved to spend whatever time was needed to fix the vulnerability on the first two generations of Shield TV.

According to Bell, it took about 18 months to get there, re­quir­ing the cre­ation of an en­tirely new se­cu­rity stack. He ex­plains that Android up­dates aren’t ac­tu­ally that much work com­pared to DRM se­cu­rity, and some of its part­ners weren’t that keen on re-cer­ti­fy­ing older prod­ucts. The Shield team fought for it be­cause they felt, as they had through­out the pro­duc­t’s run, that they’d made a promise to cus­tomers who ex­pected the box to have cer­tain fea­tures.

In February 2025, Nvidia released Shield Patch 9.2, the first wide release in two years. The changelog included an unassuming line reading, “Added security enhancement for 4K DRM playback.” That was the Tegra X1 bug finally being laid to rest on the 2015 and 2017 Shield boxes.

The re­freshed Tegra X1+ in the 2019 Shield TV spared it from those DRM is­sues, and Nvidia still has­n’t stopped work­ing on that chip. The Tegra X1 was blaz­ing fast in 2015, and it’s still quite ca­pa­ble com­pared to your av­er­age smart TV to­day. The chip has ac­tu­ally out­lasted sev­eral of the com­po­nents needed to man­u­fac­ture it. For ex­am­ple, when the Tegra chip’s mem­ory was phased out, the team im­me­di­ately be­gan work on qual­i­fy­ing a new mem­ory sup­plier. To this day, Nvidia is still it­er­at­ing on the Tegra X1 plat­form, sup­port­ing the Shield’s con­tin­ued up­dates.

“If operations calls me and says they just ran out of this component, I’ve got engineers on it tonight looking for a new component,” Bell said.

Nvidia has put its money where its mouth is by sup­port­ing all ver­sions of the Shield for so long. But it’s been over six years since we’ve seen new hard­ware. Surely the Shield has to be run­ning out of steam, right?

Not so, says Bell. Nvidia still manufactures the 2019 Shield because people are still buying it. In fact, the sales volume has remained basically unchanged for the past 10 years. The Shield Pro is a spendy set-top box at $200, so Nvidia has experimented with pricing and promotion, with little effect. The 2019 non-Pro Shield was one such effort. The base model was originally priced at $99, but the MSRP eventually landed at $150.

“No matter how much we dropped the price or how much we market or don’t market it, the same number of people come out of the woodwork every week to buy Shield,” Bell explained.

Nvidia had no choice but to put that giant Netflix button on the remote.

That kind of consistency isn’t lost on Nvidia. Bell says the company has no plans to stop production or updates for the Shield “any time soon.” It’s also still possible that Nvidia could release new Shield TV hardware in the future. Nvidia’s Shield devices came about as a result of engineers tinkering with new concepts in a lab setting, but most of those experiments never see the light of day. For example, Bell notes that the team produced several updated versions of the Shield Tablet and Shield Portable (some of which you can find floating around on eBay) that never got a retail release, and they continue to work on Shield TV.

“We’re always playing in the labs, trying to discover new things,” said Bell. “We’ve played with new concepts for Shield and we’ll continue to play, and if we find something we’re super-excited about, we’ll probably make a go of it.”

But what would that look like? Video tech­nol­ogy has ad­vanced since 2019, leav­ing the Shield un­able to take full ad­van­tage of some newer for­mats. First up would be sup­port for VP9 Profile 2 hard­ware de­cod­ing, which en­ables HDR video on YouTube. Bell says a re­freshed Shield would also pri­or­i­tize for­mats like AV1 and the HDR 10+ stan­dard, as well as sup­port for newer Dolby Vision pro­files for peo­ple with backed-up me­dia.

And then there’s the enormous, easy-to-press-by-accident Netflix button on the remote. While adding new video technologies would be job one, fixing the Netflix button is No. 2 for a theoretical new Shield. According to Bell, Nvidia doesn’t receive any money from Netflix for the giant button on its remote. It’s actually there as a requirement of Netflix’s certification program, which was “very strong” in 2019. In a refresh, he thinks Nvidia could get away with a smaller “N” button. We can only hope.

But does Bell think he’ll get a chance to build that new Shield TV, shrunken Netflix but­ton and all? He stopped short of pre­dict­ing the fu­ture, but there’s def­i­nitely in­ter­est.

“We talk about it all the time—I’d love to,” he said.

Ryan Whitwam is a se­nior tech­nol­ogy re­porter at Ars Technica, cov­er­ing the ways Google, AI, and mo­bile tech­nol­ogy con­tinue to change the world. Over his 20-year ca­reer, he’s writ­ten for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has re­viewed more phones than most peo­ple will ever own. You can fol­low him on Bluesky, where you will see pho­tos of his dozens of me­chan­i­cal key­boards.


...

Read the original on arstechnica.com »

10 188 shares, 10 trendiness

Generative AI and Wikipedia editing

Like many organizations, Wiki Education has grappled with generative AI — its impacts, opportunities, and threats — for several years. As an organization that runs large-scale programs to bring new editors to Wikipedia (we’re responsible for about 19% of all new active editors on English Wikipedia), we have a deep understanding of the challenges facing new content contributors to Wikipedia — and how to support them to edit successfully. As many people have begun using generative AI chatbots like ChatGPT, Gemini, or Claude in their daily lives, it’s unsurprising that people will also consider using them to help draft contributions to Wikipedia. Since Wiki Education’s programs provide a cohort of content contributors whose work we can evaluate, we’ve looked into how our participants are using GenAI tools.

We are choos­ing to share our per­spec­tive through this blog post be­cause we hope it will help in­form dis­cus­sions of GenAI-created con­tent on Wikipedia. In an open en­vi­ron­ment like the Wikimedia move­ment, it’s im­por­tant to share what you’ve learned. In this case, we be­lieve our learn­ings can help Wikipedia ed­i­tors who are try­ing to pro­tect the in­tegrity of con­tent on the en­cy­clo­pe­dia, Wikipedians who may be in­ter­ested in us­ing gen­er­a­tive AI tools them­selves, other pro­gram lead­ers glob­ally who are try­ing to on­board new con­trib­u­tors who may be in­ter­ested in us­ing these tools, and the Wikimedia Foundation, whose prod­uct and tech­nol­ogy team builds soft­ware to help sup­port the de­vel­op­ment of high-qual­ity con­tent on Wikipedia.

Our fun­da­men­tal con­clu­sion about gen­er­a­tive AI is: Wikipedia ed­i­tors should never copy and paste the out­put from gen­er­a­tive AI chat­bots like ChatGPT into Wikipedia ar­ti­cles.

Let me ex­plain more.

Since the launch of ChatGPT in November 2022, we’ve been paying close attention to GenAI-created content and how it relates to Wikipedia. We’ve spot-checked the work of new editors from our programs, primarily focusing on citations to ensure they were real and not hallucinated. We experimented with tools ourselves, we led video sessions about GenAI for our program participants, and we closely tracked on-wiki policy discussions around GenAI. Currently, English Wikipedia prohibits the use of generative AI to create images or to write talk page comments, and it recently adopted a guideline against using large language models to generate new articles.

As our Wiki Experts Brianda Felix and Ian Ramjohn worked with program participants throughout the first half of 2025, they found more and more text bearing the hallmarks of generative AI in article content, like bolded words or bulleted lists in odd places. The use of generative AI wasn’t necessarily problematic in itself, as long as the content was accurate; Wikipedia’s open editing process encourages stylistic revisions to factual text to better fit Wikipedia’s style. But when we attempted to verify the flagged text, much of it failed: facts often weren’t supported by the sources cited, and some citations were hallucinated outright.

This finding led us to invest significant staff time into cleaning up these articles — far more than these editors had likely spent creating them. Wiki Education’s core mission is to improve Wikipedia, and when we discover our program has unknowingly contributed to misinformation on Wikipedia, we are committed to cleaning it up. In the clean-up process, Wiki Education staff moved more recent work back to sandboxes, we stub-ified articles that passed notability but mostly failed verification, and we PRODed (proposed for deletion) some articles that, in our judgment, weren’t salvageable. All these are ways of addressing Wikipedia articles with flaws in their content. (While there are many grumblings about Wikipedia’s deletion processes, we found several of the articles we PRODed due to their fully hallucinated GenAI content were then de-PRODed by other editors, showing the diversity of opinion about generative AI among the Wikipedia community.)

Given what we found through our investigation into the work from prior terms, and given the increasing usage of generative AI, we wanted to proactively address generative AI usage within our programs. Thanks to in-kind support from our friends at Pangram, we began running our participants’ Wikipedia edits, including in their sandboxes, through Pangram nearly in real time. This is possible because of the Dashboard course management platform Sage Ross built, which tracks edits and generates tickets for our Wiki Experts based on on-wiki edits.
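The detection-and-ticketing loop described above can be sketched roughly like this. It is a hypothetical illustration only: the threshold, the ticket fields, and the dict-based stand-in classifier are assumptions for demonstration, not the Dashboard's or Pangram's actual internals.

```python
from dataclasses import dataclass

# Assumed cutoff for "flag this edit" -- illustrative, not Pangram's real scoring.
AI_THRESHOLD = 0.8

@dataclass
class Edit:
    user: str
    namespace: int  # 0 = live articles ("mainspace"); sandboxes live elsewhere
    text: str

def triage(edits, classify):
    """Score each edit with a detector callback and open a ticket for
    anything at or above the threshold, noting whether it hit mainspace."""
    tickets = []
    for e in edits:
        if classify(e.text) >= AI_THRESHOLD:
            tickets.append({"user": e.user, "mainspace": e.namespace == 0})
    return tickets

# Stand-in classifier for demonstration; a real deployment would call the
# detection service here instead of looking scores up in a dict.
fake_scores = {"looks human": 0.1, "looks generated": 0.95}
edits = [
    Edit("alice", 2, "looks human"),
    Edit("bob", 0, "looks generated"),
]
print(triage(edits, fake_scores.get))  # -> [{'user': 'bob', 'mainspace': True}]
```

Splitting tickets by namespace, as sketched here, is what makes it possible to report separately on sandbox alerts (which triggered coaching emails) and mainspace alerts (which needed review of live articles).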

We created a brand-new training module on “Using generative AI tools with Wikipedia.” This training emphasizes where participants could use generative AI tools in their work, and where they should not. The core message of these trainings is: do not copy and paste anything from a GenAI chatbot into Wikipedia.

We crafted a va­ri­ety of au­to­mated emails to par­tic­i­pants who Pangram de­tected were adding text cre­ated by gen­er­a­tive AI chat­bots. Sage also recorded some videos, since many young peo­ple are ac­cus­tomed to learn­ing via video rather than read­ing text. We also pro­vided op­por­tu­ni­ties for en­gage­ment and con­ver­sa­tion with pro­gram par­tic­i­pants.

In to­tal, we had 1,406 AI edit alerts in the sec­ond half of 2025, al­though only 314 of these (or 22%) were in the ar­ti­cle name­space on Wikipedia (meaning ed­its to live ar­ti­cles). In most cases, Pangram de­tected par­tic­i­pants us­ing GenAI in their sand­boxes dur­ing early ex­er­cises, when we ask them to do things like choose an ar­ti­cle, eval­u­ate an ar­ti­cle, cre­ate a bib­li­og­ra­phy, and out­line their con­tri­bu­tion.

Pangram strug­gled with false pos­i­tives in a few sand­box sce­nar­ios:

* Bibliographies, which are of­ten a com­bi­na­tion of hu­man-writ­ten prose (describing a source and its rel­e­vance) and non-prose text (the ci­ta­tion for a source, in some stan­dard for­mat)

* Outlines with a high por­tion of non-prose con­tent (such as bul­let lists, sec­tion head­ers, text frag­ments, and so on)

We also had a hand­ful of cases where sand­boxes were flagged for AI af­ter a par­tic­i­pant copied an AI-written sec­tion from an ex­ist­ing ar­ti­cle to use as a start­ing point to edit or to ex­pand. (This is­n’t a flaw of Pangram, but a re­minder of how much AI-generated con­tent ed­i­tors out­side our pro­grams are adding to Wikipedia!)

In broad strokes, we found that Pangram is great at analyzing plain prose — the kind of sentences and paragraphs you’ll find in the body of a Wikipedia article — but it sometimes gets tripped up by formatting, markup, and non-prose text. Early on, we disabled alert emails for participants’ bibliography and outline exercises, and through the end of 2025, we refined the Dashboard’s preprocessing steps to extract the prose portions of revisions and convert them to plain text before sending them to Pangram.
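As a rough sketch of that preprocessing step (a simplified, regex-based stand-in, not the Dashboard's actual code), one might strip templates, references, and link markup, then drop non-prose lines before handing the text to a detector:

```python
import re

def extract_prose(wikitext: str) -> str:
    """Reduce a revision to plain prose before AI detection: remove templates,
    references, and link/bold markup, then drop non-prose lines (headers,
    bullets, table rows). A simplified stand-in, not the real pipeline."""
    text = re.sub(r"\{\{.*?\}\}", "", wikitext, flags=re.S)                   # {{templates}}
    text = re.sub(r"<ref[^>]*>.*?</ref>|<ref[^/>]*/>", "", text, flags=re.S)  # <ref> citations
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)             # [[link|label]] -> label
    text = re.sub(r"'{2,}", "", text)                                         # ''italic''/'''bold'''
    kept = []
    for line in text.splitlines():
        line = line.strip()
        # Skip headers, list items, and table markup: the non-prose content
        # that caused false positives on bibliographies and outlines.
        if not line or line.startswith(("=", "*", "#", "|", "{", "}")):
            continue
        kept.append(line)
    return " ".join(kept)

sample = ("== History ==\n"
          "The '''village''' was founded in 1850.<ref>Smith 2001</ref>\n"
          "* a bullet\n"
          "It sits on the [[River Thames|Thames]].")
print(extract_prose(sample))  # -> The village was founded in 1850. It sits on the Thames.
```

A dedicated wikitext parser such as mwparserfromhell would handle markup more robustly than these regexes; the point is only that detectors score cleaner prose more reliably than raw markup.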

Many participants also reported “just using Grammarly to copy edit.” In our experience, however, the smallest fixes done with Grammarly never trigger Pangram’s detection, but if you use its more advanced content creation features, the resulting text registers as being AI generated.

But over­whelm­ingly, we were pleased with Pangram’s re­sults. Our early in­ter­ven­tions with par­tic­i­pants who were flagged as us­ing gen­er­a­tive AI for ex­er­cises that would not en­ter main­space seemed to head off their fu­ture use of gen­er­a­tive AI. We sup­ported 6,357 new ed­i­tors in fall 2025, and only 217 of them (or 3%) had mul­ti­ple AI alerts. Only 5% of the par­tic­i­pants we sup­ported had main­space AI alerts. That means thou­sands of par­tic­i­pants suc­cess­fully edited Wikipedia with­out us­ing gen­er­a­tive AI to draft their con­tent.

For those who did add GenAI-drafted text, we en­sured that the con­tent was re­verted. In fact, par­tic­i­pants some­times self-re­verted once they re­ceived our email let­ting them know Pangram had de­tected their con­tri­bu­tions as be­ing AI cre­ated. Instructors also jumped in to re­vert, as did some Wikipedians who found the con­tent on their own. Our tick­et­ing sys­tem also alerted our Wiki Expert staff, who re­verted the text as soon as they could.

While some in­struc­tors in our Wikipedia Student Program had con­cerns about AI de­tec­tion, we had a lot of suc­cess fo­cus­ing the con­ver­sa­tion on the con­cept of ver­i­fi­a­bil­ity. If the in­struc­tor as sub­ject mat­ter ex­pert could at­test the in­for­ma­tion was ac­cu­rate, and they could find the spe­cific facts in the sources they were cited to, we per­mit­ted text to come back to Wikipedia. However, the process of at­tempt­ing to ver­ify stu­dent-cre­ated work (which in many cases the stu­dents swore they’d writ­ten them­selves) led many in­struc­tors to re­al­ize what we had found in our own as­sess­ment: In their cur­rent states, GenAI-powered chat­bots can­not write fac­tu­ally ac­cu­rate text for Wikipedia that is ver­i­fi­able.

We believe our Pangram-based detection interventions led to fewer participants adding GenAI-created content to Wikipedia. Following the trend lines, we had anticipated that about 25% of participants would add GenAI content to Wikipedia articles; instead, it was only 5%, and our staff were able to revert all problematic content.

I’m deeply appreciative of everyone who made this success possible this term: participants who followed our recommendations, Pangram, who gave us access to their detection service, Wiki Education staff who did the heavy lifting of working with all of the positive detections, and the Wikipedia community, some of whom got to the problematic work from our program participants before we did.

So far, I’ve focused on the problems with generative AI-created content. But that’s not all these tools can do, and we did find some ways they were useful. Our training module encourages editors — if their institution’s policies permit it — to consider using generative AI tools for certain supporting tasks in the research and planning stages.

To evaluate the success of these use scenarios, we worked directly with 7 of the classes we supported in fall 2025 in our Wikipedia Student Program. We asked students to anonymously fill out a survey every time they used generative AI tools in their Wikipedia work. We asked what tool they used, what prompt they used, how they used the output, and whether they found it helpful. While some students filled the survey out multiple times, others filled it out once. We had 102 responses reporting usage at various stages in the project. Overwhelmingly, 87% of the responses that reported using generative AI said it was helpful for the task. The most popular tool by far was ChatGPT, with Grammarly a distant second, and the others in the single digits of usage. Students reported that the tools were helpful for:

* Identifying ar­ti­cles to work on that were rel­e­vant to the course they were tak­ing

* Highlighting gaps within ex­ist­ing ar­ti­cles, in­clud­ing miss­ing sec­tions or more re­cent in­for­ma­tion that was miss­ing

* Finding re­li­able sources that they had­n’t al­ready lo­cated

* Pointing to the database in which a certain journal article could be found

* When prompted with the text they had drafted and the check­list of re­quire­ments, eval­u­at­ing the draft against those re­quire­ments

* Identifying cat­e­gories they could add to the ar­ti­cle they’d edited

Critically, no participants reported using AI tools to draft text for their assignments. One student reported: “I pasted all of my writing from my sandbox and said ‘Put this in a casual, less academic tone’ … I figured I’d try this but it didn’t sound like what I normally write and I didn’t feel that it captured what I was trying to get across so I scrapped it.”

While this was an in­for­mal re­search pro­ject, we re­ceived enough pos­i­tive feed­back from it to be­lieve us­ing ChatGPT and other tools can be help­ful in the re­search stage if ed­i­tors then crit­i­cally eval­u­ate the out­put they get, in­stead of blindly ac­cept­ing it. Even par­tic­i­pants who found AI help­ful re­ported that they did­n’t use every­thing it gave them, as some was ir­rel­e­vant. Undoubtedly, it’s cru­cial to main­tain the hu­man think­ing com­po­nent through­out the process.

My con­clu­sion is that, at least as of now, gen­er­a­tive AI-powered chat­bots like ChatGPT should never be used to gen­er­ate text for Wikipedia; too much of it will sim­ply be un­ver­i­fi­able. Our staff would spend far more time at­tempt­ing to ver­ify facts in AI-generated ar­ti­cles than if we’d sim­ply done the re­search and writ­ing our­selves.

That be­ing said, AI tools can be help­ful in the re­search process, es­pe­cially to help iden­tify con­tent gaps or sources, when used in con­junc­tion with a hu­man brain that care­fully eval­u­ates the in­for­ma­tion. Editors should never sim­ply take a chat­bot’s sug­ges­tion; in­stead, if they want to use a chat­bot, they should use it as a brain­storm part­ner to help them think through their plans for an ar­ti­cle.

To date, Wiki Education’s in­ter­ven­tions as our pro­gram par­tic­i­pants edit Wikipedia show promise for keep­ing un­ver­i­fi­able, GenAI-drafted con­tent off Wikipedia. Based on our ex­pe­ri­ences in the fall term, we have high con­fi­dence in Pangram as a de­tec­tor of AI con­tent, at least in Wikipedia ar­ti­cles. We will con­tinue our cur­rent strat­egy in 2026 (with more small ad­just­ments to make the sys­tem as re­li­able as we can).

More gen­er­ally, we found par­tic­i­pants had less AI lit­er­acy than pop­u­lar dis­course might sug­gest. Because of this, we cre­ated a sup­ple­men­tal large lan­guage mod­els train­ing that we’ve of­fered as an op­tional mod­ule for all par­tic­i­pants. Many par­tic­i­pants in­di­cated that they found our guid­ance re­gard­ing AI to be wel­come and help­ful as they at­tempt to nav­i­gate the new com­plex­i­ties cre­ated by AI tools.

We are also look­ing for­ward to more re­search on our work. A team of re­searchers — Francesco Salvi and Manoel Horta Ribeiro at Princeton University, Robert Cummings at the University of Mississippi, and Wiki Education’s Sage Ross — have been look­ing into Wiki Education’s Wikipedia Student Program ed­i­tors’ use of gen­er­a­tive AI over time. Preliminary re­sults have backed up our anec­do­tal un­der­stand­ing, while also re­veal­ing nu­ances of how text pro­duced by our stu­dents over time has changed with the in­tro­duc­tion of GenAI chat­bots. They also con­firmed our be­lief in Pangram: After run­ning stu­dent ed­its from 2015 up un­til the launch of ChatGPT through Pangram, with­out any date in­for­ma­tion in­volved, the team found Pangram cor­rectly iden­ti­fied that it was all 100% hu­man writ­ten. This re­search will con­tinue into the spring, as the team ex­plores ways of un­pack­ing the ef­fects of AI on dif­fer­ent as­pects of ar­ti­cle qual­ity.

And, of course, gen­er­a­tive AI is a rapidly chang­ing field. Just be­cause these were our find­ings in 2025 does­n’t mean they will hold true through­out 2026. Wiki Education re­mains com­mit­ted to mon­i­tor­ing, eval­u­at­ing, it­er­at­ing, and adapt­ing as needed. Fundamentally, we are com­mit­ted to en­sur­ing we add high qual­ity con­tent to Wikipedia through our pro­grams. And when we miss the mark, we are com­mit­ted to clean­ing up any dam­age.

While I’ve fo­cused this post on what Wiki Education has learned from work­ing with our pro­gram par­tic­i­pants, the lessons are ex­tend­able to oth­ers who are edit­ing Wikipedia. Already, 10% of adults world­wide are us­ing ChatGPT, and draft­ing text is one of the top use cases. As gen­er­a­tive AI us­age pro­lif­er­ates, its us­age by well-mean­ing peo­ple to draft con­tent for Wikipedia will as well. It’s un­likely that long­time, daily Wikipedia ed­i­tors would add con­tent copied and pasted from a GenAI chat­bot with­out ver­i­fy­ing all the in­for­ma­tion is in the sources it cites. But many ca­sual Wikipedia con­trib­u­tors or new ed­i­tors may un­know­ingly add bad con­tent to Wikipedia when us­ing a chat­bot. After all, it pro­vides what looks like ac­cu­rate facts, cited to what are of­ten real, rel­e­vant, re­li­able sources. Most ed­its we ended up re­vert­ing seemed ac­cept­able with a cur­sory re­view; it was only af­ter we at­tempted to ver­ify the in­for­ma­tion that we un­der­stood the prob­lems.

Because this un­ver­i­fi­able con­tent of­ten seems okay at first pass, it’s crit­i­cal for Wikipedia ed­i­tors to be equipped with tools like Pangram to more ac­cu­rately de­tect when they should take a closer look at ed­its. Automating re­view of text for gen­er­a­tive AI us­age — as Wikipedians have done for copy­right vi­o­la­tion text for years — would help pro­tect the in­tegrity of Wikipedia con­tent. In Wiki Education’s ex­pe­ri­ence, Pangram is a tool that could pro­vide ac­cu­rate as­sess­ments of text for ed­i­tors, and we would love to see a larger scale ver­sion of the tool we built to eval­u­ate ed­its from our pro­grams to be de­ployed across all ed­its on Wikipedia. Currently, ed­i­tors can add a warn­ing ban­ner that high­lights that the text might be LLM gen­er­ated, but this is based solely on the as­sess­ment of the per­son adding the ban­ner. Our ex­pe­ri­ence sug­gests that judg­ing by tone alone is­n’t enough; in­stead, tools like Pangram can flag highly prob­lem­atic in­for­ma­tion that should be re­verted im­me­di­ately but that might sound okay.

We’ve also found suc­cess in the train­ing mod­ules and sup­port we’ve cre­ated for our pro­gram par­tic­i­pants. Providing clear guid­ance — and the rea­son why that guid­ance ex­ists — has been key in help­ing us head off poor us­age of gen­er­a­tive AI text. We en­cour­age Wikipedians to con­sider re­vis­ing guid­ance to new con­trib­u­tors in the wel­come mes­sages to em­pha­size the pit­falls of adding GenAI-drafted text. Software aimed at new con­trib­u­tors cre­ated by the Wikimedia Foundation should cen­ter start­ing with a list of sources and draw­ing in­for­ma­tion from them, us­ing hu­man in­tel­lect, in­stead of gen­er­a­tive AI, to sum­ma­rize in­for­ma­tion. Providing guid­ance up­front can help well-mean­ing con­trib­u­tors steer clear of bad GenAI-created text.

Wikipedia re­cently cel­e­brated its 25th birth­day. For it to sur­vive into the fu­ture, it will need to adapt as tech­nol­ogy around it changes. Wikipedia would be noth­ing with­out its corps of vol­un­teer ed­i­tors. The con­sen­sus-based de­ci­sion-mak­ing model of Wikipedia means change does­n’t come quickly, but we hope this deep-dive will help spark a con­ver­sa­tion about changes that are needed to pro­tect Wikipedia into the fu­ture.

...

Read the original on wikiedu.org »

Visit pancik.com for more.