10 interesting stories served every morning and every evening.
In iOS 26.3, Apple introduced a new privacy feature that limits “precise location” data made available to cellular networks via cell towers. The feature is only available to devices with Apple’s in-house modem, introduced in 2025. The announcement says:
Cellular networks can determine your location based on which cell towers your device connects to.
This is well-known. I have served on a jury where the prosecution obtained location data from cell towers. Since cell towers are sparse (especially before 5G), the accuracy is in the range of tens to hundreds of metres.
But this is not the whole truth, because cellular standards have built-in protocols that make your device silently send GNSS (i.e. GPS, GLONASS, Galileo, BeiDou) location to the carrier. This would have the same precision as what you see in your maps app: single-digit metres.
In 2G and 3G this is called the Radio Resource LCS Protocol (RRLP)
So the network simply asks “tell me your GPS coordinates if you know them” and the phone will respond.
In 4G and 5G this is called LTE Positioning Protocol (LPP)
RRLP, RRC (its 3G counterpart), and LPP are native control-plane positioning protocols. This means that they are transported in the inner workings of cellular networks and are practically invisible to end users.
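As a rough sketch of the shape of that exchange (a toy model only: real RRLP/LPP messages are ASN.1-encoded and far richer, and every name below is invented for illustration):

```rust
// Toy model of a control-plane positioning exchange.
// All names are invented; this is not a real RRLP/LPP encoding.

enum NetworkMessage {
    // "Tell me your GNSS coordinates if you know them."
    RequestLocationInformation,
}

#[derive(Debug, PartialEq)]
enum DeviceMessage {
    // A precise GNSS fix, as your maps app would see it.
    ProvideLocationInformation { lat_microdeg: i64, lon_microdeg: i64 },
    // The device may also report that no fix is available.
    LocationUnavailable,
}

// The baseband answers autonomously: no app, no permission prompt.
fn baseband_handle(msg: NetworkMessage, gnss_fix: Option<(i64, i64)>) -> DeviceMessage {
    match msg {
        NetworkMessage::RequestLocationInformation => match gnss_fix {
            Some((lat, lon)) => DeviceMessage::ProvideLocationInformation {
                lat_microdeg: lat,
                lon_microdeg: lon,
            },
            None => DeviceMessage::LocationUnavailable,
        },
    }
}
```

The point of the sketch is only that the request and the answer live entirely below the operating system, which is why the exchange is invisible to the user.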
It’s worth noting that GNSS location is never meant to leave your device. GNSS coordinates are calculated entirely passively; your device doesn’t need to send a single bit of information. Using GNSS is like finding out where you are by reading a road sign: you don’t have to tell anyone else you read a road sign, anyone can read a road sign, and the people who put up road signs don’t know who read which road sign when.
These capabilities are not secrets but somehow they have mostly slid under the radar of the public consciousness. They have been used in the wild for a long time, such as by the DEA in the US in 2006:
[T]he DEA agents procured a court order (but not a search warrant) to obtain GPS coordinates from the courier’s phone via a ping, or signal requesting those coordinates, sent by the phone company to the phone.
And by Shin Bet in Israel, which tracks everyone everywhere all the time:
The GSS Tool was based on centralized cellular tracking operated by Israel’s General Security Services (GSS). The technology was based on a framework that tracks all the cellular phones running in Israel through the cellular companies’ data centers. According to news sources, it routinely collects information from cellular companies and identifies the location of all phones through cellular antenna triangulation and GPS data.
Notably, the Israeli government started using the data for contact tracing in March 2020, only a few weeks after the first Israeli COVID-19 case. An individual would be sent an SMS message informing them of close contact with a COVID patient and required to quarantine. This is good evidence that the location data Israeli carriers are collecting are far more precise than what cell towers alone can achieve.
A major caveat is that I don’t know if RRLP and LPP are the exact techniques, and the only techniques, used by DEA, Shin Bet, and possibly others to collect GNSS data; there could be other protocols or backdoors we’re not privy to.
Another unknown is whether these protocols can be exploited remotely by a foreign carrier. Saudi Arabia has abused SS7 to spy on people in the US, but as far as I know this only locates a device to the coverage area of a Mobile Switching Center, which is less precise than cell tower data. Nonetheless, given the abysmal culture, competency, and integrity in the telecom industry, I would not be shocked if it’s possible for a state actor to obtain the precise GNSS coordinates of anyone on earth using a phone number/IMEI.
Apple took a good step in iOS 26.3 to limit at least one vector of mass surveillance, enabled by having full control of the modem silicon and firmware. They should now allow users to disable GNSS location responses to mobile carriers, and notify the user when such requests are made to their device.
...
Read the original on an.dywa.ng »
Children under the age of 15 might be deleting their apps if the government’s plans are passed into law.
Prime Minister Petteri Orpo (NCP), the Finnish public health authority THL and two-thirds of Finns are in favour of banning or restricting the use of social media by under-15s.
Lunch break at the Finnish International School of Tampere (FISTA) is a boisterous time.
The yard is filled with children — ranging from grades 1 to 9, or ages 6 to 16 — running around, shouting, playing football, shooting basketball hoops, doing what kids do.
And there’s not a single screen in sight.
FISTA has taken advantage of the law change, brought in last August, which allows schools to restrict or completely ban the use of mobile phones during school hours. At FISTA, this means no phones at all unless specifically used for learning in the classroom.
“We’ve seen that cutting down on the possibilities for students to use their phones, during the breaks for instance, has spurred a lot of creativity,” FISTA vice principal Antti Koivisto notes.
“They’re more active, doing more physical things like playing games outdoors or taking part in the organised break activities or just socialising with each other.”
With the smartphone restriction in schools widely considered to have been a success, Finland’s government has now set its sights on social media platforms.
Prime Minister Petteri Orpo (NCP) said earlier this month that he supports banning the use of social media by children under the age of 15.
“I am deeply concerned about the lack of physical activity among children and young people, and the fact that it is increasing,” Orpo said at the time.
And there is a growing groundswell of support for Finland introducing such a ban. Two-thirds of respondents to a survey published earlier this week said they back a ban on social media for under-15s. This is a near 10 percentage point jump compared to a similar survey carried out just last summer.
The concerns over social media, and in particular the effects on children, have been well-documented — but Finnish researcher Silja Kosola’s recent description of the phenomenon as an “uncontrolled human experiment” has grabbed people’s attention once again.
Kosola, an associate professor in adolescent medicine, has researched the impact of social media on young people, and tells Yle News that the consequences are not very well understood.
“We see a rise in self-harm and especially eating disorders. We see a big separation in the values of young girls and boys, which is also a big problem in society,” Kosola explains.
In the video below, Silja Kosola explains the detrimental effects that excessive use of social media can have on young people.
She further notes that certain aspects of Finnish culture — such as the independence and freedom granted to children from a young age — have unwittingly exacerbated the ill effects of social media use.
“We have given smartphones to younger people more than anywhere else in the world. Just a couple of years ago, about 95 percent of first graders had their own smartphone, and that hasn’t happened anywhere else,” she says.
Since 10 December last year, children under the age of 16 in Australia have been banned from using social media platforms such as TikTok, Snapchat, Facebook, Instagram and YouTube.
Prime Minister Anthony Albanese began drafting the legislation after he received a heartfelt letter from a grieving mother who lost her 12-year-old daughter to suicide.
Although Albanese has never revealed the details of the letter, he told public broadcaster ABC that it was “obvious social media had played a key role” in the young girl’s death.
The legislation aims to shift the burden away from parents and children and onto the social media companies, who face fines of up to 49.5 million Australian dollars (29 million euros) if they consistently fail to keep kids off their platforms.
Clare Armstrong, ABC’s chief digital political correspondent, told Yle News that the initial reaction to the roll-out has been some confusion but also no small amount of “relief”.
“The government often talks about this law as being a tool to help parents and other institutions enforce and start conversations about tech and social media in ways that before, they couldn’t,” she says.
Although it is still early days, as the ban has only been in force for about six weeks, Armstrong adds that the early indicators have been good.
ABC journalist Clare Armstrong explains in the video below how children in Australia have been spending their time since the social media ban was introduced.
However, she adds a note of caution to any countries — such as Finland — looking to emulate the Australian model, noting that communication is key.
“Because you can write a very good law, but if the public doesn’t understand it, and if it can’t be enforced at that household level easily, then it’s bound to fail,” Armstrong says.
Seona Candy, an Australian living in Helsinki for over eight years, has been keenly following the events in her homeland since the social media ban came into effect in December.
She has heard anecdotally that if kids find themselves blocked from one platform, they just set up an account on another, “ones that maybe their parents don’t even know exist”.
“And this is then much, much harder, because those platforms don’t have parental controls, so they don’t have those things already designed into them that the more mainstream platforms do,” Candy says.
Because of this issue, and others she has heard about, she warns against Finland introducing like-for-like legislation based around Australia’s “reactive, knee-jerk” law change.
“I think the Finnish government should really invest in digital education, and digital literacy, and teach kids about digital safety. Finland is world-famous for education, and for media literacy. Play to your strengths, right?”
The All Points North podcast asked if Finland should introduce a similar ban on social media as in Australia. You can listen to the episode via this embedded player, on Yle Areena, via Apple, Spotify or wherever you get your podcasts.
...
Rust is one of the most loved languages out there, is fast, and has an amazing community. Rust popularized the concept of ownership as a solution to memory management issues without resorting to something slower like garbage collection or reference counting. But when you don’t need to be quite as low-level, it gives you utilities such as Rc, Arc and Cow to do reference counting and “clone-on-write” in your code. And when you need to go lower-level still, you can use the unsafe system and access raw C pointers.
Rust also has a bunch of awesome features from functional languages like tagged enums, match expressions, first class functions and a powerful type system with generics.
Rust has an LLVM-based compiler which lets it compile to native code and WASM.
I’ve also been doing a bit of Swift programming for a couple of years now. And the more I learn Rust, the more I see a reflection of Swift. (I know that Swift stole a lot of ideas from Rust; I’m talking about my own perspective here.)
Swift, too, has awesome features from functional languages like tagged enums, match expressions and first-class functions. It too has a very powerful type system with generics.
Swift too gives you complete type-safety without a garbage collector. By default, everything is a value type with “copy-on-write” semantics. But when you need extra speed you can opt into an ownership system and “move” values to avoid copying. And if you need to go even lower level, you can use the unsafe system and access raw C pointers.
Swift has an LLVM-based compiler which lets it compile to native code and WASM.
You’re probably feeling like you just read the same paragraphs twice. This is no accident. Swift is extremely similar to Rust and has most of the same feature set. But there is a very big difference in perspective. If you consider the default memory model, this will start to make a lot of sense.
Rust is a low-level systems language at heart, but it gives you the tools to go higher level. Swift starts at a high level and gives you the ability to go low-level.
The most obvious example of this is the memory management model. Swift uses value types by default with copy-on-write semantics. This is the equivalent of using Cow<> for all your values in Rust. But defaults matter. Rust makes it easy to use “moved” and “borrowed” values but requires extra ceremony to use Cow<> values, as you need to “unwrap” them with .to_mut() to actually mutate the value within. Swift makes these copy-on-write values easy to use and instead requires extra ceremony to use borrowing and moving. Rust is faster by default; Swift is simpler and easier by default.
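To make that ceremony concrete, here is a small illustrative use of Cow in Rust (my own example, not from the original post): the value stays borrowed until a mutation forces a clone, and to_mut() is the “unwrap” step.

```rust
use std::borrow::Cow;

// Uppercases a string, but only allocates when a change is needed:
// if the input is already uppercase, the Cow stays borrowed.
fn shout(input: &str) -> Cow<'_, str> {
    let mut out = Cow::Borrowed(input);
    if input.chars().any(|c| c.is_ascii_lowercase()) {
        // to_mut() clones the borrowed data on first mutation:
        // this is the extra ceremony "clone on write" requires.
        out.to_mut().make_ascii_uppercase();
    }
    out
}
```

Swift performs the equivalent of this borrowed-until-mutated dance implicitly for every value type, which is exactly the trade-off of defaults the post describes.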
Swift’s syntax is a masterclass in taking awesome functional language concepts and hiding them in C-like syntax to trick the developers into accepting them.
Consider match statements. In Rust you write a match expression; in Swift, the same logic is written as a switch.
Swift doesn’t have a match statement or expression. It has a switch statement that developers are already familiar with. Except this switch statement is actually not a switch statement at all. It’s an expression. It doesn’t “fallthrough”. It does pattern matching. It’s just a match expression with a different name and syntax.
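As an illustrative sketch of what such a Rust match expression looks like (my own example, not the post’s original snippet):

```rust
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// `match` is an expression: each arm yields a value, there is no
// fallthrough, and the compiler checks that every case is covered.
fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}
```

The Swift version is the same logic spelled switch/case, with each case binding the associated values of the enum.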
In fact, Swift treats enums as more than just types and lets you put methods directly on them.
Rust doesn’t have null, but it does have None. Swift has nil, but it’s really just a None in hiding. Instead of an Option, Swift lets you use T?, but the compiler still forces you to check that the value is not nil before you can use it.
You get the same safety with more convenience since you can do this in Swift with an optional type:
let val: T?
if let val {
    // val is now of type `T`.
}
Also, you’re not forced to wrap every value with a Some(val) before returning it. The Swift compiler takes care of that for you. A T will transparently be converted into a T? when needed.
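For comparison, a small illustrative sketch of the explicit Rust side (my own example):

```rust
// In Rust the wrapping is explicit: Option<T> in the signature,
// Some(...) around every success value, None instead of nil.
fn half(n: i32) -> Option<i32> {
    if n % 2 == 0 { Some(n / 2) } else { None }
}

// Callers must unwrap before use, much like Swift's `if let`:
fn describe(n: i32) -> String {
    if let Some(h) = half(n) {
        format!("half is {h}")
    } else {
        String::from("odd")
    }
}
```

Swift collapses both the Some() wrapping and the Option spelling into T?, which is the convenience being described.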
Rust doesn’t have try-catch. Instead it has a Result type which contains the success and error types.
Swift doesn’t have a try-catch either, but it does have do-catch and you have to use try before calling a function that could throw. Again, this is just deception for those developers coming from C-like languages. Swift’s error handling works exactly like Rust’s behind the scenes, but it is hidden in a clever, familiar syntax.
func usesErrorThrowingFunction() throws {
    let x = try thisFnCanThrow()
}

func handlesErrors() {
    do {
        let x = try thisFnCanThrow()
    } catch let err {
        // handle the `err` here.
    }
}
This is very similar to how Rust lets you use ? at the end of expressions to automatically forward errors, but in Swift you don’t have to wrap your success values in Ok().
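A small illustrative sketch of that Rust flow (my own example):

```rust
// Errors travel in a Result, and `?` forwards them to the caller
// the way Swift's `try` does -- but success values must be wrapped
// in Ok() by hand.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    let n: u16 = s.trim().parse()?; // returns the Err early on failure
    Ok(n)
}
```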
There are many common problems that Rust’s compiler will catch at compile time and even suggest solutions for you. The example that portrays this well is self-referencing enums.
Consider an enum that represents a tree. Since it is a recursive type, Rust will force you to use something like Box<> for referencing a type within itself.
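A minimal illustration of such a tree enum (my own example, not the post’s original snippet):

```rust
// Without Box, Tree would have infinite size; the compiler rejects
// the definition and suggests adding the indirection itself.
enum Tree {
    Leaf(i32),
    Node(Box<Tree>, Box<Tree>),
}

fn sum(t: &Tree) -> i32 {
    match t {
        Tree::Leaf(v) => *v,
        Tree::Node(l, r) => sum(l) + sum(r),
    }
}
```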
This makes the problem explicit and forces you to deal with it directly. Swift is a little more automatic.
Note that you still have to annotate this enum with the indirect keyword to indicate that it is recursive. But once you’ve done that, Swift’s compiler takes care of the rest. You don’t have to think about Box<> or Rc<>. The values just work normally.
Swift was designed to replace Objective-C and needed to be able to interface with existing code. So it has made a lot of pragmatic choices that make it a much less “pure” and “minimalist” language. Swift is a pretty big language compared to Rust and has many more features built in. However, Swift is designed with “progressive disclosure” in mind, which means that just as soon as you think you’ve learned the language, a little more of the iceberg pops out of the water.
Here are just some of the language features:
Swift is a far easier language to get started and productive with. The syntax is more familiar and a lot more is done for you automatically. But this really just makes Swift a higher-level language and it comes with the same tradeoffs.
By default, a Rust program is much faster than a Swift program. This is because Rust is fast by default, and lets you be slow, while Swift is easy by default and lets you be fast.
Based on this, I would say both languages have their uses. Rust is better for systems and embedded programming. It’s better for writing compilers and browser engines (Servo) and it’s better for writing entire operating systems.
Swift is better for writing UI and servers and some parts of compilers and operating systems. Over time I expect to see the overlap get bigger.
There is a perception that Swift is only a good language for Apple platforms. While this was once true, it is no longer the case: Swift is increasingly becoming a good cross-platform language. Hell, Swift even compiles to WASM, and the forks made by the SwiftWasm team were merged back into Swift core earlier this year.
Swift on Windows is being used by The Browser Company to share code and bring the Arc browser to Windows. Swift on Linux has long been supported by Apple themselves in order to push “Swift on Server”. Apple is directly sponsoring the Swift on Server conference.
This year Embedded Swift was also announced which is already being used on small devices like the Panic Playdate.
Swift website has been highlighting many of these projects:
The Browser Company says that interoperability is Swift’s superpower.
And the Swift project has been trying to make working with Swift a great experience outside of Xcode, with projects like an open-source LSP and funding for the VS Code extension.
Compile times are (like Rust) quite bad. There is some amount of feature creep and the language is larger than it should be. Not all syntax feels familiar. The package ecosystem isn’t nearly as rich as Rust.
But “Swift is only for Apple platforms” is an old and tired cliché at this point. Swift is already a cross-platform, ABI-stable language with no GC, automatic reference counting, and the option to opt into ownership for even more performance. Swift packages increasingly work on Linux. Foundation was rewritten in Swift and open-sourced. It’s still early days for Swift as a good, more convenient Rust alternative for cross-platform development, but it is here now. It’s no longer a future to wait for.
...
A leader in the global fight against smallpox and a champion of vaccine science, William Foege died last Saturday.

The late physicians and health administrators William Foege (middle), J. Donald Millar (left) and J. Michael Lane (right), all of whom served in the Global Smallpox Eradication Program, in 1980.

William Foege, a leader in the global fight to eliminate smallpox, has died. Foege passed away on Saturday at the age of 89, according to the Task Force for Global Health, a public health organization he co-founded.

Foege headed the U.S. Centers for Disease Control and Prevention’s Smallpox Eradication Program in the 1970s. Before the disease was officially eradicated in 1980, it killed around one in three people who were infected. According to the CDC, there have been no new smallpox cases since 1977.

“If you look at the simple metric of who has saved the most lives, he is right up there with the pantheon,” former CDC director Tom Frieden told the Associated Press. “Smallpox eradication has prevented hundreds of millions of deaths.”

Foege went on to lead the CDC and served as a senior medical adviser and senior fellow at the Bill & Melinda Gates Foundation. In 2012 then-president Barack Obama awarded him the Presidential Medal of Freedom.

Foege was a vocal proponent of vaccines for public health, writing with epidemiologist Larry Brilliant in Scientific American in 2013 that the effort to eliminate polio “has never been closer” to success. “By working together,” they wrote, “we will soon relegate polio—alongside smallpox—to the history books.” Polio remains a “candidate for eradication,” according to the World Health Assembly.

And in 2025 Foege, alongside several other former CDC directors, spoke out against the policies of the current secretary of health and human services, Robert F. Kennedy, Jr. In a New York Times op-ed, they wrote that the top health official’s tenure was “unlike anything we had ever seen at the agency.”

In a statement, Task Force for Global Health CEO Patrick O’Carroll remembered Foege as an “inspirational” figure, both for early-career public health workers and veterans of the field. “Whenever he spoke, his vision and compassion would reawaken the optimism that prompted us to choose this field, and re-energize our efforts to make this world a better place,” O’Carroll said.
...
Read the original on www.scientificamerican.com »
For the last few months, I have been developing a new reporting application. Early on, I decided to add a --dry-run option to the run command. This turned out to be quite useful — I have used it many times a day while developing and testing the application.
The application will generate a set of reports every weekday. It has a loop that checks periodically if it is time to generate new reports. If so, it will read data from a database, apply some logic to create the reports, zip the reports, upload them to an sftp server, check for error responses on the sftp server, parse the error responses, and send out notification mails. The files (the generated reports, and the downloaded feedback files) are moved to different directories depending on the step in the process. A simple and straightforward application.
Early in the development process, when testing the incomplete application, I remembered that Subversion (the version control system after CVS, before Git) had a --dry-run option. Other Linux commands have this option too. If a command is run with the --dry-run argument, it prints what would happen, but makes no changes. This lets the user preview what will happen if the command is run without the --dry-run argument.
I remembered how helpful that was, so I decided to add it to my command as well. When I run the command with --dry-run, it prints out the steps that will be taken in each phase: which reports will be generated (and which will not be), which files will be zipped and moved, which files will be uploaded to the sftp server, and which files will be downloaded from it (it logs on and lists the files).
Looking back at the project, I realized that I ended up using the --dry-run option pretty much every day.
I am surprised how useful I found it to be. I often used it as a check before getting started. Since I know --dry-run will not change anything, it is safe to run without thinking. I can immediately see that everything is accessible, that the configuration is correct, and that the state is as expected. It is a quick and easy sanity check.
I also used it quite a bit when testing the complete system. For example, if I changed a date in the report state file (the date for the last successful report of a given type), I could immediately see from the output whether it would now be generated or not. Without --dry-run, the actual report would also be generated, which takes some time. So I can test the behavior, and receive very quick feedback.
The downside is that the dryRun flag pollutes the code a bit. In all the major phases, I need to check if the flag is set and, if so, only print the action that would be taken without actually performing it. However, this doesn’t go very deep. For example, none of the code that actually generates the report needs to check it. I only need to check whether that code should be invoked in the first place.
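The pattern can be sketched like this (an illustrative example in Rust; all names are invented, and the real application may be structured differently):

```rust
// Sketch of the pattern: the flag is checked once per phase, at the
// point where the phase would be invoked -- the code inside the phase
// never needs to know about it. Names are invented for illustration.
fn generate_reports(due: &[&str], dry_run: bool) -> Vec<String> {
    let mut log = Vec::new();
    for report in due {
        if dry_run {
            // Announce the action only; the (slow) generator is never invoked.
            log.push(format!("[dry-run] would generate {report}"));
        } else {
            // ... actual report generation would run here ...
            log.push(format!("generated {report}"));
        }
    }
    log
}
```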
The type of application I have been writing is ideal for --dry-run. It is invoked by a command, and it may create some changes, for example generating new reports. More reactive applications (that wait for messages before acting) don’t seem to be a good fit.
I added --dry-run on a whim early on in the project. I was surprised at how useful I found it to be. Adding it early was also good, since I got the benefit of it while developing more functionality.
The --dry-run flag is not for every situation, but when it fits, it can be quite useful.
...
Read the original on henrikwarne.com »
You have limited time, but get more time for each animal listed. When the timer runs out, that’s game over.
No overlapping terms.
For example, if you list “bear” and “polar bear”, you get no point (or time bonus) for the latter. But you can still get a point for a second kind of bear. Order doesn’t matter.
...
Read the original on rose.systems »
* Watch this repo if you need to be notified when there’s an update
This repository is my notes/blog for the cpython source code
Trying to illustrate every detail of the cpython implementation
# based on version 3.8.0a0
cd cpython
git reset --hard ab54b9a130c88f708077c2ef6c4963b632c132b3
The following contents are suitable for those who have Python programming experience and are interested in the internals of the Python interpreter; those who need beginner or advanced material should refer to awesome-python-books
I will only recommend what I’ve read
All kinds of contributions are welcome
* submit a pull request
* if you want to share any knowledge you know
...
Read the original on github.com »
Inside Nvidia’s 10-year effort to make the Shield TV the most updated Android device ever
“Selfishly, a little bit, we built Shield for ourselves.”
The Shield TV has that classic Nvidia aesthetic.
It took Android devicemakers a very long time to commit to long-term update support. Samsung and Google have only recently decided to offer seven years of updates for their flagship Android devices, but a decade ago, you were lucky to get more than one or two updates on even the most expensive Android phones and tablets. How is it, then, that an Android-powered set-top box from 2015 is still going strong?
Nvidia released the first Shield Android TV in 2015, and according to the company’s senior VP of hardware engineering, Andrew Bell, supporting these devices has been a labor of love. And the team at Nvidia still loves the Shield. Bell assures us that Nvidia has never given up, even when it looked like support for the Shield was waning, and it doesn’t plan to stop any time soon.
Gaming has been central to Nvidia since its start, and that focus gave rise to the Shield. “Pretty much everybody who worked at Nvidia in the early days really wanted to make a game console,” said Bell, who has worked at the company for 25 years.
However, Nvidia didn’t have what it needed back then. Before gaming, crypto, and AI turned it into the multi-trillion-dollar powerhouse it is today, Nvidia had a startup mentality and the budget to match. When Shield devices began percolating in the company’s labs, it was seen as an important way to gain experience with “full-stack” systems and all the complications that arise when managing them.
“To build a game console was pretty complicated because, of course, you have to have a GPU, which we know how to make,” Bell explained. “But in addition to that, you need a CPU, an OS, games, and you need a UI.”
Through acquisitions and partnerships, the pieces of Nvidia’s fabled game console slowly fell into place. The purchase of PortalPlayer in 2007 brought the CPU technology that would become the Tegra Arm chips, and the company’s surging success in GPUs gave it the partnerships it needed to get games. But the UI was still missing—that didn’t change until Google expanded Android to the TV in 2014. The company’s first Android mobile efforts were already out there in the form of the Shield Portable and Shield Tablet, but the TV-connected box is what Nvidia really wanted.
“Selfishly, a little bit, we built Shield for ourselves,” Bell told Ars Technica. “We actually wanted a really good TV streamer that was high-quality and high-performance, and not necessarily in the Apple ecosystem. We built some prototypes, and we got so excited about it. [CEO Jensen Huang] was like, ‘Why don’t we bring it out and sell it to people?’”
The first Shield box in 2015 had a heavy gaming focus, with a raft of both local and cloud-based (GeForce Now) games. The base model included only a game controller, with the remote control sold separately. According to Bell, Nvidia eventually recognized that the gaming angle wasn’t as popular as it had hoped. The 2017 and 2019 Shield refreshes were more focused on the streaming experience.
“Eventually, we kind of said, ‘Maybe the soul is that it’s a streamer for gamers,’” said Bell. “We understand gamers from GeForce, and we understand they care about quality and performance. A lot of these third-party devices like tablets, they’re going cheap. Set-top boxes, they’re going cheap. But we were the only company that was like, ‘Let’s go after people who really want a premium experience.’”
And premium it is, offering audio and video support far beyond what you find in other TV boxes, even years after release. The Shield TV started at $200 in 2015, and that’s still what you’ll pay for the Pro model to this day. However, Bell notes that passion was the driving force behind bringing the Shield TV to market. The team didn’t know if it would make money, and indeed, the company lost money on every unit sold during the original production run. The 2017 and 2019 refreshes were about addressing that while also emphasizing the Shield’s streaming media chops.
Update support for Internet-connected devices is vital—whether they’re phones, tablets, set-top boxes, or something else. When updates cease, gadgets fall out of sync with platform features, leading to new bugs (which will never be fixed) and security holes that can affect safety and functionality. The support guarantee attached to a device is basically its expiration date.
“We were all frustrated as buyers of phones and tablets that you buy a device, you get one or two updates, and that’s it!” said Bell. “Early on when we were building Shield TV, we decided we were going to make it for a long time. Jensen and I had a discussion, and it was, ‘How long do we want to support this thing?’ And Jensen said, ‘For as long as we shall live.’”
In 2025, Nvidia wrapped up its tenth year of supporting the Shield platform. Even those original 2015 boxes are still being maintained with bug fixes and the occasional new feature. They’ve gone all the way from Android 5.0 to Android 11 in that time. No Android device—not a single phone, tablet, watch, or streaming box—has gotten anywhere close to this level of support.
The best example of Nvidia’s passion for support is, believe it or not, a two-year gap in updates.
Across the dozens of Shield TV updates, there have been a few times when fans feared Nvidia was done with the box. Most notably, there were no public updates for the Shield TV in 2023 or 2024, but over-the-air updates resumed in 2025.
“On the outside, it looked like we went quiet, but it’s actually one of our bigger development efforts,” explained Bell.
The origins of that effort, surprisingly, stretch back years to the launch of the Nintendo Switch. The Shield runs Nvidia’s custom Tegra X1 Arm chip, the same processor Nintendo chose to power the original Switch in 2017. Soon after release, modders discovered a chip flaw that could bypass Nintendo’s security measures, enabling homebrew (and piracy). An updated Tegra X1 chip (also used in the 2019 Shield refresh) fixed that for Nintendo, but Nvidia’s 2015 and 2017 Shield boxes ran the same exploitable version.
Initially, Nvidia was able to roll out periodic patches to protect against the vulnerability, but by 2023, the Shield needed something more. Around that time, owners of 2015 and 2017 Shield boxes had noticed that DRM-protected 4K content often failed to play—that was thanks to the same bug that affected the Switch years earlier.
With a newer, non-vulnerable product on the market, many companies might have just accepted that the older product would lose functionality, but Nvidia’s passion for Shield remained. Bell consulted Huang, whom he calls Shield customer No. 1, about the meaning of his “as long as we shall live” pledge, and the team was approved to spend whatever time was needed to fix the vulnerability on the first two generations of Shield TV.
According to Bell, it took about 18 months to get there, requiring the creation of an entirely new security stack. He explains that Android updates aren't actually that much work compared to DRM security, and some of Nvidia's partners weren't that keen on re-certifying older products. The Shield team fought for it because they felt, as they had throughout the product's run, that they'd made a promise to customers who expected the box to have certain features.
In February 2025, Nvidia released Shield Patch 9.2, the first wide release in two years. The changelog included an unassuming line reading, “Added security enhancement for 4K DRM playback.” That was the Tegra X1 bug finally being laid to rest on the 2015 and 2017 Shield boxes.
The refreshed Tegra X1+ in the 2019 Shield TV spared it from those DRM issues, and Nvidia still hasn’t stopped working on that chip. The Tegra X1 was blazing fast in 2015, and it’s still quite capable compared to your average smart TV today. The chip has actually outlasted several of the components needed to manufacture it. For example, when the Tegra chip’s memory was phased out, the team immediately began work on qualifying a new memory supplier. To this day, Nvidia is still iterating on the Tegra X1 platform, supporting the Shield’s continued updates.
“If operations calls me and says they just ran out of this component, I’ve got engineers on it tonight looking for a new component,” Bell said.
Nvidia has put its money where its mouth is by supporting all versions of the Shield for so long. But it’s been over six years since we’ve seen new hardware. Surely the Shield has to be running out of steam, right?
Not so, says Bell. Nvidia still manufactures the 2019 Shield because people are still buying it. In fact, the sales volume has remained basically unchanged for the past 10 years. The Shield Pro is a spendy set-top box at $200, so Nvidia has experimented with pricing and promotion with little effect. The 2019 non-Pro Shield was one such effort. The base model was originally priced at $99, but the MSRP eventually landed at $150.
“No matter how much we dropped the price or how much we market or don’t market it, the same number of people come out of the woodwork every week to buy Shield,” Bell explained.
Nvidia had no choice but to put that giant Netflix button on the remote.
That kind of consistency isn’t lost on Nvidia. Bell says the company has no plans to stop production or updates for the Shield “any time soon.” It’s also still possible that Nvidia could release new Shield TV hardware in the future. Nvidia’s Shield devices came about as a result of engineers tinkering with new concepts in a lab setting, but most of those experiments never see the light of day. For example, Bell notes that the team produced several updated versions of the Shield Tablet and Shield Portable (some of which you can find floating around on eBay) that never got a retail release, and they continue to work on Shield TV.
“We’re always playing in the labs, trying to discover new things,” said Bell. “We’ve played with new concepts for Shield and we’ll continue to play, and if we find something we’re super-excited about, we’ll probably make a go of it.”
But what would that look like? Video technology has advanced since 2019, leaving the Shield unable to take full advantage of some newer formats. First up would be support for VP9 Profile 2 hardware decoding, which enables HDR video on YouTube. Bell says a refreshed Shield would also prioritize formats like AV1 and the HDR 10+ standard, as well as support for newer Dolby Vision profiles for people with backed-up media.
And then there’s the enormous, easy-to-press-by-accident Netflix button on the remote. While adding new video technologies would be job one, fixing the Netflix button is No. 2 for a theoretical new Shield. According to Bell, Nvidia doesn’t receive any money from Netflix for the giant button on its remote. It’s actually there as a requirement of Netflix’s certification program, which was “very strong” in 2019. In a refresh, he thinks Nvidia could get away with a smaller “N” button. We can only hope.
But does Bell think he’ll get a chance to build that new Shield TV, shrunken Netflix button and all? He stopped short of predicting the future, but there’s definitely interest.
“We talk about it all the time—I’d love to,” he said.
Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
...
Read the original on arstechnica.com »
Like many organizations, Wiki Education has grappled with generative AI, its impacts, opportunities, and threats, for several years. As an organization that runs large-scale programs to bring new editors to Wikipedia (we're responsible for about 19% of all new active editors on English Wikipedia), we have a deep understanding of the challenges facing new content contributors to Wikipedia — and how to support them to successfully edit. As many people have begun using generative AI chatbots like ChatGPT, Gemini, or Claude in their daily lives, it's unsurprising that people will also consider using them to help draft contributions to Wikipedia. Since Wiki Education's programs provide a cohort of content contributors whose work we can evaluate, we've looked into how our participants are using GenAI tools.
We are choosing to share our perspective through this blog post because we hope it will help inform discussions of GenAI-created content on Wikipedia. In an open environment like the Wikimedia movement, it’s important to share what you’ve learned. In this case, we believe our learnings can help Wikipedia editors who are trying to protect the integrity of content on the encyclopedia, Wikipedians who may be interested in using generative AI tools themselves, other program leaders globally who are trying to onboard new contributors who may be interested in using these tools, and the Wikimedia Foundation, whose product and technology team builds software to help support the development of high-quality content on Wikipedia.
Our fundamental conclusion about generative AI is: Wikipedia editors should never copy and paste the output from generative AI chatbots like ChatGPT into Wikipedia articles.
Let me explain more.
Since the launch of ChatGPT in November 2022, we’ve been paying close attention to GenAI-created content, and how it relates to Wikipedia. We’ve spot-checked work of new editors from our programs, primarily focusing on citations to ensure they were real and not hallucinated. We experimented with tools ourselves, we led video sessions about GenAI for our program participants, and we closely tracked on-wiki policy discussions around GenAI. Currently, English Wikipedia prohibits the use of generative AI to create images or in talk page discussions, and recently adopted a guideline against using large language models to generate new articles.
As our Wiki Experts Brianda Felix and Ian Ramjohn worked with program participants throughout the first half of 2025, they found more and more text bearing the hallmarks of generative AI in article content, like bolded words or bulleted lists in odd places. But the use of generative AI wasn’t necessarily problematic, as long as the content was accurate. Wikipedia’s open editing process encourages stylistic revisions to factual text to better fit Wikipedia’s style.
When we dug into this content, however, much of it failed verification. That finding led us to invest significant staff time into cleaning up these articles — far more than these editors had likely spent creating them. Wiki Education's core mission is to improve Wikipedia, and when we discover our program has unknowingly contributed to misinformation on Wikipedia, we are committed to cleaning it up. In the clean-up process, Wiki Education staff moved more recent work back to sandboxes, we stub-ified articles that passed notability but mostly failed verification, and we PRODed some articles that, in our judgment, weren't salvageable. All these are ways of addressing Wikipedia articles with flaws in their content. (While there are many grumblings about Wikipedia's deletion processes, we found several of the articles we PRODed due to their fully hallucinated GenAI content were then de-PRODed by other editors, showing the diversity of opinion about generative AI among the Wikipedia community.)
Given what we found through our investigation into the work from prior terms, and given the increasing usage of generative AI, we wanted to proactively address generative AI usage within our programs. Thanks to in-kind support from our friends at Pangram, we began running our participants’ Wikipedia edits, including in their sandboxes, through Pangram nearly in real time. This is possible because of the Dashboard course management platform Sage built, which tracks edits and generates tickets for our Wiki Experts based on on-wiki edits.
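As a rough illustration of what monitoring edits in near real time involves, the sketch below polls a participant's contributions through the MediaWiki API (a real, public API) and picks out the revisions worth scanning. The function names and the namespace-routing rule are hypothetical simplifications, not the actual Dashboard or Pangram interfaces.

```python
from urllib.parse import urlencode

def build_contribs_url(username: str, limit: int = 20) -> str:
    """Build a MediaWiki API query for a user's recent contributions."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": username,
        "uclimit": limit,
        "ucprop": "ids|title|ns|timestamp",
        "format": "json",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

def should_check(revision: dict) -> bool:
    """Flag revisions worth sending to an AI detector: mainspace edits
    (namespace 0) and sandbox drafts (User: namespace, 2)."""
    return revision.get("ns") in (0, 2)

# Simplified examples of what the API returns for each revision:
revs = [
    {"revid": 1, "ns": 0, "title": "Example article"},     # live article
    {"revid": 2, "ns": 2, "title": "User:Alice/sandbox"},  # sandbox draft
    {"revid": 3, "ns": 3, "title": "User talk:Alice"},     # talk page, skip
]
to_scan = [r["revid"] for r in revs if should_check(r)]
print(to_scan)  # → [1, 2]
```

In a real pipeline, each flagged revision's text would then be fetched, preprocessed, and posted to the detection service, with a ticket opened for staff when the score crosses a threshold.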
We created a brand-new training module on Using generative AI tools with Wikipedia. This training emphasizes where participants could use generative AI tools in their work, and where they should not. The core message of these trainings is, do not copy and paste anything from a GenAI chatbot into Wikipedia.
We crafted a variety of automated emails to participants who Pangram detected were adding text created by generative AI chatbots. Sage also recorded some videos, since many young people are accustomed to learning via video rather than reading text. We also provided opportunities for engagement and conversation with program participants.
In total, we had 1,406 AI edit alerts in the second half of 2025, although only 314 of these (or 22%) were in the article namespace on Wikipedia (meaning edits to live articles). In most cases, Pangram detected participants using GenAI in their sandboxes during early exercises, when we ask them to do things like choose an article, evaluate an article, create a bibliography, and outline their contribution.
Pangram struggled with false positives in a few sandbox scenarios:
* Bibliographies, which are often a combination of human-written prose (describing a source and its relevance) and non-prose text (the citation for a source, in some standard format)
* Outlines with a high portion of non-prose content (such as bullet lists, section headers, text fragments, and so on)
We also had a handful of cases where sandboxes were flagged for AI after a participant copied an AI-written section from an existing article to use as a starting point to edit or to expand. (This isn’t a flaw of Pangram, but a reminder of how much AI-generated content editors outside our programs are adding to Wikipedia!)
In broad strokes, we found that Pangram is great at analyzing plain prose — the kind of sentences and paragraphs you’ll find in the body of a Wikipedia article — but sometimes it gets tripped up by formatting, markup, and non-prose text. Early on, we disabled alert emails for participants’ bibliography and outline exercises, and throughout the end of 2025, we refined the Dashboard’s preprocessing steps to extract the prose portions of revisions and convert them to plain text before sending them to Pangram.
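A minimal sketch of the kind of preprocessing described above — extracting prose and converting it to plain text before detection — might look like this. The regexes and filtering rules are illustrative approximations; the real Dashboard preprocessing is more thorough.

```python
import re

def extract_prose(wikitext: str) -> str:
    """Reduce a revision to plain prose before sending it to an
    AI detector, stripping the markup and non-prose lines that
    tend to trigger false positives."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", wikitext)               # templates
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.S)  # citations
    text = re.sub(r"<ref[^>]*/>", "", text)                      # named refs
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)  # links
    text = re.sub(r"'{2,}", "", text)                            # bold/italic
    lines = [
        ln for ln in text.splitlines()
        # Drop bullets, numbered items, headers, and table rows —
        # the non-prose content that trips up detection.
        if ln.strip() and not ln.lstrip().startswith(("*", "#", "=", "|", "{"))
    ]
    return "\n".join(lines)

sample = (
    "== History ==\n"
    "The '''village''' was founded in [[1850]].<ref>Smith 2020</ref>\n"
    "* a bullet that would confuse the detector\n"
)
print(extract_prose(sample))  # → The village was founded in 1850.
```

Only the surviving prose sentences would be submitted for detection, which matches the observation that Pangram excels at plain prose but stumbles on markup.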
Many participants also reported “just using Grammarly to copy edit.” In our experience, however, the smallest fixes done with Grammarly never trigger Pangram’s detection, but if you use its more advanced content creation features, the resulting text registers as being AI generated.
But overwhelmingly, we were pleased with Pangram’s results. Our early interventions with participants who were flagged as using generative AI for exercises that would not enter mainspace seemed to head off their future use of generative AI. We supported 6,357 new editors in fall 2025, and only 217 of them (or 3%) had multiple AI alerts. Only 5% of the participants we supported had mainspace AI alerts. That means thousands of participants successfully edited Wikipedia without using generative AI to draft their content.
For those who did add GenAI-drafted text, we ensured that the content was reverted. In fact, participants sometimes self-reverted once they received our email letting them know Pangram had detected their contributions as being AI created. Instructors also jumped in to revert, as did some Wikipedians who found the content on their own. Our ticketing system also alerted our Wiki Expert staff, who reverted the text as soon as they could.
While some instructors in our Wikipedia Student Program had concerns about AI detection, we had a lot of success focusing the conversation on the concept of verifiability. If the instructor as subject matter expert could attest the information was accurate, and they could find the specific facts in the sources they were cited to, we permitted text to come back to Wikipedia. However, the process of attempting to verify student-created work (which in many cases the students swore they’d written themselves) led many instructors to realize what we had found in our own assessment: In their current states, GenAI-powered chatbots cannot write factually accurate text for Wikipedia that is verifiable.
We believe our Pangram-based detection interventions led to fewer participants adding GenAI-created content to Wikipedia. Following the trend lines, we had anticipated that about 25% of participants would add GenAI content to Wikipedia articles; instead, only 5% did, and our staff were able to revert all problematic content.
I’m deeply appreciative of everyone who made this success possible this term: Participants who followed our recommendations, Pangram who gave us access to their detection service, Wiki Education staff who did the heavy lift of working with all of the positive detections, and the Wikipedia community, some of whom got to the problematic work from our program participants before we did.
So far, I’ve focused on the problems with generative AI-created content. But that’s not all these tools can do, and we did find some ways they were useful. Our training module encourages editors — if their institution’s policies permit it — to consider using generative AI tools for a limited set of research-stage tasks.
To evaluate the success of these use scenarios, we worked directly with 7 of the classes we supported in fall 2025 in our Wikipedia Student Program. We asked students to anonymously fill out a survey every time they used generative AI tools in their Wikipedia work. We asked what tool they used, what prompt they used, how they used the output, and whether they found it helpful. While some students filled the survey out multiple times, others filled it out once. We had 102 responses reporting usage at various stages in the project. Overwhelmingly, 87% of the responses reporting generative AI use said it was helpful for the task. The most popular tool by far was ChatGPT, with Grammarly a distant second and the others in the single digits. Students reported the tools were helpful for:
* Identifying articles to work on that were relevant to the course they were taking
* Highlighting gaps within existing articles, including missing sections or more recent information that was missing
* Finding reliable sources that they hadn’t already located
* Pointing to the database where a certain journal article could be found
* When prompted with the text they had drafted and the checklist of requirements, evaluating the draft against those requirements
* Identifying categories they could add to the article they’d edited
Critically, no participants reported using AI tools to draft text for their assignments. One student reported: “I pasted all of my writing from my sandbox and said ‘Put this in a casual, less academic tone’ … I figured I’d try this but it didn’t sound like what I normally write and I didn’t feel that it captured what I was trying to get across so I scrapped it.”
While this was an informal research project, we received enough positive feedback from it to believe using ChatGPT and other tools can be helpful in the research stage if editors then critically evaluate the output they get, instead of blindly accepting it. Even participants who found AI helpful reported that they didn’t use everything it gave them, as some was irrelevant. Undoubtedly, it’s crucial to maintain the human thinking component throughout the process.
My conclusion is that, at least as of now, generative AI-powered chatbots like ChatGPT should never be used to generate text for Wikipedia; too much of it will simply be unverifiable. Our staff would spend far more time attempting to verify facts in AI-generated articles than if we’d simply done the research and writing ourselves.
That being said, AI tools can be helpful in the research process, especially to help identify content gaps or sources, when used in conjunction with a human brain that carefully evaluates the information. Editors should never simply take a chatbot’s suggestion; instead, if they want to use a chatbot, they should use it as a brainstorm partner to help them think through their plans for an article.
To date, Wiki Education’s interventions as our program participants edit Wikipedia show promise for keeping unverifiable, GenAI-drafted content off Wikipedia. Based on our experiences in the fall term, we have high confidence in Pangram as a detector of AI content, at least in Wikipedia articles. We will continue our current strategy in 2026 (with more small adjustments to make the system as reliable as we can).
More generally, we found participants had less AI literacy than popular discourse might suggest. Because of this, we created a supplemental large language models training that we’ve offered as an optional module for all participants. Many participants indicated that they found our guidance regarding AI to be welcome and helpful as they attempt to navigate the new complexities created by AI tools.
We are also looking forward to more research on our work. A team of researchers — Francesco Salvi and Manoel Horta Ribeiro at Princeton University, Robert Cummings at the University of Mississippi, and Wiki Education’s Sage Ross — have been looking into Wiki Education’s Wikipedia Student Program editors’ use of generative AI over time. Preliminary results have backed up our anecdotal understanding, while also revealing nuances of how text produced by our students over time has changed with the introduction of GenAI chatbots. They also confirmed our belief in Pangram: After running student edits from 2015 up until the launch of ChatGPT through Pangram, without any date information involved, the team found Pangram correctly identified that it was all 100% human written. This research will continue into the spring, as the team explores ways of unpacking the effects of AI on different aspects of article quality.
And, of course, generative AI is a rapidly changing field. Just because these were our findings in 2025 doesn’t mean they will hold true throughout 2026. Wiki Education remains committed to monitoring, evaluating, iterating, and adapting as needed. Fundamentally, we are committed to ensuring we add high quality content to Wikipedia through our programs. And when we miss the mark, we are committed to cleaning up any damage.
While I’ve focused this post on what Wiki Education has learned from working with our program participants, the lessons are extendable to others who are editing Wikipedia. Already, 10% of adults worldwide are using ChatGPT, and drafting text is one of the top use cases. As generative AI usage proliferates, its usage by well-meaning people to draft content for Wikipedia will as well. It’s unlikely that longtime, daily Wikipedia editors would add content copied and pasted from a GenAI chatbot without verifying all the information is in the sources it cites. But many casual Wikipedia contributors or new editors may unknowingly add bad content to Wikipedia when using a chatbot. After all, it provides what looks like accurate facts, cited to what are often real, relevant, reliable sources. Most edits we ended up reverting seemed acceptable with a cursory review; it was only after we attempted to verify the information that we understood the problems.
Because this unverifiable content often seems okay at first pass, it’s critical for Wikipedia editors to be equipped with tools like Pangram to more accurately detect when they should take a closer look at edits. Automating review of text for generative AI usage — as Wikipedians have done for copyright-violating text for years — would help protect the integrity of Wikipedia content. In Wiki Education’s experience, Pangram is a tool that could provide accurate assessments of text for editors, and we would love to see a larger-scale version of the tool we built to evaluate edits from our programs deployed across all edits on Wikipedia. Currently, editors can add a warning banner highlighting that text might be LLM-generated, but this is based solely on the judgment of the person adding the banner. Our experience suggests that judging by tone alone isn’t enough; tools like Pangram can flag highly problematic information that should be reverted immediately but that might sound okay.
We’ve also found success in the training modules and support we’ve created for our program participants. Providing clear guidance — and the reason why that guidance exists — has been key in helping us head off poor usage of generative AI text. We encourage Wikipedians to consider revising guidance to new contributors in the welcome messages to emphasize the pitfalls of adding GenAI-drafted text. Software aimed at new contributors created by the Wikimedia Foundation should center starting with a list of sources and drawing information from them, using human intellect, instead of generative AI, to summarize information. Providing guidance upfront can help well-meaning contributors steer clear of bad GenAI-created text.
Wikipedia recently celebrated its 25th birthday. For it to survive into the future, it will need to adapt as technology around it changes. Wikipedia would be nothing without its corps of volunteer editors. The consensus-based decision-making model of Wikipedia means change doesn’t come quickly, but we hope this deep-dive will help spark a conversation about changes that are needed to protect Wikipedia into the future.
...
Read the original on wikiedu.org »