10 interesting stories served every morning and every evening.
Opinion I’m an eighth-generation American, and let me tell you, I wouldn’t trust my data, secrets, or services to a US company these days for love or money. Under our current government, we’re simply not trustworthy.
In the Trump‑redux era of 2026, European enterprises are finally taking data seriously, and that means packing up from Redmond-by-Seattle and moving their most sensitive workloads home. This isn’t just compliance theater; it’s a straight‑up national economic security play.
Europe’s digital sovereignty paranoia, long waved off as regulatory chatter, is now feeding directly into procurement decisions. Gartner told The Reg last year that IT spending in Europe is set to grow by 11 percent in 2026, hitting $1.4 trillion, with a big chunk rolling into “sovereign cloud” options and on‑prem/edge architectures.
The kicker? Fully 61 percent of European CIOs and tech leaders say they want to increase their use of local cloud providers. More than half say geopolitics will prevent them from leaning further on US‑based hyperscalers.
The American hypercloud vendors have figured this out. AWS recently made its European Sovereign Cloud available. This AWS cloud, Amazon claims, is “entirely located within the EU, and physically and logically separate from other AWS Regions.” On top of that, EU residents will “independently operate it” and “be backed by strong technical controls, sovereign assurances, and legal protections designed to meet the needs of European governments and enterprises for sensitive data.”
Many EU-based companies aren’t pleased with this Euro-washing of American hypercloud services. The Cloud Infrastructure Service Providers in Europe (CISPE) trade association accuses the EU Cloud Sovereignty Framework of being set up to favor the incumbent (American) hypercloud providers.
You don’t need a DEA warrant or a Justice Department subpoena to see the trend: Europe’s 90‑plus‑percent dependency on US cloud infrastructure, as former European Commission advisor Cristina Caffarra put it, is a single‑shock‑event security nightmare waiting to rupture the EU’s digital stability.
Seriously. What will you do if Washington decides to unplug you? Say Trump gets up on the wrong side of the bed and decides to invade Greenland. There goes NATO, and in all the saber-rattling leading up to the 10th Mountain Division being shipped to Nuuk, he orders American companies to cut their services to all EU countries and the UK.
With the way things are going, they’re not going to say no. I mean, CEOs Tim Cook of Apple, Eric Yuan of Zoom, Lisa Su of AMD, and — pay attention — Amazon’s Andy Jassy all went obediently to watch a feature-length White House screening of Melania, the universally-loathed, 104‑minute Amazon‑produced documentary about First Lady Melania Trump.
Sure, that’s a silly example, but for American companies to do business today, they’re kowtowing to Trump. Or, take a far more serious example, when Minnesota company CEOs called for “de-escalation” in the state, there was not one word about ICE or the government’s role in the bloodshed. It was the corporate equivalent of the mealy-mouthed “thoughts and prayers” American right-wingers always say after a US school shooting.
Some companies have already figured out which way the wind is blowing. Airbus, the European aerospace titan, has put out a €50 million, decade‑long tender to migrate its mission‑critical applications to a “sovereign European cloud.” Airbus wants its whole stack — data at rest, data in transit, logging, IAM, and security‑monitoring infrastructure — all rooted in EU law and overseen by EU operators. As Catherine Jestin, Airbus’s executive vice president of digital, told The Register: “We want to ensure this information remains under European control.”
Who can blame them? Thanks to the American CLOUD Act and related US surveillance statutes, US‑headquartered providers must hand over European data regardless of where the bytes sit. Exhibit A is that Microsoft has already conceded that it cannot guarantee data independence from US law enforcement. Airbus is betting that “data residency on paper” from AWS‑styled “EU sections” is not enough. Real sovereignty demands EU‑owned and run operations with full contractual and legal firewalls. Sure, your data may live in Frankfurt, but your fate still rests in Seattle, Redmond, or Mountain View if an American company owns your cloud provider.
Besides, do you really want some Trump apparatchik getting their hands on your data? I mean, this is a government where Madhu Gottumukkala, the acting director of the US Cybersecurity and Infrastructure Security Agency, uploaded sensitive data into ChatGPT!
In response, Brussels is pushing an open source-led exit from hyperscaler lock-in. Ministries are standardizing on Nextcloud-style collaboration stacks instead of Microsoft 365, and Euro-native clouds are being funded via the European Cloud Alliance. Some countries, like France, are already shoving Zoom, Teams, and other US videoconferencing platforms out the door in favor of a local service.
If you’re running an EU‑based firm in 2026, the takeaway isn’t that AWS‑in‑Frankfurt is evil; it’s that for certain workloads, especially national security, industrial IP, or high‑profile consumer data franchises, EU‑native cloud and services are no longer a nice‑to‑have but a business continuity plan requirement.
It’s time to get serious about digital sovereignty. The clock is ticking, and there’s no telling when Trump will go off. ®
...
Read the original on www.theregister.com »
In iOS 26.3, Apple introduced a new privacy feature that limits the "precise location" data made available to cellular networks via cell towers. The feature is only available on devices with Apple's in-house modem, introduced in 2025. The announcement says:
Cellular networks can determine your location based on which cell towers your device connects to.
This is well-known. I have served on a jury where the prosecution obtained location data from cell towers. Since cell towers are sparse (especially before 5G), the accuracy is in the range of tens to hundreds of metres.
But this is not the whole truth, because cellular standards have built-in protocols that make your device silently send GNSS (i.e. GPS, GLONASS, Galileo, BeiDou) location to the carrier. This would have the same precision as what you see in your Map apps, in single-digit metres.
In 2G and 3G this is called Radio Resources LCS Protocol (RRLP)
So the network simply asks “tell me your GPS coordinates if you know them” and the phone will respond.
In 4G and 5G this is called LTE Positioning Protocol (LPP)
RRLP, RRC, and LPP are natively control-plane positioning protocols. This means that they are transported in the inner workings of cellular networks and are practically invisible to end users.
It’s worth noting that GNSS location is never meant to leave your device. GNSS coordinates are calculated entirely passively, your device doesn’t need to send a single bit of information. Using GNSS is like finding out where you are by reading a road sign: you don’t have to tell anyone else you read a road sign, anyone can read a road sign, and the people who put up road signs don’t know who read which road sign when.
These capabilities are not secrets but somehow they have mostly slid under the radar of the public consciousness. They have been used in the wild for a long time, such as by the DEA in the US in 2006:
[T]he DEA agents procured a court order (but not a search warrant) to obtain GPS coordinates from the courier’s phone via a ping, or signal requesting those coordinates, sent by the phone company to the phone.
And by Shin Bet in Israel, which tracks everyone everywhere all the time:
The GSS Tool was based on centralized cellular tracking operated by Israel’s General Security Services (GSS). The technology was based on a framework that tracks all the cellular phones running in Israel through the cellular companies’ data centers. According to news sources, it routinely collects information from cellular companies and identifies the location of all phones through cellular antenna triangulation and GPS data.
Notably, the Israeli government started using the data for contact tracing in March 2020, only a few weeks after the first Israeli COVID-19 case. An individual would be sent an SMS message informing them of close contact with a COVID patient and required to quarantine. This is good evidence that the location data Israeli carriers are collecting are far more precise than what cell towers alone can achieve.
A major caveat is that I don’t know if RRLP and LPP are the exact techniques, and the only techniques, used by DEA, Shin Bet, and possibly others to collect GNSS data; there could be other protocols or backdoors we’re not privy to.
Another unknown is whether these protocols can be exploited remotely by a foreign carrier. Saudi Arabia has abused SS7 to spy on people in the US, but as far as I know this only locates a device to the coverage area of a Mobile Switching Center, which is less precise than cell tower data. Nonetheless, given the abysmal culture, competency, and integrity in the telecom industry, I would not be shocked if it’s possible for a state actor to obtain the precise GNSS coordinates of anyone on earth using a phone number/IMEI.
Apple made a good step in iOS 26.3 to limit at least one vector of mass surveillance, enabled by having full control of the modem silicon and firmware. They must now allow users to disable GNSS location responses to mobile carriers, and notify the user when such attempts are made to their device.
...
Read the original on an.dywa.ng »
Children under the age of 15 might be deleting their apps if the government’s plans are passed into law.
Prime Minister Petteri Orpo (NCP), the Finnish public health authority THL and two-thirds of Finns are in favour of banning or restricting the use of social media by under-15s.
Lunch break at the Finnish International School of Tampere (FISTA) is a boisterous time.
The yard is filled with children — ranging from grades 1 to 9, or ages 6 to 16 — running around, shouting, playing football, shooting basketball hoops, doing what kids do.
And there’s not a single screen in sight.
FISTA has taken advantage of the law change, brought in last August, which allows schools to restrict or completely ban the use of mobile phones during school hours. At FISTA, this means no phones at all unless specifically used for learning in the classroom.
“We’ve seen that cutting down on the possibilities for students to use their phones, during the breaks for instance, has spurred a lot of creativity,” FISTA vice principal Antti Koivisto notes.
“They’re more active, doing more physical things like playing games outdoors or taking part in the organised break activities or just socialising with each other.”
With the smartphone restriction in schools widely considered to have been a success, Finland’s government has now set its sights on social media platforms.
Prime Minister Petteri Orpo (NCP) said earlier this month that he supports banning the use of social media by children under the age of 15.
“I am deeply concerned about the lack of physical activity among children and young people, and the fact that it is increasing,” Orpo said at the time.
And there is a growing groundswell of support for Finland introducing such a ban. Two-thirds of respondents to a survey published earlier this week said they back a ban on social media for under-15s. This is a near 10 percentage point jump compared to a similar survey carried out just last summer.
The concerns over social media, and in particular the effects on children, have been well-documented — but Finnish researcher Silja Kosola’s recent description of the phenomenon as an “uncontrolled human experiment” has grabbed people’s attention once again.
Kosola, an associate professor in adolescent medicine, has researched the impact of social media on young people, and tells Yle News that the consequences are not very well understood.
“We see a rise in self-harm and especially eating disorders. We see a big separation in the values of young girls and boys, which is also a big problem in society,” Kosola explains.
In the video below, Silja Kosola explains the detrimental effects that excessive use of social media can have on young people.
She further notes that certain aspects of Finnish culture — such as the independence and freedom granted to children from a young age — have unwittingly exacerbated the ill effects of social media use.
“We have given smartphones to younger people more than anywhere else in the world. Just a couple of years ago, about 95 percent of first graders had their own smartphone, and that hasn’t happened anywhere else,” she says.
Since 10 December last year, children under the age of 16 in Australia have been banned from using social media platforms such as TikTok, Snapchat, Facebook, Instagram and YouTube.
Prime Minister Anthony Albanese began drafting the legislation after he received a heartfelt letter from a grieving mother who lost her 12-year-old daughter to suicide.
Although Albanese has never revealed the details of the letter, he told public broadcaster ABC that it was “obvious social media had played a key role” in the young girl’s death.
The legislation aims to shift the burden away from parents and children and onto the social media companies, who face fines of up to 49.5 million Australian dollars (29 million euros) if they consistently fail to keep kids off their platforms.
Clare Armstrong, ABC’s chief digital political correspondent, told Yle News that the initial reaction to the roll-out has been some confusion but no little “relief”.
“The government often talks about this law as being a tool to help parents and other institutions enforce and start conversations about tech and social media in ways that before, they couldn’t,” she says.
Although it is still early days, as the ban has only been in force for about six weeks, Armstrong adds that the early indicators have been good.
ABC journalist Clare Armstrong explains in the video below how children in Australia have been spending their time since the social media ban was introduced.
However, she adds a note of caution to any countries — such as Finland — looking to emulate the Australian model, noting that communication is key.
“Because you can write a very good law, but if the public doesn’t understand it, and if it can’t be enforced at that household level easily, then it’s bound to fail,” Armstrong says.
Seona Candy, an Australian living in Helsinki for over eight years, has been keenly following the events in her homeland since the social media ban came into effect in December.
She has heard anecdotally that if kids find themselves blocked from one platform, they just set up an account on another, “ones that maybe their parents don’t even know exist”.
“And this is then much, much harder, because those platforms don’t have parental controls, so they don’t have those things already designed into them that the more mainstream platforms do,” Candy says.
Because of this issue, and others she has heard about, she warns against Finland introducing like-for-like legislation based around Australia’s “reactive, knee-jerk” law change.
“I think the Finnish government should really invest in digital education, and digital literacy, and teach kids about digital safety. Finland is world-famous for education, and for media literacy. Play to your strengths, right?”
The All Points North podcast asked if Finland should introduce a similar ban on social media as in Australia. You can listen to the episode via this embedded player, on Yle Areena, via Apple, Spotify or wherever you get your podcasts.
...
Rust is one of the most loved languages out there, is fast, and has an amazing community. Rust invented the concept of ownership as a solution to memory management issues without resorting to something slower like Garbage Collection or Reference Counting. But when you don't need to be quite as low level, it gives you utilities such as Rc, Arc, and Cow to do reference counting and "clone-on-write" in your code. And when you need to go lower-level still, you can use the unsafe system and access raw C pointers.
Rust also has a bunch of awesome features from functional languages like tagged enums, match expressions, first class functions and a powerful type system with generics.
Rust has an LLVM-based compiler which lets it compile to native code and WASM.
I’ve also been doing a bit of Swift programming for a couple of years now. And the more I learn Rust, the more I see a reflection of Swift. (I know that Swift stole a lot of ideas from Rust, I’m talking about my own perspective here).
Swift, too, has awesome features from functional languages like tagged enums, match expressions and first-class functions. It too has a very powerful type system with generics.
Swift too gives you complete type-safety without a garbage collector. By default, everything is a value type with “copy-on-write” semantics. But when you need extra speed you can opt into an ownership system and “move” values to avoid copying. And if you need to go even lower level, you can use the unsafe system and access raw C pointers.
Swift has an LLVM-based compiler which lets it compile to native code and WASM.
You're probably feeling like you just read the same paragraphs twice. This is no accident. Swift is extremely similar to Rust and has most of the same feature-set. But there is a very big difference in perspective. If you consider the default memory model, this will start to make a lot of sense.
Rust is a low-level systems language at heart, but it gives you the tools to go higher level. Swift starts at a high level and gives you the ability to go low-level.
The most obvious example of this is the memory management model. Swift uses value types by default with copy-on-write semantics. This is the equivalent of using Cow<> for all your values in Rust. But defaults matter. Rust makes it easy to use "moved" and "borrowed" values but requires extra ceremony for Cow<> values, as you need to "unwrap" them with .to_mut() to actually mutate the value within. Swift makes these copy-on-write values easy to use and instead requires extra ceremony to use borrowing and moving. Rust is faster by default; Swift is simpler and easier by default.
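As a minimal sketch of that Cow ceremony in Rust (the function and suffix here are my own illustration, not from the post):

```rust
use std::borrow::Cow;

// to_mut() clones the borrowed value only when a mutation actually happens.
fn add_suffix<'a>(name: Cow<'a, str>) -> Cow<'a, str> {
    let mut name = name;
    name.to_mut().push_str("_v2");
    name
}
```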
Swift’s syntax is a masterclass in taking awesome functional language concepts and hiding them in C-like syntax to trick the developers into accepting them.
Consider match statements. This is what a match statement looks like in Rust:
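(An illustrative stand-in; the post's exact example may differ.)

```rust
enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
    }
}
```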
Here’s how that same code would be written in Swift:
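(Again, an illustrative equivalent.)

```swift
enum Shape {
    case circle(radius: Double)
    case rectangle(width: Double, height: Double)
}

func area(_ shape: Shape) -> Double {
    switch shape {
    case .circle(let radius):
        return Double.pi * radius * radius
    case .rectangle(let width, let height):
        return width * height
    }
}
```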
Swift doesn’t have a match statement or expression. It has a switch statement that developers are already familiar with. Except this switch statement is actually not a switch statement at all. It’s an expression. It doesn’t “fallthrough”. It does pattern matching. It’s just a match expression with a different name and syntax.
In fact, Swift treats enums as more than just types and lets you put methods directly on them:
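(Something along these lines; a sketch rather than the post's original code.)

```swift
enum Shape {
    case circle(radius: Double)
    case rectangle(width: Double, height: Double)

    func area() -> Double {
        switch self {
        case .circle(let radius):
            return Double.pi * radius * radius
        case .rectangle(let width, let height):
            return width * height
        }
    }
}

let area = Shape.circle(radius: 2).area() // ≈ 12.566
```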
Rust doesn't have null, but it does have None. Swift has nil, but it's really just a None in hiding. Instead of an Option<T>, Swift lets you use T?, but the compiler still forces you to check that the value is not nil before you can use it.
You get the same safety with more convenience since you can do this in Swift with an optional type:
```swift
let val: T?
if let val {
    // val is now of type `T`.
}
```
Also, you’re not forced to wrap every value with a Some(val) before returning it. The Swift compiler takes care of that for you. A T will transparently be converted into a T? when needed.
Rust doesn’t have try-catch. Instead it has a Result type which contains the success and error types.
Swift doesn’t have a try-catch either, but it does have do-catch and you have to use try before calling a function that could throw. Again, this is just deception for those developers coming from C-like languages. Swift’s error handling works exactly like Rust’s behind the scenes, but it is hidden in a clever, familiar syntax.
```swift
func usesErrorThrowingFunction() throws {
    let x = try thisFnCanThrow()
}

func handlesErrors() {
    do {
        let x = try thisFnCanThrow()
    } catch let err {
        // handle the `err` here.
    }
}
```
This is very similar to how Rust lets you use ? at the end of statements to automatically forward errors, but you don't have to wrap your success values in Ok().
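For comparison, a minimal Rust sketch of the same flow (the function and error type are made up to mirror the Swift snippet above):

```rust
#[derive(Debug)]
struct MyError;

fn this_fn_can_fail() -> Result<i32, MyError> {
    Ok(42)
}

fn uses_error_returning_function() -> Result<i32, MyError> {
    // `?` forwards any error to the caller automatically...
    let x = this_fn_can_fail()?;
    // ...but the success value still has to be wrapped in Ok().
    Ok(x)
}
```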
There are many common problems that Rust’s compiler will catch at compile time and even suggest solutions for you. The example that portrays this well is self-referencing enums.
Consider an enum that represents a tree. Since it is a recursive type, Rust will force you to use something like Box<> for referencing the type within itself.
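A representative version of such an enum (illustrative, as the original snippet isn't preserved here):

```rust
enum Tree {
    Leaf(i32),
    // Without Box<>, Tree would have infinite size; the compiler points this
    // out and suggests adding the indirection.
    Node(Box<Tree>, Box<Tree>),
}
```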
This makes the problem explicit and forces you to deal with it directly. Swift is a little more automatic.
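The Swift counterpart of the same illustrative tree:

```swift
indirect enum Tree {
    case leaf(Int)
    case node(Tree, Tree)
}
```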
Note that you still have to annotate this enum with the indirect keyword to indicate that it is recursive. But once you've done that, Swift's compiler takes care of the rest. You don't have to think about Box<> or Rc<>. The values just work normally.
Swift was designed to replace Objective-C and needed to be able to interface with existing code. So it has made a lot of pragmatic choices that make it a much less "pure" and "minimalist" language. Swift is a pretty big language compared to Rust and has many more features built in. However, Swift is designed with "progressive disclosure" in mind, which means that just as soon as you think you've learned the language, a little more of the iceberg pops out of the water.
Here are just some of the language features:
Swift is a far easier language to get started and be productive with. The syntax is more familiar and a lot more is done for you automatically. But this really just makes Swift a higher-level language, and it comes with the same tradeoffs.
By default, a Rust program is much faster than a Swift program. This is because Rust is fast by default, and lets you be slow, while Swift is easy by default and lets you be fast.
Based on this, I would say both languages have their uses. Rust is better for systems and embedded programming. It’s better for writing compilers and browser engines (Servo) and it’s better for writing entire operating systems.
Swift is better for writing UI and servers and some parts of compilers and operating systems. Over time I expect to see the overlap get bigger.
There is a perception that Swift is only a good language for Apple platforms. While this was once true, this is no longer the case and Swift is becoming increasingly a good cross-platform language. Hell, Swift even compiles to wasm, and the forks made by the swift-wasm team were merged back into Swift core earlier this year.
Swift on Windows is being used by The Browser Company to share code and bring the Arc browser to Windows. Swift on Linux has long been supported by Apple themselves in order to push "Swift on Server". Apple is directly sponsoring the Swift on Server conference.
This year Embedded Swift was also announced which is already being used on small devices like the Panic Playdate.
Swift website has been highlighting many of these projects:
The Browser Company says that interoperability is Swift's superpower.
And the Swift project has been trying to make working with Swift a great experience outside of Xcode, with projects like an open source LSP and funding for the VS Code extension.
Compile times are (like Rust) quite bad. There is some amount of feature creep, and the language is larger than it should be. Not all syntax feels familiar. The package ecosystem isn't nearly as rich as Rust's.
But "Swift is only for Apple platforms" is an old and tired cliché at this point. Swift is already a cross-platform, ABI-stable language with no GC, automatic reference counting, and the option to opt into ownership for even more performance. Swift packages increasingly work on Linux. Foundation was ported to Swift and open sourced. It's still early days for Swift as a good, more convenient Rust alternative for cross-platform development, but it is here now. It's no longer a future to wait for.
...
On my YouTube channel, I have for some time now referred to the process of writing software with AI assistance (soon to become just "the process of writing software", I believe) by the term "Automatic Programming".
In case you didn’t notice, automatic programming produces vastly different results with the same LLMs depending on the human that is guiding the process with their intuition, design, continuous steering and idea of software.
Please, stop saying "Claude vibe coded this software for me". Vibe coding is the process of generating software using AI without being part of the process at all. You describe what you want in very general terms, and the LLM will produce whatever happens to be the first idea/design/code it would spontaneously generate, given the training, the specific sampling that happened to dominate in that run, and so forth. The vibe coder will, at most, report things not working or not in line with what they expected.
When the process is actual software production where you know what is going on, remember: it is the software *you* are producing. Moreover, remember that the pre-training data, while not the only part from which the LLM learns (RL carries a big weight), was produced by humans, so we are not appropriating something alien. We can claim AI-generated code is "ours"; we have the right to do so. Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, as if we are now linked in a collective mind, in a certain way.
That said, if vibe coding is the process of producing software without much understanding of what is going on (which has its place, and democratizes software production, so it is totally OK with me), automatic programming is the process of producing, with the help of AI assistance, software that attempts to be high quality and to strictly follow the producer's vision. This vision is multi-level: it can go from how, exactly, to do certain things at a higher level, down to stepping in and telling the AI how to write a certain function. And a fundamental part of the process is, of course, *what* to do.
I’m a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.
If you are not completely convinced, think of Redis. There is not much technical novelty in Redis; especially at its start, it was just a sum of basic data structures and networking code that every competent systems programmer could write. So why did it become a very useful piece of software? Because of the ideas and visions it contained.
Programming is now automatic, vision is not (yet).
...
Read the original on antirez.com »
TLDR: I made a CLI tool that can resolve an IP address to a country, US state and even a city. https://github.com/jimaek/geolocation-tool
It works well and confirms ipinfo’s findings.
Recently, I read how ipinfo finally proved what most technical people assumed: VPN providers don’t actually maintain a crazy amount of infrastructure in hundreds of countries. They simply fake the IP geolocation by intentionally providing wrong location data to ARIN, RIPE, and Geo DB providers via geofeeds.
They achieved their results using a novel approach compared to other geo IP providers. Based on their blog and HackerNews comments, they built a large probe network and used it to trace and ping every (or most) IP addresses on the internet.
This latency and hop data, most likely along with advanced algorithms and data cross-reference, provides a reliable way of correctly detecting the physical geolocation of an IP address, without relying on faked data available in public sources.
This is a very interesting approach that makes total sense, and I’m sure their clients appreciate it and heavily rely on it.
While I can’t ping every single IP address on the internet from hundreds of locations just yet, I can do it to a limited subset using Globalping. So I decided to try it out and see if I can replicate their results and build a small tool to allow anyone to do the same.
Globalping is an open-source, community-powered project that allows users to self-host container-based probes. These probes then become part of our public network, which allows anyone to use them to run network testing tools such as ping and traceroute.
At the moment, the network has more than 3000 probes, which in theory should be plenty to geolocate almost any IP address down to a country and even a US state level.
To automate and simplify this process, I made a little CLI tool using the globalping-ts library. My original idea was simple:
* Ping it a few times per continent to select the continent
* Then ping the IP from many different probes on that continent
* Group and sort the results; the country with the lowest latency should be the correct one
* And as a bonus, repeat the same process for USA states if the winning country was the US
Essentially, what I had to do was simply create a few measurements and pass the location I needed using Globalping’s magic field, which would automatically figure out what I was looking for and select a few pseudo-random probes that fit the location and limit.
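As a rough sketch of what that looks like against the public REST endpoint (shown here with plain fetch rather than the globalping-ts wrapper; the field names follow the API as I understand it, so treat it as illustrative):

```typescript
// Illustrative only: one ping measurement spread over a handful of European probes.
const res = await fetch("https://api.globalping.io/v1/measurements", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    type: "ping",                               // the final tool switched to traceroute
    target: "203.0.113.7",                      // example target IP
    locations: [{ magic: "Europe", limit: 5 }], // "magic" resolves the free-form location
  }),
});
const { id } = await res.json(); // then poll GET /v1/measurements/{id} for the results
```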
Now initially, I used ping with 2 packets to run all measurements as quickly as possible, but I quickly realized it wasn’t a good idea as most networks block ICMP traffic. Next, I tried switching to TCP-based ping, which required trying a few popular ports to get it to work. I quickly realized this was too complicated and unreliable and switched to traceroute.
It worked perfectly. Even though traceroute uses ICMP by default, it did not matter to me whether the target IP's network allowed ICMP or not: I simply analyzed the latency of the last available hop. Even if you block ICMP, your upstream most likely allows it, and in most cases it's located in the same country.
Of course, this means the resulting data is not 100% perfect. A better approach would be to analyze each IP using different methods, including TCP and UDP-based traceroute on different ports, and expand to the last few hops instead of just one. Maybe even try to figure out the location of the registered ASNs and use a weights system in combination with public whois info in order to “vote” for the right location based on different inputs. Probably even mark low certainty IPs to be retested with a double amount of probes. (end of rant)
But that’s something for a commercial provider to figure out, which it seems they did.
For continent detection, I decided to use just 5 probes per continent; the results were extremely accurate. Although for IPs just on the "border" of continents it might be ineffective, a higher number of probes would generate better results. For this use case, it was good enough.
My home IP in central Europe was too easy to detect:
Phase 1: Detecting continent…
North America: 137.18 ms
Europe: 32.39 ms
Asia: 174.54 ms
South America: 215.08 ms
Oceania: 244.15 ms
Africa: 156.83 ms
In phase 2, all we need to do is run a single measurement with the winning continent as the location and a higher limit. Initially, I started with 250 probes with great accuracy.
Eventually, I decided to drop down to 50 as the default. Based on my tests, the results continued to look really good, and it would allow the tool to be run even without authentication, as the Globalping API allows 250 tests per hour per IP and 50 probes per measurement.
Although I recommend registering for a free account at https://dash.globalping.io/ and authenticating with a token to get up to 500 tests per hour and run more tests.
Note: If you need more tests than that, you can either host a probe to generate passive credits to be used as tests, or donate via GitHub Sponsors. We will automatically detect it and credit your account.
Phase 2: Detecting country…
Measuring from 50 probes…
[████████████████████████████████████████] 100.0% 50/50 - Best: PL (7.29 ms)
Top 3 Locations:
1. Poland, EU 7.29 ms
2. Germany, EU 13.42 ms
3. Lithuania, EU 17.65 ms
SUMMARY
Location: Poland, EU
Minimum Latency: 7.29 ms
Confidence: Medium
Great, now we have a basic IP-to-country resolver that only takes a few seconds to provide a response, and I didn’t even have to understand or write any complicated math. Although I’m sure someone smarter could use a formula to geolocate IPs with even fewer probes and higher accuracy.
For phase 3, we want to resolve the US to a specific state or territory, just like ipinfo did, and luckily they even provided a few sample IPs and locations to benchmark against during testing.
Again, this was as simple as creating a new measurement with the USA as the location. I used 50 probes as the default limit and tested the NordVPN IP advertised as Bahamas but resolved to Miami by ipinfo.
Phase 3: Detecting US state…
Measuring from 50 probes…
[████████████████████████████████████████] 100.0% 50/50 - Best: FL (0.45 ms)
Top 3 Locations:
1. Florida, USA 0.45 ms
2. South Carolina, USA 12.23 ms
3. Georgia, USA 15.01 ms
SUMMARY
Location: Florida, United States
Minimum Latency: 0.45 ms
Confidence: Very High
The tool agrees, Florida is the correct location. But how accurate can this system be? Can we expand it to show the city too?
Let’s make a new phase, which again, will simply set the resulting country or state as the location and extract the city of the probe with the lowest latency. Here, since there are too many possible cities and towns per state and country, I expect the accuracy to be low and only point to the closest major hub. But in theory, this should be more than enough for use cases like routing or performance debugging.
And here we go, the same result ipinfo got:
Phase 4: Detecting city…
Measuring from 36 probes…
[████████████████████████████████████████] 100.0% 36/36 - Best: Miami (0.00 ms)
Top 3 Locations:
1. Miami, Florida, USA 0.00 ms
2. West Palm Beach, Florida, USA 4.36 ms
3. Tampa, Florida, USA 5.85 ms
SUMMARY
Location: Miami, Florida, United States
Minimum Latency: 0.00 ms
Confidence: Very High
The current results are good but could be better. The main problem is with how the magic field works: when setting, for example, ‘Europe’ as the location, it tries to spread the tests across all European probes but does not guarantee that every single country is going to be included.
This results in inconsistencies where a probe in the same country as the target IP was not selected, and so the tool assumes the IP is located in a different neighbouring country.
To fix this and make the results more consistent, you would need to change the selection logic and manually set every country per continent and US state. By passing the full list of countries/states to the Globalping API, you ensure that at least one probe in that location is going to be selected. Additionally, you fully control the number of probes per location, which is very important to control the accuracy.
For example, North America technically contains 43 countries and territories. This means you can't just set a limit of one probe per country; that is not enough to properly understand the latency to the target IP from the disproportionately larger USA. A better limit would be around 200 probes for the USA, 20 for Canada, and 10 for Mexico.
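In other words, the selection would move from a single magic location to an explicit, weighted list, roughly like this (illustrative; check the Globalping API docs for the exact field names):

```typescript
// One entry per country, with probe counts weighted by each country's share of infrastructure.
const northAmericaLocations = [
  { country: "US", limit: 200 },
  { country: "CA", limit: 20 },
  { country: "MX", limit: 10 },
  // ...and so on for the remaining countries and territories.
];
```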
But the goal of this tool was to use a minimum amount of probes to allow unauthenticated users to test it out. The current approach works great, it is simple to implement and it is very easy to control the accuracy by simply setting a higher limit of probes.
Overall, latency-based geolocation detection seems to be a great way to verify the location of any IP as long as you have enough vantage points. It will most likely fall apart in regions with minimal or no coverage.
The tool itself is open source and you can run it like this:
You can also use the --limit parameter to use more probes per phase. But be careful, as it applies the set value to all phases, and this will very quickly eat through your limit. Check the full docs on GitHub.
Pull requests with improvements are welcome!
Feel free to email me if you need some free credits to play around with: d@globalping.io
And of course consider hosting a probe, it’s as simple as running a container https://github.com/jsdelivr/globalping-probe
...
Read the original on blog.globalping.io »
A leader in the global fight against smallpox and a champion of vaccine science, William Foege died last Saturday.

The late physicians and health administrators William Foege (middle), J. Donald Millar (left) and J. Michael Lane (right), all of whom served in the Global Smallpox Eradication Program, in 1980.

William Foege, a leader in the global fight to eliminate smallpox, has died. Foege passed away on Saturday at the age of 89, according to the Task Force for Global Health, a public health organization he co-founded.

Foege headed the U.S. Centers for Disease Control and Prevention's Smallpox Eradication Program in the 1970s. Before the disease was officially eradicated in 1980, it killed around one in three people who were infected. According to the CDC, there have been no new smallpox cases since 1977.

"If you look at the simple metric of who has saved the most lives, he is right up there with the pantheon," said former CDC director Tom Frieden to the Associated Press. "Smallpox eradication has prevented hundreds of millions of deaths."

Foege went on to lead the CDC and served as a senior medical adviser and senior fellow at the Bill & Melinda Gates Foundation. In 2012 then president Barack Obama awarded him the Presidential Medal of Freedom.

Foege was a vocal proponent of vaccines for public health, writing with epidemiologist Larry Brilliant in Scientific American in 2013 that the effort to eliminate polio "has never been closer" to success. "By working together," they wrote, "we will soon relegate polio—alongside smallpox—to the history books." Polio remains a "candidate for eradication," according to the World Health Assembly.

And in 2025 Foege, alongside several other former CDC directors, spoke out against the policies of the current secretary of health and human services Robert F. Kennedy, Jr. In a New York Times op-ed, they wrote that the top health official's tenure was "unlike anything we had ever seen at the agency."

In a statement, Task Force for Global Health CEO Patrick O'Carroll remembered Foege as an "inspirational" figure, both for early-career public health workers and veterans of the field. "Whenever he spoke, his vision and compassion would reawaken the optimism that prompted us to choose this field, and re-energize our efforts to make this world a better place," O'Carroll said.
...
Read the original on www.scientificamerican.com »
* Watch this repo if you need to be notified when there's an update
This repository is my notes/blog for the cpython source code
Trying to illustrate every detail of the cpython implementation
# based on version 3.8.0a0
cd cpython
git reset --hard ab54b9a130c88f708077c2ef6c4963b632c132b3
The following contents are suitable for those who have python programming experience and are interested in the internals of the python interpreter; those who need beginner or advanced material should refer to awesome-python-books
I will only recommend what I’ve read
All kinds of contributions are welcome
* submit a pull request if you want to share any knowledge you know
...
Read the original on github.com »
US authorities have reportedly investigated claims that Meta can read users’ encrypted chats on the WhatsApp messaging platform, which it owns.
The reports follow a lawsuit filed last week, which claimed Meta “can access virtually all of WhatsApp users’ purportedly ‘private’ communications”.
Meta has denied the allegation, reported by Bloomberg, calling the lawsuit’s claim “categorically false and absurd”. It suggested the claim was a tactic to support the NSO Group, an Israeli firm that develops spyware used against activists and journalists, and which recently lost a lawsuit brought by WhatsApp.
The firm that filed last week’s lawsuit against Meta, Quinn Emanuel Urquhart & Sullivan, attributes the allegation to unnamed “courageous” whistleblowers from Australia, Brazil, India, Mexico and South Africa.
Quinn Emanuel is, in a separate case, helping to represent the NSO Group in its appeal against a judgment from a US federal court last year, which ordered it to pay $167m to WhatsApp for violating its terms of service in its deployment of Pegasus spyware against more than 1,400 users.
“We’re pursuing sanctions against Quinn Emanuel for filing a meritless lawsuit that was designed purely to grab headlines,” said Carl Woog, a Meta spokesperson, in a statement. “This is the same firm that is trying to help NSO overturn an injunction that barred their operations for targeting journalists and government officials with spyware.”
Adam Wolfson, a partner at Quinn Emanuel said: “Our colleagues’ defence of NSO on appeal has nothing to do with the facts disclosed to us and which form the basis of the lawsuit we brought for worldwide WhatsApp users.
“We look forward to moving forward with those claims and note WhatsApp’s denials have all been carefully worded in a way that stops short of denying the central allegation in the complaint — that Meta has the ability to read WhatsApp messages, regardless of its claims about end-to-end encryption.”
Steven Murdoch, professor of security engineering at UCL, said the lawsuit was “a bit strange”. “It seems to be going mostly on whistleblowers, and we don’t know much about them or their credibility,” he said. “I would be very surprised if what they are claiming is actually true.”
If WhatsApp were, indeed, reading users’ messages, this was likely to have been discovered by staff and would end the business, he said. “It’s very hard to keep secrets inside a company. If there was something as scandalous as this going on, I think it’s very likely that it would have leaked out from someone within WhatsApp.”
The Bloomberg article cites reports and interviews from officials within the US Department of Commerce in claiming that the US has investigated whether Meta could read WhatsApp messages. However, a spokesperson for the department called these assertions “unsubstantiated”.
WhatsApp bills itself as an end-to-end encrypted platform, which means that messages can be read only by their sender and recipient, and are not decoded by a server in the middle.
This contrasts with some other messaging apps, such as Telegram, which encrypt messages between a sender and its own servers, preventing third parties from reading the messages, but allowing them — in theory — to be decoded and read by Telegram itself.
A senior executive in the technology sector told the Guardian that WhatsApp’s vaunted privacy “leaves much to be desired”, given the platform’s willingness to collect metadata on its users, such as their profile information, their contact lists, and who they speak to and when.
However, the “idea that WhatsApp can selectively and retroactively access the content of [end-to-end encrypted] individual chats is a mathematical impossibility”, he said.
Woog, of Meta, said: “We’re pursuing sanctions against Quinn Emanuel for filing a meritless lawsuit that was designed purely to grab headlines. WhatsApp’s encryption remains secure and we’ll continue to stand up against those trying to deny people’s right to private communication.”
...
Read the original on www.theguardian.com »
Feel free to skip this section if you don’t really care about backstories. I just figured it makes sense to recap how and why one might start having an interest in declarative distros before tackling the main topic.
I’ve been a Linux-only user for about ten years now and, like many others, I too embarked on the arduous journey of distro-hopping. I started with Mint and when that felt too slow, I switched to Ubuntu. When Ubuntu felt too handholdy, I switched to Arch, which proved to be my main driver for well over five or so years. And when I couldn’t resist the Siren’s call, I moved on to Gentoo, thinking surely “harder is better”. Which resulted in severe burnout in a few months, so I capitulated and switched to Fedora, which was very stable and honestly an all around excellent system. But once more, my interest was piqued, and (before today’s adventure) I finally switched to NixOS.
I’ve always had a passing interest towards Nix ever since I’ve first heard about it, but until fairly recently, I always dismissed it as a tool for DevOps guys. The syntax was weird, the need for reproducible environments seemingly irrelevant, and stuff like the oft-recommended Nix Pills seemed anything but newbie-friendly.
So then why would someone like me, who’s so adamant about not needing Nix eventually choose to go all-in? I guess it was at first less about Nix being better and just the rest being worse.
Of the two big reasons for the switch, one was that I realized that having per-directory environments for your projects is actually a very handy thing to do when you like to toy around with many technologies. I used to generate my other blog using Jekyll and, no matter which distro I used, it was always a pain in the neck to have a good Ruby environment set up. bundler install didn’t really want to work without privileges and I wasn’t really a fan of unleashing sudo on it, but usually that was the only way I could get things to work.
With Nix, however, it was a matter of just describing a few packages in a shell and boom, Ruby in one folder, no Ruby (and thus no mess) everywhere else. I was hooked! I started adding shell.nix files to all my little projects, hell, I started planning projects by first adding a shell.nix with all the dependencies I would reasonably need.
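For the Jekyll case, that shell was roughly this small (a minimal sketch; the exact package set I used may have differed):

```nix
# shell.nix -- Ruby and Bundler exist inside this project's shell, and nowhere else.
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = [ pkgs.ruby pkgs.bundler ];
}
```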
The other reason, which ultimately cemented that I need to commit, was that I was getting tired of my installed packages slowly drifting out of control. Sure, every package manager has some method of listing what’s installed, but these are usually cumbersome and completely ephemeral (in the sense that any listing becomes invalid the moment you change anything).
With NixOS, the equation is flipped on its head: No longer did I query the system to tell me what’s installed and what’s not, it was now the system that worked based on files that I edit. The difference sounds small on paper, but for me it was an extremely liberating feeling to know that I could edit my system configuration in a versionable, explicit, and centralized way.
But NixOS isn’t the only declarative distro out there. In fact GNU forked Nix fairly early and made their own spin called Guix, whose big innovation is that, instead of using the unwieldy Nix-language, it uses Scheme. Specifically Guile Scheme, GNU’s sanctioned configuration language. I’ve been following Guix for a bit, but it never felt quite ready to me with stuff like KDE being only barely supported and a lot of hardware not working out of the box.
However, now that (after three years) Guix announced its 1.5.0 release with a lot of stuff stabilized and KDE finally a first-party citizen, I figured now is the best time to give it a fresh shot. This post captures my experiences from installation to the first 3-4 days.
Plug your USB in, dd the file onto the drive, reboot, nothing unusual. If you’ve ever installed a Linux system, it’s more of the same.
After selecting the pendrive in my BIOS settings, the monitor began to glow in a deep, radiant blue as the Guix System logo appeared on my screen… only to suddenly switch to a menacing red: My CPU’s integrated GPU is not supported by free firmware. A helpful popup gave me a gentle nudge about picking free hardware next time (buddy, have you seen the PC part prices these days?) and off I went into the installer proper.
The installer itself is refreshingly barebones and I mean this in a positive way. It asks all the necessary questions and provides a nice basic configuration file, all done in a retro Ncurses-based TUI. I was really happy to see that, unlike my last attempt at using Guix System in the early 2020-s, KDE Plasma is now a first-party choice during installation. I never really vibed too much with GNOME and the other options didn’t appeal either, so the choice was obvious.
Now, I’m not sure if I just picked the worst possible time or if the Guix servers were facing unusual load or whatever may have happened, but after such a breeze of a setup, the moment I pressed install, my PC became unusable for the next 2.5 hours. Which is unacceptable for an installation process these days in my opinion. I am lucky enough to live in a household with fiber-optic internet, that merely shrugs at bandwidth of up to a gigabyte per second and yet nearly all packages downloaded with a whopping 50 kilobytes per second, meaning even small-ish 5-10 megabyte packages took long minutes to download.
A reboot later my issues only got worse.
I was assuming I'd get SDDM after having chosen KDE Plasma, but it was GDM that loaded in (which, as a later, closer read of the manual made me realize, is the expected outcome for a default config). I entered my name and password, and I was greeted with the familiar Plasma 6 spinner. The first hint that something might be off was that it loaded a bit longer than usual, but I was not going to get mad at waiting 10 seconds instead of 3. After all, I did just wait magnitudes longer to get here.
With practically nothing installed beyond the very basics, I clicked on Konsole, hoping to start prodding around my config and add some of my day to day apps. To my horror, it opened in the top left corner, without a titlebar and without any borders. What’s more, no matter what I did, I couldn’t move it. It also didn’t show up on the menu bar, despite the application launcher still being completely usable. At this point I was fairly exhausted by these antics, but I figured,
Well, it’s a brand new release, perhaps this just snuck in. Let’s give updating a shot and see if that helps.
So I issued guix pull… The download whizzed by with speed quite unexpected after what I experienced with the installer… Only to crash into the brick wall that's indexing. Okay, whatever, another 10-12 minutes down the drain; at least now I had the newest version.
Except I didn’t. Because, unlike Nix, the guix executable is not an omnipresent, unique thing that anyone and everyone uses on your PC. Not only does every user have their own instance, if you don’t issue a certain set of commands, you won’t start using the new version, despite updating it.
To Guix's credit, the CLI does scream at you to update your environment or else you'll keep using the old version, but I still find this system very disorientating compared to Nix. I'm certain experienced Guixheads are long past being tripped up by this sort of stuff and might even struggle to remember that there was a time they had to do these special steps too, but as a new user it felt a bit rough, especially considering this is Guix System, i.e. the system whose whole purpose is to integrate Guix as much as it can.
Back to our issue at hand. I issued sudo -s and guix pull-ed again. Once more 10-12 minutes passed indexing. But at least I could finally call guix system reconfigure /etc/config.scm. Interestingly things are much faster this time around, I saw speeds up to 30-50 Mbps. Before long the system was updated to the newest commit and I rebooted with high hopes.
High hopes, that were immediately dashed when Plasma loaded in the same messed up way. At this point I started to suspect this might be an issue with the GPU driver, so I enabled the LXQT desktop environment and rebooted once more. Thankfully that one worked like a charm and I was able to boot up both Emacs (editing Scheme with GNU Nano is a pain I do not wish on anyone) and LibreWolf (Firefox’s de-Mozilla-d variant).
Not having found anything too useful in the docs, I decided to make my problem someone else’s so I fired up ERC and connected to Libera.chat’s #guix channel. After around half an hour of wait, a user by the name of Rutherther stepped up and offered me some help. We were able to figure it out that Nouveau wasn’t able to drive my GPU (an RTX 5070), so his recommendation was that I should try booting with nomodeset. I did, but it sadly didn’t help much either.
At this point I was out of ideas. Ideas of solving this using pure-Guix System, that is. There was still one option I wanted to avoid as long as I could, but alas, it seemed like the only option, that still had a realistic chance of working.
Enter Nonguix, the Mr. Hyde to Guix's Dr. Jekyll, the shady guy who offers you a hit and first time's for free, the… Erm, in a nutshell, it's the repository of non-free application and driver packages for Guix System, basically. Interestingly enough, by Guix's own findings about 64% of users utilize the Nonguix channel, which is perhaps not "literally everyone", but it does paint a picture that there is still stuff out there that you simply cannot replace with FOSS software yet.
Enabling the repo wasn't exactly difficult. You just paste the short channel excerpt from the Nonguix README into your ~/.config/guix/channels.scm and /etc/guix/channels.scm files, guix pull, let it index to its heart's content again, and then you have access to all that is nasty (yet occasionally useful) in the world.
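That excerpt declares the extra channel on top of the default ones; it looks roughly like this (the channel introduction commit and signing fingerprint must be copied verbatim from the README, so placeholders stand in for them here):

```scheme
(cons* (channel
        (name 'nonguix)
        (url "https://gitlab.com/nonguix/nonguix")
        ;; Signature verification: take the introduction commit and the
        ;; OpenPGP fingerprint verbatim from the Nonguix README.
        (introduction
         (make-channel-introduction
          "<commit hash from the README>"
          (openpgp-fingerprint "<fingerprint from the README>"))))
       %default-channels)
```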
I figured perhaps if Linux-libre and its free firmware couldn’t deal with my GPU, then surely Linux proper with its binary blobs could. Hell, for good measure I threw in the NVIDIA transform, which is supposed to automagically translate all dependencies to use the proprietary drivers.
Turns out my eagerness was a mistake. Not only did the process take yet another half an hour (if not more, I stopped counting), upon reboot all I was met with was a kernel panic about the driver not being able to cope with the GPU it found and a massive spew of FSCK logs.
With no better ideas in mind, I took out my pendrive again and burned Nonguix’s own pre-built ISO on it using my partner’s PC. While it ultimately did get me a working system, this version has three unfortunate hindrances:
It was built in 2022, far before Guix's migration to Codeberg, meaning it still attempts to pull content from the unfathomably slow GNU Savannah mirror. I had to manually override my channels.scm to point at the Codeberg repo instead, but with no easy means of finding its "channel introduction", I had to pass in --disable-authentication to Guix when updating my system. A bit scary, but I trust the Codeberg repo.
Because of its age, I got a lot of somewhat intimidating errors about hardware not being recognized and other stuff I couldn’t even decipher, but ultimately the system booted to the installer without issue.
For some reason while the installer itself does include Nonguix stuff, it actually does not include the repo in the resulting channels files, nor the substitution server for the project. The README has a warning about this, but if you happen to miss it, you could accidentally install a non-Nonguix Guix System (say that three times fast).
None of these were particularly hard to fix, however, and soon enough I was back where I started. That is to say, in a nomodeset X11 session, except this time running i3, as LXQT wasn’t an available option on an installer this old. There was certainly a bit of a hacker-ish vibe to messing with code files in an environment like that, but I was honestly much more looking forward to finally having a usable desktop.
Having learned from my hastiness, this time I was smarter. I only enabled the full kernel and firmware blobs, without going anywhere near the NVIDIA transform. I issued another guix system reconfigure and, after having time for another tea session, my update was finally finished.
Obviously there is little point in throwing Guix System on my PC and declaring success. I wanted to be able to at least reproduce the kind of workflow I’m used to using NixOS. For that, I need the following:
* A browser: preferably Firefox, as I’m not a huge fan of Chrome / Chromium,
* Dev environments: for Rust, Zig, Scheme, and TypeScript (with the option for more, if possible),
* Emacs: I do almost all my text editing in it these days, falling back to Neovim for quick tasks,
* Steam: for the very rare occasions I want to game,
* NVIDIA drivers: I prefer to offload day-to-day usage to my CPU’s integrated GPU, as it cuts my energy usage in half.
Of these it was obvious that two would be relatively hard and one “outright impossible”. The two being Steam and the drivers (as both are non-free and thus not in Guix’s default repos) and the “impossible” one being Discord (which not even the non-free repo has packaged). But I was ready to compromise a little bit since I am requesting stuff that’s explicitly against Guix’s goals.
Figure 6: My desktop running Wezterm packaged by me and Emacs.
While there has been occasional bumps and hitches along the ride, I must say I’m very impressed with Guix System so far. Let’s go through this list in order:
* Browser: So far I’m really enjoying LibreWolf. It feels a lot snappier than Firefox and I’m really baffled how much speed I was apparently missing out on.
* E-mails: I installed Icedove, which is basically just Thunderbird without Mozilla branding. It works as expected.
* Office suite: LibreOffice is available as expected. Not much to say about it. I guess it’s interesting that Guix isn’t following the usual -stale / -fresh packaging schema, but I don’t really mind not having cutting edge versions of an office suite :)
* Dev environments: I've only briefly toyed with development environments so far, but to me it seems like for simple use cases it might be even easier to use than shell.nix: you don't need any sort of ceremony, just a manifest.scm file with a (specifications->manifest ...) form inside, and you have a dev env ready to go (a sketch follows after this list).
* Emacs: Installed just fine. I had to install emacs-vterm to make Vterm work, but all that took was the very simple process of adding the library to my home configuration and then referencing it in my Emacs config as per this Reddit post.
* Discord: I decided to just use Discord’s browser version, which works just as fine (if not better). It’s trading a tiny bit of convenience in return for not having to figure out how to manually add a package for it from some random third-party source. From what I’ve read elsewhere Flatpak is also an option, but I prefer having just one package manager at a time.
* Steam: Installed shockingly easily. I have to really give props to the Nonguix team. I tested Portal 2 with the Nouveau driver; it is a little disheartening to see a 15-year-old game lag, but I understand people's hands are tied when it comes to the free drivers. After I managed to install the proprietary drivers, I was able to play even Portal RTX, which is something I never managed to get to work using NixOS.
* NVIDIA drivers: This time I actually read the docs properly and it didn’t take long for me to realize the initial problem that caused my previous install to be unbootable was of course found between the chair and keyboard. This time, after making sure I enabled the open drivers and kernel mode-setting, I crossed my fingers, issued a reconfigure and it works beautifully!
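To make the dev-environment point above concrete, such a manifest can be as small as this (a sketch; the package names are illustrative, and guix search will give you the exact ones):

```scheme
;; manifest.scm -- a per-project development environment.
(specifications->manifest
 '("rust"
   "rust:cargo"   ; cargo ships as a separate output of the rust package
   "zig"
   "node"))
```

Running guix shell -m manifest.scm then drops you into a shell with just those packages available.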
In a nutshell I’m very positively surprised by Guix System. After struggling so much with it years ago, this time everything just clicked after a much shorter battle. So much so that I’m happy to make it my daily driver for the foreseeable future. Beyond the slightly slower execution speed, I’m getting a comparable experience to NixOS, with all the usual pros a declarative environment brings and without having to put up with Nixlang.
My only recurring issues so far are the occasional slow download speeds and that I have to start my kernel in nomodeset because otherwise the graphical environment crashes without me being able to switch to a TTY. It’s a bummer, but honestly, I’m not too bothered by it so far. I’m trusting a driver update will fix it soon enough and, if not, it’s not exactly difficult to throw in a kernel parameter into your config.
I’m hoping to do a followup post about packaging in Guix, because I’ve been dipping my toes into it by trying to package Wezterm and the journey there was similarly arduous as installing the system itself.
Till then, thank you for reading and see you next time!
The stuff you see below is all I managed to write down mid-process. Some of it I threw into the file from Nano, some from half-broken X11 sessions. Because of this, it's not exactly well-edited, but I hope it might provide a glimpse into my mind at the time.
...
Read the original on nemin.hu »