10 interesting stories served every morning and every evening.
A real-world production migration from DigitalOcean to Hetzner dedicated, handling 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and live mobile app traffic — with zero downtime.
Running a software company in Turkey has become increasingly expensive over the last few years. Skyrocketing inflation and a dramatically weakening Turkish Lira against the US dollar have turned dollar-denominated infrastructure costs into a serious burden. A bill that felt manageable two years ago now hits very differently when the exchange rate has multiplied several times over.
Every month, we were paying $1,432 to DigitalOcean for a droplet with 192GB RAM, 32 vCPUs, 600GB SSD, two block volumes (1TB each), and backups enabled. The server was fine — but the price-to-performance ratio had stopped making sense.
Then we discovered the Hetzner AX162-R: a dedicated server for $233 per month.
That’s $14,388 saved per year — for a server that’s objectively more powerful in every dimension. The decision was easy.
I’ve been a DigitalOcean customer for nearly 8 years. They have a great product and I have no complaints about reliability or developer experience. But looking at those numbers now, I cannot help feeling a bit sad about all the extra money I left on the table over the years. If you are running steady-state workloads and not actively using DO’s ecosystem features, do yourself a favor and check dedicated server pricing before your next renewal.
The server also hosted several live mobile apps serving hundreds of thousands of users.
Old server: CentOS 7 — long past its end-of-life, but still running in production. New server: AlmaLinux 9.7 — a RHEL 9 compatible distribution and the natural successor to CentOS. This migration was also an opportunity to finally escape an OS that hadn’t received security updates in years.
The naive approach — change DNS, restart everything, hope for the best — wasn’t acceptable. Instead, we designed a proper migration path with six phases:
Phase 1 — Full stack installation on the new server
Nginx (compiled from source with identical flags), PHP (via Remi repo, with the same .ini config files from the old server), MySQL 8.0, Neo4j, GitLab EE, Node.js, Supervisor, and Gearman. Every service had to be configured to match the old server’s behavior before we touched a single DNS record.
SSL certificates were handled by rsyncing the entire /etc/letsencrypt/ directory from the old server to the new one. After the migration was complete and all traffic was flowing through the new server, we force-renewed all certificates in one shot:
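A minimal sketch of those two steps, assuming certbot manages the certificates (the server IP is a placeholder; the article’s exact commands aren’t reproduced in this extract):

# Clone the full Let's Encrypt state (certs, account keys, renewal configs)
rsync -avz root@OLD_SERVER_IP:/etc/letsencrypt/ /etc/letsencrypt/
# After cutover, reissue everything so renewals are anchored to the new server
certbot renew --force-renewal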
Phase 2 — Web files cloned with rsync
The entire /var/www/html directory (~65 GB, 1.5 million files) was cloned to the new server using rsync over SSH with the --checksum flag for integrity verification. We ran a final incremental sync right before cutover to catch any files changed after the initial clone.
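Reconstructed from that description (the exact flags weren’t preserved in this extract), the clone and the pre-cutover pass look roughly like:

# Initial clone; --checksum verifies content rather than trusting timestamps
rsync -avz --checksum root@OLD_SERVER_IP:/var/www/html/ /var/www/html/
# Re-run right before cutover; only files changed since the first pass transfer
rsync -avz --checksum root@OLD_SERVER_IP:/var/www/html/ /var/www/html/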
Phase 3 — MySQL master to slave replication
Rather than taking the database offline for a dump-and-restore, we set up live replication. The old server became master, the new server a read-only slave. We used mydumper for the initial bulk load, then started replication from the exact binlog position recorded in the dump metadata. This kept both databases in real-time sync until the moment of cutover.
Phase 4 — DNS TTL reduction
We scripted the DigitalOcean DNS API to lower all A and AAAA record TTLs from 3600 to 300 seconds — without touching MX or TXT records (changing mail record TTLs can cause deliverability issues). After waiting one hour for old TTLs to expire globally, we were ready to cut over in under 5 minutes.
Phase 5 — Old server nginx converted to reverse proxy
We wrote a Python script that parsed every server {} block across all 34 Nginx site configs, backed up the originals, and replaced them with proxy configurations pointing to the new server. This meant that during DNS propagation, any request still hitting the old IP was silently forwarded. No user would see a disruption.
Phase 6 — DNS cutover and decommission
A single Python script hit the DigitalOcean API and flipped all A records to the new server IP in seconds. The old server remained as a cold standby for one week, then was shut down.
The key insight: at no point did we have a window where the service was unavailable. Traffic was always being served — either directly or through the proxy.
This was the most complex part of the entire operation.
We used mydumper instead of the standard mysqldump — and it made an enormous difference. By leveraging the new server’s 48 CPU cores for parallel export and import, what would have taken days with a traditional single-threaded mysqldump was completed in hours. If you’re migrating a large MySQL database and you’re not using mydumper/myloader, you’re doing it the hard way.
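A representative invocation; the thread count and paths here are illustrative, not the article’s exact flags:

# Parallel, compressed, consistent export of all databases (on the old server)
mydumper --user root --ask-password --threads 32 --compress \
  --trx-consistency-only --outputdir /backup/dump
# The matching parallel import on the new server
myloader --user root --ask-password --threads 32 \
  --overwrite-tables --directory /backup/dump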
The main dump’s metadata file recorded the binlog position at the time of the snapshot:
File: mysql-bin.000004
Position: 21834307
This would be our replication starting point.
Once the dump was complete, we transferred it to the new server using rsync over SSH. With 248 GB of compressed chunks, this was significantly faster than any other transfer method:
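Something along these lines (host and paths illustrative):

# Ship the compressed dump chunks to the new server over SSH
rsync -avz --progress /backup/dump/ root@NEW_SERVER_IP:/backup/dump/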
The --compress flag in mydumper paid off here — compressed chunks transferred much faster over the wire.
Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7 — an outdated version that had been running in production for years. Before the migration, we ran mysqlcheck --check-upgrade to verify that our data was compatible with MySQL 8.0. It came back clean, so we installed the latest MySQL 8.0 Community on the new server. The performance improvement across all our projects was immediately noticeable — query execution times dropped significantly thanks to MySQL 8.0’s improved optimizer and InnoDB enhancements.
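That pre-flight check is a one-liner:

# Flag any table that is incompatible with the MySQL 8.0 upgrade
mysqlcheck -u root -p --all-databases --check-upgrade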
That said, the version jump did introduce one tricky problem.
After import, the mysql.user table had the wrong column structure — 45 columns instead of the expected 51. This caused mysql.infoschema to be missing, breaking user authentication.
The fix was to rerun MySQL’s upgrade routine so it could rebuild the system tables. But this failed the first time with:
ERROR: 'sys.innodb_buffer_stats_by_schema' is not VIEW
The sys schema had been imported as regular tables instead of views. Solution:
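One hedged reconstruction of that fix, assuming MySQL 8.0.16+ where mysqld performs upgrades itself (the article’s exact commands aren’t shown in this extract):

# Drop the half-imported sys schema entirely
mysql -u root -p -e "DROP DATABASE sys;"
# Restart with a forced upgrade so the server rebuilds sys as proper views
# (temporarily add "upgrade=FORCE" under [mysqld] in /etc/my.cnf)
systemctl restart mysqld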
With both dumps imported, we configured the new server as a replica of the old one:
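Using the binlog coordinates from the mydumper metadata shown earlier, the setup looks roughly like this (replication user and credentials are placeholders):

-- On the new server: point replication at the old master
CHANGE MASTER TO
  MASTER_HOST = 'OLD_SERVER_IP',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '********',
  MASTER_LOG_FILE = 'mysql-bin.000004',
  MASTER_LOG_POS = 21834307;
SET GLOBAL read_only = 1;  -- the replica must reject application writes
START SLAVE;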
Almost immediately, replication stopped with error 1062 (Duplicate Key). This happened because our dump was taken in two passes — during the gap between them, rows were written to certain tables, and now both the imported dump and the binlog replay were trying to insert the same rows.
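The workaround is a single global setting on the replica — standard MySQL, though worth reverting to STRICT once the replica has caught up:

STOP SLAVE;
SET GLOBAL slave_exec_mode = 'IDEMPOTENT';
START SLAVE;
-- Then watch Seconds_Behind_Master in SHOW SLAVE STATUS fall to 0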
IDEMPOTENT mode silently skips duplicate key and missing row errors. All critical databases synced without a single error. Within a few minutes, Seconds_Behind_Master dropped to 0.
Before touching a single DNS record, we needed to verify that all services were working correctly on the new server. The trick: we temporarily edited the /etc/hosts file on our local machine to point our domain names to the new server’s IP.
# /etc/hosts (local machine)
NEW_SERVER_IP yourdomain1.com
NEW_SERVER_IP yourdomain2.com
# … and so on for all your domains
With this in place, our browsers and Postman would hit the new server while the rest of the world was still going to the old one. We ran through our API endpoints, checked admin panels, and verified that every service was responding correctly. Only after this confirmation did we proceed with the cutover.
Once master-slave replication was fully synchronized, we noticed that INSERT statements were succeeding on the new server when they shouldn’t have been — read_only = 1 was set, but writes were going through.
The reason: all PHP application users had been granted SUPER privilege. In MySQL, SUPER bypasses read_only.
We revoked it from all 24 application users:
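The revocation itself is plain SQL; the username here is illustrative. (MySQL also offers SET GLOBAL super_read_only = 1, which blocks writes even from SUPER users.)

-- Find every account holding SUPER, then strip it
SELECT user, host FROM mysql.user WHERE Super_priv = 'Y';
REVOKE SUPER ON *.* FROM 'app_user'@'localhost';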
After this, read_only = 1 correctly blocked all writes from application users while allowing replication to continue.
All domains were managed through DigitalOcean DNS (with nameservers pointed from GoDaddy). We scripted the TTL reduction against the DigitalOcean API, only touching A and AAAA records — not MX or TXT records, since changing mail record TTLs can cause deliverability issues with Google Workspace.
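A condensed sketch of that script against DigitalOcean’s v2 API. The token and domain list are placeholders, pagination is elided, and the published scripts linked at the end are the production versions:

import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_DO_TOKEN"}
DRY_RUN = True

for domain in ["yourdomain1.com", "yourdomain2.com"]:
    url = f"{API}/domains/{domain}/records"
    for rec in requests.get(url, headers=HEADERS).json()["domain_records"]:
        # Only A/AAAA: leave MX and TXT alone to protect mail deliverability
        if rec["type"] in ("A", "AAAA") and rec["ttl"] != 300:
            print(f'{domain} {rec["name"]} {rec["type"]}: TTL {rec["ttl"]} -> 300')
            if not DRY_RUN:
                requests.put(f'{url}/{rec["id"]}', headers=HEADERS, json={"ttl": 300})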
After waiting one hour for old TTLs to expire, we were ready.
Rather than editing 34 config files by hand, we wrote a Python script that parsed every server {} block in every config file, identified the main content blocks, replaced them with proxy configs, and backed up originals as .backup files.
The key: proxy_ssl_verify off — the new server’s SSL cert is valid for the domain, not for the IP address. Disabling verification here is fine because we control both ends.
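Each generated site config reduced to roughly this shape (domain and IP are placeholders):

server {
    listen 443 ssl;
    server_name yourdomain1.com;
    ssl_certificate     /etc/letsencrypt/live/yourdomain1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain1.com/privkey.pem;

    location / {
        proxy_pass https://NEW_SERVER_IP;
        proxy_set_header Host $host;               # new server selects the right vhost
        proxy_set_header X-Real-IP $remote_addr;
        proxy_ssl_name $host;                      # send SNI for the domain, not the IP
        proxy_ssl_server_name on;
        proxy_ssl_verify off;                      # cert won't match the bare IP
    }
}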
With replication at Seconds_Behind_Master: 0 and the reverse proxy ready, we executed the cutover in order:
1. New server: STOP SLAVE;
2. New server: SET GLOBAL read_only = 0;
3. New server: RESET SLAVE ALL;
4. New server: supervisorctl start all
5. Old server: nginx -t && systemctl reload nginx (proxy goes live)
6. Old server: supervisorctl stop all
7. Mac: python3 do_cutover_to_new_ip.py (DNS: all A records to new server IP)
8. Wait: ~5 minutes for propagation
9. Old server: comment out all crontab entries
The DNS cutover script hit the DigitalOcean API and changed every A record to the new server IP — in about 10 seconds.
After migration, we discovered many GitLab project webhooks were still pointing to the old server IP. We wrote a script to scan all projects via the GitLab API and update them in bulk.
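A sketch of that bulk update against the GitLab REST API. The URL, token, and IPs are placeholders; the published final_gitlab_webhook_update.py listed below is the real version:

import requests

GITLAB = "https://gitlab.example.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "YOUR_TOKEN"}
OLD_IP, NEW_IP = "OLD_SERVER_IP", "NEW_SERVER_IP"
DRY_RUN = True

page = 1
while True:
    projects = requests.get(f"{GITLAB}/projects", headers=HEADERS,
                            params={"per_page": 100, "page": page}).json()
    if not projects:
        break
    for project in projects:
        hooks_url = f"{GITLAB}/projects/{project['id']}/hooks"
        for hook in requests.get(hooks_url, headers=HEADERS).json():
            if OLD_IP in hook["url"]:
                new_url = hook["url"].replace(OLD_IP, NEW_IP)
                print(f"{project['path_with_namespace']}: {hook['url']} -> {new_url}")
                if not DRY_RUN:
                    # note: a PUT should also resend the hook's event flags,
                    # or unspecified triggers may reset to defaults
                    requests.put(f"{hooks_url}/{hook['id']}", headers=HEADERS,
                                 json={"url": new_url})
    page += 1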
We went from $1,432/month down to $233/month — saving $14,388 per year. And we ended up with a more powerful machine:
The entire migration took roughly 24 hours. No users were affected.
MySQL replication is your best friend for zero-downtime migrations. Set it up early, let it catch up, then cut over with confidence.
Check your MySQL user privileges before migration. SUPER privilege bypasses read_only — if your app users have it, your slave environment isn’t actually read-only.
Script everything. DNS updates, nginx config rewrites, webhook updates — doing these by hand across 34+ sites would have taken hours and introduced errors.
mydumper + myloader dramatically outperforms mysqldump for large datasets. Parallel dump/restore with 32 threads cut what would have been days of work down to hours.
Cloud providers are expensive for steady-state workloads. If you’re not using autoscaling or ephemeral infrastructure, a dedicated server often delivers better performance at a fraction of the cost.
All Python scripts used in this migration are open-sourced and available on GitHub:
* do_list_domains_ttl.py — List all DigitalOcean domains with their A records, IPs, and TTLs
* do_to_hetzner_bulk_dns_records_import.py — Migrate all DNS zones from DigitalOcean to Hetzner DNS
* do_cutover_to_new_ip.py — Flip all A records from old server IP to new server IP
* mysql_compare.py — Compare row counts across all tables on two MySQL servers
* final_gitlab_webhook_update.py — Update all GitLab project webhooks to the new server IP
All scripts support a DRY_RUN = True mode so you can safely preview changes before applying them.
...
Read the original on isayeter.com »
Anonymous request-token comparisons from the community, showing how Opus 4.6 and Opus 4.7 differ on real inputs
...
Read the original on tokens.billchambers.me »
Japan is the land of the train. 28 percent of passenger kilometers in Japan are travelled by rail, more than anywhere else in the developed world. France achieves 10 percent, Germany 6.4 percent, and the United States just 0.25 percent. Travel in Japan is over a hundred times more likely to be by rail than travel in the United States.
Japan’s vast railway network is divided between dozens of companies, nearly all of them private. The largest of these, JR East, carries more passengers than the entire railway system of every country other than China and India. Each year, JR East carries four times as many passengers as the whole British railway system, even though it has fewer kilometers of track, serves about ten million fewer people, and competes with eight other companies. Japan’s railway system turns a large operating profit and receives far less public subsidy than European and American railways.
In most developed countries, the railways have struggled since the rise of the automobile in the 1950s. From this point on, North America saw the near-total replacement of passenger trains with cars and planes. In Europe, it meant vast government financial support to keep the lines open.
Japan’s different trajectory is often attributed to culture: the Japanese are conformists who are content to take public transport, unlike freedom-loving Americans who prefer to drive everywhere. Europeans are somewhere in between. Culture is also used to explain the incredible punctuality of Japanese railways.
These cultural explanations are wrong. The Japanese love cars, but they take trains because they have the best railway system in the world. And their system excels because of good public policy: business structure, land use rules, driving rules, superior models for privatization, and sound regulation have given Japan its outstanding railways.
This is good news for friends of rail. Culture is built over centuries, and replicating it is hard. But successful public policies can be emulated by one good government. Much about Japan’s railway system could be replicable around the world.
Today, the most striking institutional feature of Japanese rail is that it is privately owned by a throng of competing companies.
The railway arrived in Japan in 1872, during the Meiji Restoration, which opened the country up to foreign trade, ideas, and technologies. Like most Western countries, Japan nationalized its railways in the early twentieth century, creating what became known as Japanese National Railways (JNR). But it did not nationalize all of the lines, focusing only on mainline railways of national importance, and new private railways were still permitted.
Between 1907 and World War II, Japan saw a boom in new private electric railways, coinciding with rapid urbanization. Technologically, most of these private railways were similar to the famous interurbans in the United States: they were basically electric trams, but running between cities as well as within them. The American network eventually withered, and almost nothing of it survives today. In Japan, however, the network consolidated, and the light tramlines gradually evolved into heavy-rail intercity connections.
These companies are today known as ‘legacy private railways’ on account of their having been private since their inception. There are eight legacy private railways in the Tokyo metropolitan area, five in the Osaka–Kobe–Kyoto megalopolis, two in Nagoya, and one in the fourth city of Fukuoka. There are also dozens of smaller ones elsewhere. In the three largest urban areas, these operators account for nearly half of railway track and stations, as well as a plurality of ridership. The largest, Kintetsu, not only operates urban services, but a whole intercity network stretching from Osaka to Nagoya.
These companies often compete head-to-head. At its most extreme, three separate commuter lines compete for the traffic between Osaka and the port city of Kobe, running in parallel, sometimes fewer than 500 meters apart.
Meanwhile, the nationalized railways were managed by JNR. In the postwar era, JNR was responsible for building the famous Shinkansen system, as well as running commuter and long-distance lines throughout Japan. But in 1987, it was largely privatized, broken into six regional monopolies for passenger services together with a single national freight operator. These are collectively known as the Japan Railways Group (JR).
This means that Japan has ended up with six railway companies that trace their descent to the nationalized railways, the sixteen big legacy companies that have always been private, and a host of minor legacy railways, as well as numerous underground metros (some private, some municipally owned), monorails, and tram systems. This institutional diversity is striking enough. But equally striking is the consistent business model that has evolved amidst this pluralism: the railway that builds a city.
If I take a train to go for a solitary walk in the countryside, the railway company can capture some of the value it creates by charging me for the journey, just as other companies capture the value of the goods and services they provide by charging for them. However, if I take a train to visit family, clients, a theater, or a shop, an important difference appears. The railway can capture the value it creates for me by charging me a fare, but it cannot capture the value it creates for those at my destination. As transport infrastructure creates benefits that produce no revenue for providers, free markets rarely build enough of it.
Japan has partly solved this problem by enabling railway companies to do a great deal besides running railways. Take the example of the Tokyu corporation, one of the legacy private railways in southern Tokyo. You can not only travel on its trains, but also ride a Tokyu bus, live in a Tokyu-built house, work in a Tokyu office complex, see a doctor in a Tokyu hospital, buy groceries in a Tokyu supermarket, spend an afternoon at a Tokyu museum-theater-cinema complex, take your children to its amusement park, and even die in a Tokyu retirement home. The positive spillover effects of the railway on these things are captured by Tokyu because it owns them. The president of Tokyu has said:
I think that though we are a railway company, we consider ourselves a city-shaping company. In Europe for instance, railway companies simply connect cities through their terminals. That is a pretty normal way of operating in this industry, whereas what we do is completely different: we create cities and then, as a utility facility, we add the stations and the railways to connect them one with another.
This model was pioneered in the 1910s by what became Hankyu Railways. Hankyu’s network connects central Osaka to its northern suburbs, as well as Kyoto and Kobe. Its innovative founder Kobayashi Ichizo first built suburban housing, then a department store at the terminal station; he then created a hot spring resort, a zoo, and his own distinctive brand of all-women musical theater, the Takarazuka Revue. He also began to run bus services to and from his stations. Other companies emulated Hankyu’s example: Tokyo Disneyland is a collaboration between Disney and the Keisei Railway, while Hanshin in Osaka owns the Hanshin Tigers baseball team.
Core rail operations are profitable for every Japanese private railway company, but they usually only account for a plurality or a small majority of revenue. The rest is contributed by their portfolio of side businesses. There is a natural financial synergy between the reliable but unremarkable cash flow of train fares and the profitable but riskier real estate and commercial side of the business. Railway companies’ side businesses also attract people to live and work on their rail corridor, reinforcing the customer base for the railway services themselves.
This virtuous circle is enabled by transit-oriented development. Japan’s liberal land use regulation makes it straightforward to build new neighborhoods next to railway lines, giving commuters easy access to city centers. It also enables the densification of these centers, which means that commuters have more places they want to go.
Railways cost a lot to build, but once they are built, they can move enormous numbers of people, far more than a road of similar size. This means that they work best in cities with a high density of people, jobs, and other activities. In 2019, New York City was the only American city where rail had a higher modal share than cars, in part because Manhattan has 2.5 million jobs, two million residents, and 50 million tourist visits crammed into 59 square kilometers.
This does not mean that rail-oriented cities must be structured like Chinese cities: islands of high-rise apartments connected by metros and separated by motorways. Japanese cities have the lowest residential density in Asia, and a plurality of the Japanese live in houses, usually detached ones. The urban area of Tokyo, the densest Japanese city, has a weighted population density less than that of many European cities, including Paris, Madrid, or Athens. Japanese cities have vast low-rise, predominantly residential suburbs, built at densities that might be higher than what is typical in the United States, but that would be quite normal in Northern Europe.
What makes Japan’s cities particularly suited to rail is thus not their residential districts, but their huge and hyperdense centers. These really are special: the cores of Tokyo or Osaka are unlike anything that exists in Europe or North America. Many of their features are famous worldwide: the vertical street zakkyo buildings, underground streets, shopping streets under rail tracks, covered arcades, elevated station squares, and vertical cities. Getting millions of commuters and shoppers into these downtowns is where rail excels because its extreme spatial efficiency means that infrastructure with a relatively modest footprint can transport vast numbers of people into a small area.
None of this emerged from a coherent masterplan of transit-oriented development like Copenhagen’s Finger Plan or Curitiba’s Trinary System. Postwar Japanese opinion was committed to decentralization both to rural peripheries and to the suburbs through greenbelts, motorways, and new towns.
Instead, this variety and adaptability around railways is possible because of the way Japanese urban planning works. Since 1919, Japan has had a standardized national zoning system, but it is much more liberal than development control systems in Western countries. The Japanese authorities did not intend or even desire dense urban centers, but they did not prevent them, rather like nineteenth-century governments in the West.
This liberal zoning system is reinforced by private access to city planning powers. Thirty percent of Japan’s urban land has been subject to land readjustment, where agreement among two thirds of residents and landowners in an area is enough to allow its replanning, including compulsorily taking and demolishing land for amenities and infrastructure. Initially land readjustment was used only to assemble rural land for urbanization, but over time it was increasingly used to redevelop already urbanized areas, and new variants were created to build the skyscrapers that surround the major stations of central Tokyo.
The history of the private railway companies could be written as a story of land readjustment projects: the initial building of the lines in the interwar years proceeded through one land readjustment project after another. Postwar improvements such as double-tracking, platform lengthening, and constant redevelopment of stations and their immediate thresholds were only possible because the railways could secure land takings cooperatively with local businesses and landowners.
Perhaps the greatest example of this phenomenon involved Tokyu. In 1953 the company decided to build the Den’en Toshi Line, or Garden City Line, to serve a rural area southwest of Tokyo. This would be enabled by a series of land readjustment projects collectively among the largest in Japanese history.
Over 30 years, 3,100 hectares were covered, of which only 36 percent was devoted to residential and commercial development, with 20 percent for forest and parks, 17 percent for roads, and much of the rest for watercourses. The population of the land readjustment zone would rise from 42,000 in 1954 to over 500,000 in 2003.
By connecting the affluent southwestern suburbs to Tokyu’s main real estate hub next to Shibuya station, now the second busiest in the world, the Den’en Toshi Line allowed Tokyu to become the largest private railway by revenue and ridership. The Japanese government and academics generally consider the Den’en Toshi Line to be the best corridor of transit-oriented development in Japan.
But the railway-as-city-builder model is not the only reason Japanese railways have been able to thrive. European countries usually prohibited railways from running real estate side businesses, but in the United States and Canada the practice was extremely widespread in the nineteenth and early twentieth centuries, and many famous railway suburbs were developed this way. Despite this, passenger rail in these countries collapsed in the mid-twentieth century. Part of the difference was that Japan did not extend the same implicit subsidies to cars as Western governments did.
The land of Toyota, Nissan, and Honda is not an anti-car nirvana. In fact, Japan has excellent motorways, and across the country as a whole a small majority of journeys are made by car. But Japan is a place where cars and car-oriented lifestyles compete on a level playing field.
Japan is one of the only countries to have privatized parking. In Europe and North America, vast quantities of parking space is socialized: municipalities own the streets and allow people to park on them at low or zero cost. Initially with the intention of encouraging the provision of more parking spaces, Japan made it illegal to park on public roads or pavements without special permission. Before someone buys a car, they must prove that they have a reserved night-time space on private land, either owned or leased.
Since parking on public land is banned, municipalities are not worried about overspill parking from developments with inadequate private parking. They therefore have no reason to impose parking minimums on developments: the market is left to decide whether parking is the most valuable use of private land. Where land is abundant, as in rural areas, suburbs, or small towns, private parking is plentiful. But in city centers, it is outcompeted by other land uses. According to the late Donald Shoup, central Tokyo has 23 parking spaces per hectare and 0.04 parking spaces per job, compared with 263 and 0.52 for Los Angeles. Even Manhattan, the densest urban area in North America with the lowest levels of car ownership, has about 60 parking spaces per hectare.
Japanese roads are expected to be self-financing. Motorways are run by self-contained public cooperatives, very similar to the statutory authorities that ran English roads and canals between 1660 and the late 1800s, and funded by tolls on their users. Vehicle registration taxes, which are allocated to localities for road construction and maintenance, are worth three percent of the Japanese government budget.
These measures, adopted in the 1950s, were not intended to suppress car use — the point was to fund a massive road expansion — but they have forced private vehicles to internalize many of their hidden costs. In the Tokyo urban area, the average household spends 71,000 yen ($450) each year on public transport fares and 210,000 yen ($1,350) on car purchase and maintenance costs.
But the private car was not the only competitor faced by the private railways. For eight decades in the twentieth century, they also had to face the juggernaut of Japanese National Railways. Its privatization in 1988 removed the final obstacle to creating the world’s best railway system.
Railway privatization in Britain, New Zealand, Argentina, and Sweden has had a mixed reception, and all of those countries, apart from Sweden, have taken steps to reverse it. In Japan, it has been so successful that the government subsequently privatized the metro systems in Tokyo and Osaka.
In the postwar period, JNR enjoyed real successes. It built the revolutionary Shinkansen, the first high-speed railway in the world. It also aggressively electrified and double-tracked major trunk lines, quadruple-tracked lines into and out of major cities, and added city-center loops and freight bypasses. But these achievements were overshadowed by two problems.
The first was politics. Many countries adapted to the rise of the car by closing the least profitable parts of their passenger rail network, like the consolidation of American freight rail into the Class I operators or the Beeching Axe in Britain. In Japan, however, the ruling Liberal Democratic Party drew its support from rural constituencies, which it retained with pork-barrel politics. Its ‘rail tribe’ group, led by rural MPs, prevented JNR from adapting itself to mass motorization.
JNR therefore did not amputate gangrenous rural and freight services that imposed heavy costs with few benefits. Worse, it continued to build new loss-making rural railway lines, known in Japanese as gaden intetsu, or ‘pulling the railway into one’s own rice field’.
The second problem was organized labor. In general, Japanese trade unions are known for their moderation and responsibility, a generalisation that also held true for the unions at the legacy private railways. The JNR unions, however, became highly militant, secure in the knowledge that their nationalized employers could never go bankrupt. Their largest series of strikes in 1973 provoked riots from commuters.
The railway unions imposed overstaffing on revenue-generating urban services, at a time when both international and private domestic operators were reducing staffing requirements against a backdrop of higher wages and the growing automation of signaling and ticketing. As a result, 78 percent of JNR’s costs were related to labor, compared to 40 percent for other Japanese railways. The average worker at a private railway was 121 percent more productive than their JNR counterpart.
By the early 1980s, only seven out of 200 JNR lines made a profit. Successive governments deferred serious reform, running up debt, cutting down investments in new urban lines, raising ticket prices to twice those of comparable private railways, and increasing subsidies — which rose until annual subsidies equaled the total cost of the Shinkansen.
In 1982, Prime Minister Yasuhiro Nakasone started to privatize the railways. Unlike other countries, Japan simply returned to the traditional private railway model of the nineteenth and early twentieth centuries: tracks, trains, stations, and yards were owned by vertically integrated regional conglomerates.
There are substantial advantages to vertical integration. Railways are a closed system that has to be planned as a single unit. Changing the timetable at station A can affect the timetable at station Z; buying new trains that can travel faster might require changes to the infrastructure so they can reach their top speed, which in turn requires rewriting the timetables. This becomes especially complicated if different services share tracks. To prevent delays from propagating from one service to another, the timetable needs to be carefully designed to make best use of the available infrastructure.
The starkest effect of privatization was a massive and immediate increase in labor productivity and profitability relative to the legacy private railways. In fact, this began before privatization: its mere threat strengthened the government’s hand when bargaining with the unions and forced JNR to begin closing rural lines.
Privatization saw a general trend of productivity improvements, following a big one-time improvement between 1982 and 1990, when the workforce was cut by more than half, 83 loss-making lines were removed, and JNR’s debts were transferred to a holding company.
The second great advantage of privatization was to allow the JR companies to emulate the railway-as-city-builder model of the legacy private railways: for instance, JR East owns two shopping center brands, a ski resort, a coffee chain, and even a vending machine drink company. The JR companies have not ignored their rail business: they have continued to build new high-speed lines and urban tunnels, upgrade stations, and implement a host of other improvements such as the introduction in the 1990s of smart cards that allow passengers to pay their fare with a tap.
This does not mean that the Japanese railway industry is a pure creature of free enterprise. No railway system ever has been. The Japanese system has found an equilibrium that makes rail policy explicit and limited. Leaving aside railway safety and business regulation, there are two main policy levers: fare maximums and capital expansion subsidies.
Price controls are often cited as a classic example of misguided government intervention, whether through rent controls, caps on the price of gasoline, wage freezes, or minimum agricultural prices. Fare caps can distort railways the same way: Tokyo’s infamously crammed trains are a symptom of underpriced rush hour traffic.
Railways have market power because the substitutes for railway trips – coaches, cars and planes – are quite a different product. This monopolistic position has historically meant trouble: monopoly systems, whether private or public, have a tendency to abuse their position to charge higher prices and run bad services. For this reason, the private monopolies that were common in the Western world before World War I often had price controls imposed on them. For example, most of the American streetcar networks were operated as long-term, price-controlled franchises granted by the city.
Price maximums, if set too low, could have ruined Japan’s railways. This is exactly what happened to many Western transit services after the First World War. But the postwar Japanese practice has capped fares generously. The system is explicitly designed to maintain profitability per rider, which in turn incentivizes the companies to maximize ridership. That buys political legitimacy for the privatized system, which is necessary for the continued provision of capital expansion subsidies. Indeed, during the long deflation era between 1992 and 2022, it was common for operators to charge below the maximum, and the real value of railway fares continued to rise. Fare maximums are set on the basis of the average cost structures of all railway operators in a region, so companies with below-average costs like Tokyu would often charge below the cap to maintain a competitive edge, prevent public backlash, and maximize traffic to their side-businesses.
Other than the fare maximums, the railways are free to make their own decisions about timetables, service patterns and day-to-day operations, a highly specialized and technical task which requires deep expertise. This contrasts with the government meddling with, say, Amtrak’s routes.
Carefully designed public subsidies also play a useful role. Although Japanese railways do not receive subsidies for day-to-day operations, they do receive government loans and grants for capital investments. These are typically tied to public priorities, such as disability access or earthquake-proofing, or to projects that have large spillovers that the railway company would be unable to internalize, like removing level crossings, or elevating at-grade railways or trams in order to reduce road congestion and accident risk. Generally, the local prefectural government will match the contribution of the national government. Larger new-build projects come with leaseback or debt-repayment conditions under which fare revenue is expected to cover the cost.
Railway companies invested heavily in real estate businesses, often funding lines through selling land for housing around new stations. Liberal spatial policy meant that such development happened easily, even as it enabled dense development in urban cores where radial rail lines converged. Rail companies were generally vertically integrated regional monopolies, owning the land, track, and rolling stock, setting their own timetables, and employing their staff. The state imposed controls to stop them exploiting their monopoly position, but it did so cautiously, allowing them to make sufficient profit that incentives to invest were preserved. Capital subsidies were targeted at providing specific public goods that normal commercial operations overlooked.
The above paragraph could be written by a historian of the future about contemporary Japan. But every word in it could also be written by a historian today about the United States in the nineteenth century — usually seen as the epitome of capitalist individualism. This striking fact contradicts the idea that America’s supposed individualism foreordains it to be the land of the car, or that Japan’s supposed communitarianism foreordained it to be the land of rail.
It also puts pressure on the idea that the demise of rail is the inevitable consequence of cars. All countries saw some shift to cars in the twentieth century, and all rail industries had to respond to that. But public policy had an enormous effect on how successfully they did so. The rise of zoning restrictions on density, excessive price controls, nationalization, and vertically disintegrated privatization have hampered Western rail in remaining competitive against cars since the 1920s. By maintaining and restoring the institutions that built the first railway systems in the nineteenth century, the Japanese have created the mightiest railway system of the twenty-first.
...
Read the original on worksinprogress.co »
...
Read the original on www.righto.com »
I tried Claude Design yesterday and I have a theory for how this whole thing shakes out.
As product teams scaled and design needed to justify itself inside engineering orgs, it was pushed toward systematization — and Figma invented its own primitives to make that work: components, styles, variables, props, and so on. Some concepts are borrowed from programming, some aren’t, and the whole thing doesn’t neatly map onto anything. Guidance evolves, migrations pile up, and if you want to automate any of it you’re stuck with a handful of shoddy plugins. The beast is hairy enough that entire design roles now specialize in wrangling the system itself.
There’s always been a tense push-pull between Figma and code over what the source of truth should be. Figma won over Sketch partially by staking its claim there — their tooling would be canonical.
That victory had a hidden cost. By nature of having a locked-down, largely undocumented format that’s painful to work with programmatically, Figma accidentally excluded themselves from the training data that would have made them relevant in the agentic era. LLMs were trained on code, not Figma primitives, so models never learned them. As code becomes easier for designers to write and agents keep improving, the source of truth will naturally migrate back to code. And all the baroque infrastructure Figma had to introduce over the past decade will look nuts by comparison. Why fuss around in a lossy approximation of the thing when you can work directly in the medium where it will actually live? If we want to make pottery, why are we painting watercolors of the pot instead of just throwing the clay?
At work, we’ve spent quite a bit of time back-porting design changes made directly in code back to Figma and it is not fun. I can’t share that file, but for a fair comparison, this is Figma’s own design system file for their product. I have to assume it was built by the most competent design system team you can find. And yet…
These are Figma’s own files. Built by their own team. This is the gold standard.
Imagine debugging a color that looks wrong. You check the component. The component uses a variable. The variable is aliased to another variable. That variable references a mode. The mode is overridden at the instance level. The instance lives inside a nested component with a library swap applied. At this point, you’re either considering picking up code or moving to the countryside and becoming a sheep farmer because one more minute of this will make you lose your goddamn mind.
So as the source of truth shifts back to code, Figma is left in an odd spot: holding a largely manual, pre-agentic system that nobody in their right mind would design from scratch today.
I think design tooling forks into two distinct shapes from here — and there’s almost a clock resetting between Figma and every other tool competing to answer the same question they answered in 2016: who can help me, a designer, get my ideas out fastest?
Spoiler: it’s not Figma Make. Figma Make feels like it primarily benefits people who have already drunk the Kool-Aid — it reads from Figma styles, component libraries, and proprietary props (or, as I like to call them, Prop Props), and it’s the only tool in this new landscape still pretending the design file is canonical. It’s the tool for people who want to (or have no choice but to) stay inside the system.
Claude Design is the first of those two tools, and takes the opposite bet. There’s an Arts and Crafts principle called “truth to materials” — the idea that a thing should be honest about what it is and how it’s made, rather than masquerading as something else. Figma ended up being the opposite of this: a set of extremely rigid schemas with a free-form “just vibes, man” costume over the top. Like a Type-A personality physically incapable of relaxing, forced to perform chill while internally screaming that your frames aren’t nested and your tokens are detached and nothing is on the grid. Claude Design, for all its roughness, is at least honest about what it is: HTML and JS all the way down.
And it has a massive structural advantage: its sibling is Claude Code. Eventually, I can see Claude Design just dumping things directly into Claude Code and vice versa. Claude Design’s onboarding already lets you import your repos. The feedback loop between design and implementation — which has been a source of friction since the beginning of time — becomes a single conversation.
The other tool that emerges from this moment will have no expectation of code at all. It’ll be a pure exploration environment — somewhere to drop rectangles, stack layer styles, fuss with blend modes and gradients, and go completely nuts, unconstrained by systems or prompting conventions. Maybe it’s an iPad app with Pencil support where you just quickly sketch a bunch of rectangles. 37signals could do something really funny right now. Or maybe it goes in the opposite direction — something more like Photoshop that goes all-in on high-fidelity compositing and lets our imaginations run wild, now that we’re no longer beholden to the ceiling of what you can do with CSS effects. Doesn’t it seem kinda weird how for 90% of its life, Figma’s only layer effect was a drop shadow or a blur?
Figma’s Sketch moment is rapidly approaching. And if you said that sentence to a Victorian child, they would probably have a stroke.
The following are messages meant only for the teams behind Sketch and Figma. If neither apply to you, you can skedaddle.
To Figma: I can see a world where this post does numbers in the Figma internal Slack. If that’s the case and you’re reading this from Figma: this wouldn’t have happened if you hired me last year when I was interviewing. Your loss, big dawg.
To Sketch: GET YOUR HEADS OUTTA YOUR ASSES AND GIVE EM HELL. ADD PARTICLE EFFECTS. ADD DEBOSSING EFFECTS. MESH TRANSFORMS. FUCK IT, ADD METAL SHADERS. GO NUTS. STOP COASTING OFF OF BEING MAC NATIVE. QUIT DRINKING COCOA AND GET THIRSTY FOR BLOOD.
To mom: Sorry for cursing.
@jonnyburch on Twitter shared a link to their blog post with similar thoughts; it’s quite good if you wanna go deeper.
...
Read the original on samhenri.gold »
Computer chips that cram billions of electronic devices into a few square inches have powered the digital economy and transformed the world. Scientists may be on the cusp of launching a similar technological revolution — this time using light.
In a significant advance toward that goal, National Institute of Standards and Technology (NIST) scientists and collaborators have pioneered a way to make integrated circuits for light by depositing complex patterns of specialized materials onto silicon wafers. These so-called photonics chips use optical devices such as lasers, waveguides, filters and switches to shuttle light around and process information. The new advance could provide a big boost for emerging technologies such as artificial intelligence, quantum computers and optical atomic clocks.
Making circuitry for light as powerful and ubiquitous as circuitry for electrons is one of today’s technological frontiers, says Scott Papp, a NIST physicist whose group led the research, published this week in Nature. “We’re learning to make complex circuits with many functions, cutting across many application areas.”
When it comes to information transfer and processing, light can do things that electricity can’t. Photons — particles of light — are far zippier than electrons at working their way through circuits.
Laser light is also essential for controlling powerful, emerging quantum technologies such as optical atomic clocks and quantum computers.
But several hurdles remain before integrated photonics can truly hit its stride. One involves lasers. High-quality, compact and efficient lasers exist in only a few wavelengths, or colors, of light. For example, semiconductor lasers are very good at generating infrared light with a wavelength of 980 nanometers, or billionths of a meter — a color just outside the range of human vision.
Emerging technologies such as optical atomic clocks and quantum computers need laser light in many other colors as well. The lasers that produce those colors are big, costly and power-hungry, effectively confining these quantum technologies to a handful of special-purpose labs.
By integrating lasers into circuits on chips, scientists hope to help quantum technologies become cheaper and more portable, so they can start to fulfill their vast promise.
The new NIST photonics chip is a bit like a layer cake. NIST physicists Papp and Grant Brodnik, along with colleagues, started with a standard wafer of silicon coated with silicon dioxide (glass) and lithium niobate, a so-called nonlinear material that can change the color of light coming into it.
The researchers then added pieces of metal to electrically control how the circuits convert one color of light to others. The scientists also created other metal-lithium niobate interfaces that allowed them to rapidly turn light on and off within the circuits — a crucial ability for data processing and high-speed routing.
The icing on the cake, so to speak, was a second nonlinear material called tantalum pentoxide, or tantala. Tantala can transform light in ways that feel like magic, taking in a single laser color and putting out the full rainbow of visible light colors plus a wide range of infrared wavelengths. Papp and colleagues have spent years developing techniques to fabricate circuits out of tantala without heating it up, allowing the material to be deposited onto other materials without damaging them.
By patterning the different materials on top of each other in a three-dimensional stack, the researchers produced a single chip that efficiently routes light between layers. That allowed them to merge the light-manipulating wizardry of tantala with the controllability of lithium niobate. The new technique “allows seamless integration,” says Brodnik. “The real power is that tantala can be added to existing circuitry.”
Ultimately, the researchers were able to fit roughly 50 fingernail-sized chips containing 10,000 photonic circuits, each outputting a unique color, onto a wafer roughly the size of a beer coaster. “We can create all these different colors, just by designing circuits,” says Papp.
Quantum technologies such as clocks and computers could be among the biggest beneficiaries of integrated photonics. These devices often use arrays of atoms to store and process information. For each type of atom, physicists need lasers tailored to the atom’s internal quantum energy levels. For example, rubidium atoms, commonly used in quantum computers and clocks, respond to red light with a wavelength of 780 nanometers. Strontium atoms, another popular choice, “see” blue light at 461 nanometers. Shine other colors on the atoms and nothing happens.
The bulky, costly and complicated lasers needed to produce these bespoke colors have been a major hindrance to getting quantum computers and optical clocks out of the lab and into the field, where they could have big impacts. Cheap, low-power, portable optical clocks, for example, could help predict volcanic eruptions and earthquakes, offer an alternative to GPS for positioning and navigation, and help scientists investigate scientific mysteries such as the nature of dark matter. Quantum computers could offer new ways to study the physics and chemistry of drugs and materials.
Integrated photonic circuits aren’t just for quantum. Papp believes NIST’s photonics chips could help efficiently shuttle signals between the specialized chips used by tech firms, potentially making AI-based tools more powerful and efficient. Tech companies are also interested in using photonics to improve virtual reality displays.
While NIST’s chips aren’t yet ready for mass production, the technique used to create them provides a path forward, Papp and Brodnik say. The NIST scientists collaborated with experts at Octave Photonics, a Louisville, Colorado-based startup company founded by former NIST researchers that’s now working to scale up the technology.
“When you see the chip glowing in the lab, taking in invisible light and making all this visible light in one integrated chip — it’s obvious how many potential applications there could be,” says Papp.
...
Read the original on www.nist.gov »
The scene is right out of the 1950s with students pecking away at manual typewriters, the machines dinging at the end of each line.
Once each semester, Grit Matthias Phelps, a German language instructor at Cornell University, introduces her students to the raw feeling of typing without online assistance. No screens, online dictionaries, spellcheckers or delete keys.
The exercise started in spring 2023 as Phelps grew frustrated with the reality that students were using generative AI and online translation platforms to churn out grammatically perfect assignments.
“What’s the point of me reading it if it’s already correct anyway, and you didn’t write it yourself? Could you produce it without your computer?” said Phelps.
She wanted students to understand what writing, thinking and classrooms were like before everything turned digital. So, she found a few dozen old manual typewriters in thrift shops and online marketplaces, and created what her syllabus calls an “analog” assignment.
It might be premature to say that typewriters are making a comeback beyond Cornell’s campus. But the revival is part of a national trend toward old-school testing methods like in-class pen-and-paper exams and oral tests to prevent AI use for assignments on laptops.
Typewriters bring ‘old days’ taste of doing one thing at a time
Students arrived for class on a recent analog day to find typewriters at the desks, some with German and some with QWERTY keyboards.
“I was so confused. I had no idea what was happening. I’d seen typewriters in movies, but they don’t tell you how a typewriter works,” said Catherine Mong, 19, a freshman in Phelps’ Intro to German class. “I didn’t know there was a whole science to using a typewriter.”
Like a rotary phone, the manual typewriter appears simple but is not intuitive to the smartphone generation. Phelps demonstrated how to feed the paper manually, striking the keys with force but not so hard the letters would smudge. She explained that the dinging bell signifies the end of a line and the need to manually return the carriage to start the next line. (“Oh,” said one student, “that’s why it’s called ‘return.’”)
“Everything slows down. It’s like back in the old days when you really did one thing at a time. And there was joy in doing it,” said Phelps, who brings in her two children, aged 7 and 9, to serve as “tech support” and ensure no one has their phones out.
The assignment carries lessons beyond simply how to use a typewriter, which is the whole point.
“It dawned on me that the difference with typing on a typewriter is not just how you interact with the typewriter, but how you interact with the world around you,” said computer science major Ratchaphon Lertdamrongwong, a sophomore, whose class had to write a critique of a German movie they’d watched.
In the absence of screens, there are no notifications to distract you as you write. Without every answer readily available at his fingertips, he asked his classmates for help, which Phelps heartily encouraged.
“While writing the essay, I had to talk a lot more, socialize a lot more, which I guess was normal back then,” Lertdamrongwong said, referring to the typewriter era. “But it’s drastically different from how we interact within the classroom in modern times. People are always on a laptop, always on the phone.”
Without a delete key and the ability to correct every mistake, he paused to think more intentionally about his writing.
“This might sound bad, but I was forced to actually think about the problem on my own instead of delegating to AI or Google search,” he said.
Most students found their pinkies weren’t strong enough to touch-type, so they typed more slowly, pecking at the keyboard with their index fingers.
Mong, the freshman, faced the added challenge of a recently broken wrist, requiring her to use just one hand. The self-described perfectionist was initially frustrated with how messy her page looked with odd spacing between certain letters and misspellings. (Phelps told students to backspace and type ’X’s over errors.)
“This thing I handed in had pencil marks all over it and definitely did not look clean or finished. But it’s part of the process of learning that you’re going to make mistakes,” said Mong, who found the assignment of typing a poem “fun and challenging.”
She embraced the odd spacing and played with the visual boundaries of the page to indent and fragment lines in the style of poet E. E. Cummings. It took several sheets of paper and many mistakes, all of which Mong saved.
“I’m probably going to hang them on my wall,” Mong said. “I’m kind of fascinated by typewriters. I told all my friends, I did a German test on a typewriter!”
The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
...
Read the original on sentinelcolorado.com »
Sixteen accounts made more than $100,000 each by accurately predicting the timing of the US airstrikes against Iran at the end of February. Later, a single user would make over $550,000 after betting that Ayatollah Ali Khamenei would be removed from power, just moments before his assassination by Israeli forces. On 7 April, right before Donald Trump announced a temporary ceasefire with Iran, traders bet $950m that oil prices would come down. They did.
These bets and other well-timed wagers accurately predicted the precise timing of major developments in the US-Israel war with Iran, creating huge windfalls and raising concerns among lawmakers and experts over potential insider trading.
Betting — once largely siloed to sporting events — has now spread to include contracts on news events where insider information could give some traders an advantage.
The proliferation of online betting markets like Polymarket and Kalshi has allowed bets on virtually any news event. It’s also easier than ever to buy commodity derivatives like oil futures, where traders gamble on what the price of oil will be in the future.
Leaders of some US federal agencies and some members of Congress said they want to crack down on suspicious trading taking place across different marketplaces, but it’s unclear how much headway regulators will make.
“Is the problem that we don’t have legislation or that we don’t have enforcement capabilities?” said Joshua Mitts, a law professor at Columbia University. “To have a law that can’t really be enforced effectively given the technological limitations, it’s sort of putting the cart before the horse.”
On the night of 27 February, the day before the US and Israel would carry out strikes on Iran, an unusual influx of about 150 accounts on Polymarket placed bets that the US would strike Iran the next day. A New York Times analysis found the bets totaled $855,000, with 16 accounts pocketing more than $100,000 each.
Soon after, a single anonymous Polymarket user, under an account named “Magamyman”, made over $553,000 after betting that Khamenei would be “removed” from power just moments before he was killed by an Israeli airstrike, according to a complaint filed to the Commodity Futures Trading Commission (CFTC), the federal agency that regulates futures markets, by Public Citizen, a consumer advocacy group. The complaint also cites a crypto-analytics firm that identified six “suspected insiders” who made a total of $1.2m on Polymarket after Khamenei was killed.
A similar surge of well-timed wagers appeared on 7 April, when at least 50 Polymarket accounts placed bets that the US and Iran would reach a ceasefire hours before Trump announced it in a Truth Social post. Earlier, the president had said “a whole civilization will die tonight” if Iran did not open the strait of Hormuz.
But traders weren’t just active on Polymarket: there were similar surges of oil futures trading activity just hours before Trump announced updates to the conflict that would lower oil prices.
On 23 March, traders placed $580m in bets on the oil futures market just 15 minutes before Trump said on social media that the US was having “productive” talks with Iran, according to the Financial Times. Those traders made a windfall when Trump’s comments triggered a sell-off that sent oil prices plummeting.
The same thing happened again on 7 April, this time when traders spent $950m on oil futures, betting that the price of oil would fall just hours before the ceasefire with Iran was announced.
“We can’t say from the outset whether any of these trades were illegal. Any one of them could be lucky, and any one of them could be based on lawful information,” said Andrew Verstein, a law professor at the University of California at Los Angeles. “But many of them bear the hallmarks of suspicious trades that would naturally warrant investigation.”
To those who closely follow trading patterns, the rush of activity before these events seems too large to be explained by luck alone.
“Not only the timing, but the amount of these bets makes it look very likely that someone had insider knowledge … and placed very, very substantial bets on it,” said Craig Holman, a government affairs lobbyist for Public Citizen who filed the group’s complaint to the CFTC.
Holman said he is skeptical about how bold the CFTC will be in its investigations given its current structure under the Trump administration. The commission typically has five bipartisan members who are appointed by the president. Now, the CFTC has only one commissioner: Michael Selig, whom Trump appointed at the end of 2025 and who has positioned himself as friendly toward prediction markets.
Over the last few months, the CFTC has been embroiled in fights with state legislatures, which argue that regulation of these online betting marketplaces belongs to the states.
Kalshi, Polymarket’s competitor, was temporarily banned in Nevada after the state sued the company for offering contracts there without a gambling license. Arizona, meanwhile, filed criminal charges against the company for allowing people to place bets on elections. In both cases, Kalshi denied any wrongdoing and has argued that the CFTC has exclusive jurisdiction over online prediction markets.
“It’s a wild west phase, when we’re talking about the prediction market industry, and now it’s spilled over into the stock market as well,” Holman said.
Anonymous sources told Reuters and Bloomberg that the CFTC launched an investigation into the oil futures trades placed on 23 March and 7 April, though the agency has not publicly confirmed that it is conducting one.
Speaking to Congress this week, Selig said that the agency is prepared to go after those who are suspected of insider trading, warning “we will find you and you will face the full force of the law”, but said that the commission would not issue any new regulations until it had five seated commissioners.
Polymarket did not respond to a request for comment. In a statement, White House spokesperson Davis Ingle said “federal employees are subject to government ethics guidelines that prohibit the use of nonpublic information for financial benefit”.
“Any implication that administration officials are engaged in such activity without evidence is baseless and irresponsible reporting,” Ingle said. “The CFTC will always uphold its duty to monitor fraud, manipulation and illicit activity daily.”
Federal law prohibits government employees, including those working for Congress or the White House, from using non-public information for personal profit.
In late March, a bipartisan group of representatives introduced a bill that would ban members of Congress and senior staff within the federal government from participating in prediction market contracts related to political events or policy decisions.
But experts warn that insider trading law is complex, and the new technology that makes it easier to place bets online leaves a trail that can be hard to follow.
Historically, insider trading occurs when a person uses exclusive information about a company to buy or sell stocks right before that information becomes public. These trades are policed by the Securities and Exchange Commission (SEC), which oversees the stock exchanges.
Insider futures trading could be seen as a subset of this typical insider trading, but the territory is new.
“The trick is that there are essentially no clean cases of people getting in trouble for commodity futures insider trading,” Verstein said. “The law there is just not well-developed.”
In a paper published last month, Mitts, the Columbia law professor, and other researchers screened more than 200,000 “suspicious wallet-market pairs” from February 2024 to February 2026 and found that traders in this group achieved a nearly 70% win rate, making $143m in well-timed bets tied to everything from the capture of former Venezuelan leader Nicolás Maduro to Taylor Swift’s engagement to Travis Kelce. The paper notes that informed traders face fewer legal constraints by trading on platforms like Polymarket or Kalshi because these markets still operate in a legal gray area.
“The challenge here is that this trading is occurring through the blockchain or other anonymized means, so it is going to be quite difficult for a regulator, enforcement authority or prosecutor to determine the identity of the trader,” Mitts said. “They would also have to prove the trader traded on the basis of information that had been misappropriated.”
But the stakes are high. Insider trading involving classified military information can lead to distrust of both markets and governments.
“Unlike corporate insider trading, there’s a lot of ways for the government to make itself correct. You can just make the war occur, and that’s concerning because then the real economy is being distorted,” Verstein said. “Real decisions, including perhaps financial decisions, are being distorted by financial bets.”
...
Read the original on www.theguardian.com »
...
Read the original on www.sumida-aquarium.com »
On April 17, 2026, engineers at NASA’s Jet Propulsion Laboratory (JPL) in Southern California sent commands to shut down an instrument aboard Voyager 1 called the Low-energy Charged Particles experiment, or LECP. The nuclear-powered spacecraft is running low on power, and turning off the LECP is considered the best way to keep humanity’s first interstellar explorer going.
The LECP has been operating nearly without interruption since Voyager 1 launched in 1977 — almost 49 years. It measures low-energy charged particles, including ions, electrons, and cosmic rays originating from our solar system and galaxy. The instrument has provided critical data about the structure of the interstellar medium, detecting pressure fronts and regions of varying particle density beyond our heliosphere. The twin Voyagers are the only spacecraft far enough from Earth to provide this information.
Like Voyager 2, Voyager 1 relies on a radioisotope thermoelectric generator, a device that converts heat from decaying plutonium into electricity. Both probes lose about 4 watts of power each year. After almost a half-century in space, power margins have grown razor thin, requiring the team to conserve energy by shutting off heaters and instruments while making sure the spacecraft don’t get so cold that their fuel lines freeze.
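The arithmetic behind that figure is straightforward radioactive decay. A rough sketch in Python, assuming roughly 470 watts of electrical output at the 1977 launch (a commonly cited figure for Voyager's three generators, not a number from this article) and plutonium-238's 87.7-year half-life:

    # Back-of-the-envelope RTG output over time. The ~470 W launch figure and the
    # Pu-238 half-life are assumptions from public references, not from the article.
    HALF_LIFE_YEARS = 87.7   # plutonium-238
    P0_WATTS = 470.0         # approximate electrical output at launch

    def rtg_power(years_since_launch: float) -> float:
        return P0_WATTS * 0.5 ** (years_since_launch / HALF_LIFE_YEARS)

    for year in (0, 25, 49):
        print(f"year {year}: ~{rtg_power(year):.0f} W")
    # year 0: ~470 W, year 25: ~386 W, year 49: ~319 W. Decay alone costs about
    # 3 W/year; degrading thermocouples account for the rest of the ~4 W/year loss.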
During a routine, planned roll maneuver on Feb. 27, Voyager 1’s power levels fell unexpectedly. Mission engineers knew any additional drop in power could trigger the spacecraft’s undervoltage fault protection system, which would shut down components on its own to safeguard the probe, requiring recovery by the flight team — a lengthy process that carries its own risks.
The Voyager team needed to act first.
“While shutting down a science instrument is not anybody’s preference, it is the best option available,” said Kareem Badaruddin, Voyager mission manager at JPL. “Voyager 1 still has two remaining operating science instruments — one that listens to plasma waves and one that measures magnetic fields. They are still working great, sending back data from a region of space no other human-made craft has ever explored. The team remains focused on keeping both Voyagers going for as long as possible.”
The choice of which instrument to turn off next wasn’t made in the heat of the moment. Years ago, the Voyager science and engineering teams sat down together and agreed on the order in which they would shut off parts of the spacecraft while ensuring the mission can continue to conduct its unique science. Of the 10 identical sets of instruments that each spacecraft carries, seven have been shut off so far. For Voyager 1, the LECP was next on that list. The team shut off the LECP on Voyager 2 in March 2025.
Because Voyager 1 is more than 15 billion miles (25 billion kilometers) from Earth, the sequence of commands to shut down the instrument will take about 23 hours to reach the spacecraft, and the shutdown process itself will take about three hours and 15 minutes to complete. One part of the LECP — a small motor that spins the sensor in a circle to scan in all directions — will remain on. It uses little power (0.5 watts), and keeping it running gives the team the best chance of turning the instrument back on someday if extra power is found.
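That 23-hour figure is simply the one-way light travel time at Voyager 1's stated distance, as a quick check confirms:

    # One-way light travel time to Voyager 1 at ~25 billion km.
    SPEED_OF_LIGHT_KM_S = 299_792.458
    distance_km = 25e9
    hours = distance_km / SPEED_OF_LIGHT_KM_S / 3600
    print(f"~{hours:.1f} hours each way")  # ~23.2 hours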
Engineers are confident that shutting down the LECP will give Voyager 1 about a year of breathing room. They are using the time to finalize a more ambitious energy-saving fix for both Voyagers they call “the Big Bang,” which is designed to further extend Voyager operations. The idea is to swap out a group of powered devices all at once — hence the nickname — turning some things off and replacing them with lower-power alternatives to keep the spacecraft warm enough to continue gathering science data.
The team will implement the Big Bang on Voyager 2 first, which has a little more power to spare and is closer to Earth, making it the safer test subject. Tests are planned for May and June 2026. If they go well, the team will attempt the same fix on Voyager 1 no sooner than July. If it works, there is even a chance that Voyager 1’s LECP could be switched back on.
...
Read the original on science.nasa.gov »