10 interesting stories served every morning and every evening.
A real-world production migration from DigitalOcean to Hetzner dedicated, handling 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and live mobile app traffic — with zero downtime.
Running a software company in Turkey has become increasingly expensive over the last few years. Skyrocketing inflation and a dramatically weakening Turkish Lira against the US dollar have turned dollar-denominated infrastructure costs into a serious burden. A bill that felt manageable two years ago now hits very differently when the exchange rate has multiplied several times over.
Every month, we were paying $1,432 to DigitalOcean for a droplet with 192GB RAM, 32 vCPUs, 600GB SSD, two block volumes (1TB each), and backups enabled. The server was fine — but the price-to-performance ratio had stopped making sense.
Then we discovered the Hetzner AX162-R.
At $233 per month, that’s $14,388 saved per year — for a server that’s objectively more powerful in every dimension. The decision was easy.
I’ve been a DigitalOcean customer for nearly 8 years. They have a great product and I have no complaints about reliability or developer experience. But looking at those numbers now, I cannot help feeling a bit sad about all the extra money I left on the table over the years. If you are running steady-state workloads and not actively using DO’s ecosystem features, do yourself a favor and check dedicated server pricing before your next renewal.
* Several live mobile apps serving hundreds of thousands of users
Old server: CentOS 7 — long past its end-of-life, but still running in production. New server: AlmaLinux 9.7 — a RHEL 9 compatible distribution and the natural successor to CentOS. This migration was also an opportunity to finally escape an OS that hadn’t received security updates in years.
The naive approach — change DNS, restart everything, hope for the best — wasn’t acceptable. Instead, we designed a proper migration path with six phases:
Phase 1 — Full stack installation on the new server
Nginx (compiled from source with identical flags), PHP (via the Remi repo, with the same .ini config files from the old server), MySQL 8.0, Neo4j, GitLab EE, Node.js, Supervisor, and Gearman. Every service had to be configured to match the old server’s behavior before we touched a single DNS record.
SSL certificates were handled by rsyncing the entire /etc/letsencrypt/ directory from the old server to the new one. After the migration was complete and all traffic was flowing through the new server, we force-renewed all certificates in one shot:
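A minimal sketch of those two steps, in the spirit of the article’s Python tooling. The hostname is a placeholder and the exact rsync/certbot flags are assumptions, not taken from the original scripts:

```python
# Sketch only: OLD_SERVER_IP is a placeholder; flags are assumed, not verbatim.
def letsencrypt_migration_cmds(old_host="OLD_SERVER_IP"):
    """Commands to clone the cert tree, then force-renew once traffic has moved."""
    return [
        # 1) copy the whole Let's Encrypt tree (certs, renewal configs, account keys)
        ["rsync", "-az", f"root@{old_host}:/etc/letsencrypt/", "/etc/letsencrypt/"],
        # 2) after cutover, renew everything in one shot so the ACME challenges
        #    now resolve against the new server
        ["certbot", "renew", "--force-renewal"],
    ]

for cmd in letsencrypt_migration_cmds():
    print(" ".join(cmd))
```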
Phase 2 — Web files cloned with rsync
The entire /var/www/html directory (~65 GB, 1.5 million files) was cloned to the new server using rsync over SSH with the --checksum flag for integrity verification. We ran a final incremental sync right before cutover to catch any files changed after the initial clone.
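The clone and the final pass can be sketched as the same command run twice; the host is a placeholder, and adding --delete on the final pass (so removals are mirrored too) is an assumption on our part:

```python
# Sketch: OLD_SERVER_IP is a placeholder; --delete on the final pass is an
# assumption, not confirmed by the article.
def web_sync_cmd(final=False, old_host="OLD_SERVER_IP"):
    cmd = ["rsync", "-az", "--checksum",      # verify content, not just mtime/size
           f"root@{old_host}:/var/www/html/", "/var/www/html/"]
    if final:
        cmd.insert(3, "--delete")             # final pass right before cutover
    return cmd

print(" ".join(web_sync_cmd()))
print(" ".join(web_sync_cmd(final=True)))
```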
Phase 3 — MySQL master to slave replication
Rather than taking the database offline for a dump-and-restore, we set up live replication. The old server became master, the new server a read-only slave. We used mydumper for the initial bulk load, then started replication from the exact binlog position recorded in the dump metadata. This kept both databases in real-time sync until the moment of cutover.
Phase 4 — DNS TTL reduction
We scripted the DigitalOcean DNS API to lower all A and AAAA record TTLs from 3600 to 300 seconds — without touching MX or TXT records (changing mail record TTLs can cause deliverability issues). After waiting one hour for old TTLs to expire globally, we were ready to cut over in under 5 minutes.
Phase 5 — Old server nginx converted to reverse proxy
We wrote a Python script that parsed every server {} block across all 34 Nginx site configs, backed up the originals, and replaced them with proxy configurations pointing to the new server. This meant that during DNS propagation, any request still hitting the old IP was silently forwarded. No user would see a disruption.
Phase 6 — DNS cutover and decommission
A single Python script hit the DigitalOcean API and flipped all A records to the new server IP in seconds. The old server remained as a cold standby for one week, then was shut down.
The key insight: at no point did we have a window where the service was unavailable. Traffic was always being served — either directly or through the proxy.
This was the most complex part of the entire operation.
We used mydumper instead of the standard mysqldump — and it made an enormous difference. By leveraging the new server’s 48 CPU cores for parallel export and import, what would have taken days with a traditional single-threaded mysqldump was completed in hours. If you’re migrating a large MySQL database and you’re not using mydumper/myloader, you’re doing it the hard way.
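The shape of the dump and restore invocations, sketched as command builders (paths are placeholders; the flags are real mydumper/myloader options, but the exact set used in the migration is an assumption):

```python
# Sketch: /backup/dump is a placeholder path; flag selection is assumed.
def mydumper_cmd(outdir="/backup/dump", threads=32):
    return [
        "mydumper",
        "--threads", str(threads),   # parallel workers across table chunks
        "--compress",                # gzip chunks on the fly (smaller transfer later)
        "--outputdir", outdir,       # also receives the 'metadata' binlog-position file
    ]

def myloader_cmd(indir="/backup/dump", threads=32):
    return [
        "myloader",
        "--threads", str(threads),   # parallel import on the new server
        "--overwrite-tables",        # replace any tables created during testing
        "--directory", indir,
    ]

print(" ".join(mydumper_cmd()))
print(" ".join(myloader_cmd()))
```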
The main dump’s metadata file recorded the binlog position at the time of the snapshot:
File: mysql-bin.000004
Position: 21834307
This would be our replication starting point.
Once the dump was complete, we transferred it to the new server using rsync over SSH. With 248 GB of compressed chunks, this was significantly faster than any other transfer method:
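A sketch of that transfer command (host and paths are placeholders; --partial is our assumption, useful because it lets an interrupted SSH session resume without re-sending finished chunks):

```python
# Sketch: NEW_SERVER_IP and the dump path are placeholders.
def dump_transfer_cmd(new_host="NEW_SERVER_IP", dump_dir="/backup/dump/"):
    # --partial keeps half-transferred chunks so a dropped session resumes cheaply
    return ["rsync", "-av", "--partial", dump_dir, f"root@{new_host}:{dump_dir}"]

print(" ".join(dump_transfer_cmd()))
```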
The --compress flag in mydumper paid off here — compressed chunks transferred much faster over the wire.
Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7 — an outdated version that had been running in production for years. Before the migration, we ran mysqlcheck --check-upgrade to verify that our data was compatible with MySQL 8.0. It came back clean, so we installed the latest MySQL 8.0 Community on the new server. The performance improvement across all our projects was immediately noticeable — query execution times dropped significantly thanks to MySQL 8.0’s improved optimizer and InnoDB enhancements.
That said, the version jump did introduce one tricky problem.
After import, the mysql.user table had the wrong column structure — 45 columns instead of the expected 51. This caused mysql.infoschema to be missing, breaking user authentication.
But this failed the first time with:
ERROR: 'sys.innodb_buffer_stats_by_schema' is not VIEW
The sys schema had been imported as regular tables instead of views. Solution:
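We have not preserved the exact commands, but the usual remedy is to drop the mangled sys schema and let MySQL 8.0 rebuild it on a forced-upgrade restart. A hypothetical reconstruction:

```python
# Hypothetical reconstruction: the usual remedy, not the exact commands used.
SYS_SCHEMA_FIX = [
    # drop the broken copy that was imported as plain tables
    'mysql -u root -p -e "DROP DATABASE sys;"',
    # restart with upgrade=FORCE (e.g. set under [mysqld] in /etc/my.cnf) so the
    # server recreates the system schemas, sys included, as proper views
    "systemctl restart mysqld",
]
for step in SYS_SCHEMA_FIX:
    print(step)
```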
With both dumps imported, we configured the new server as a replica of the old one:
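Using the binlog coordinates recorded in the mydumper metadata file, the replica setup reduces to one CHANGE MASTER TO statement. Host, user, and password below are placeholders:

```python
def change_master_sql(host, user, password, log_file, log_pos):
    """Build the CHANGE MASTER TO statement from the mydumper metadata."""
    return (
        f"CHANGE MASTER TO MASTER_HOST='{host}', MASTER_USER='{user}', "
        f"MASTER_PASSWORD='{password}', MASTER_LOG_FILE='{log_file}', "
        f"MASTER_LOG_POS={log_pos};"
    )

# coordinates from the dump's metadata file; credentials are placeholders
print(change_master_sql("OLD_SERVER_IP", "repl", "********",
                        "mysql-bin.000004", 21834307))
print("START SLAVE;")
```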
Almost immediately, replication stopped with error 1062 (Duplicate Key). This happened because our dump was taken in two passes — during the gap between them, rows were written to certain tables, and now both the imported dump and the binlog replay were trying to insert the same rows.
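The fix was to switch the replica’s SQL thread into idempotent execution mode. These are standard MySQL statements, run on the new server:

```python
# Standard MySQL statements, applied on the replica (new server).
# slave_exec_mode is called replica_exec_mode on newer 8.0 releases.
IDEMPOTENT_FIX = [
    "STOP SLAVE SQL_THREAD;",
    "SET GLOBAL slave_exec_mode = 'IDEMPOTENT';",
    "START SLAVE SQL_THREAD;",
]
for stmt in IDEMPOTENT_FIX:
    print(stmt)
```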
IDEMPOTENT mode silently skips duplicate key and missing row errors. All critical databases synced without a single error. Within a few minutes, Seconds_Behind_Master dropped to 0.
Before touching a single DNS record, we needed to verify that all services were working correctly on the new server. The trick: we temporarily edited the /etc/hosts file on our local machine to point our domain names to the new server’s IP.
# /etc/hosts (local machine)
NEW_SERVER_IP yourdomain1.com
NEW_SERVER_IP yourdomain2.com
# … and so on for all your domains
With this in place, our browsers and Postman would hit the new server while the rest of the world was still going to the old one. We ran through our API endpoints, checked admin panels, and verified that every service was responding correctly. Only after this confirmation did we proceed with the cutover.
Once master-slave replication was fully synchronized, we noticed that INSERT statements were succeeding on the new server when they shouldn’t have been — read_only = 1 was set, but writes were going through.
The reason: all PHP application users had been granted SUPER privilege. In MySQL, SUPER bypasses read_only.
We revoked it from all 24 application users:
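The bulk revoke reduces to one REVOKE per user@host pair; the pairs come from the grant tables. User names below are hypothetical:

```python
def revoke_super_sql(users):
    """users: (user, host) pairs, e.g. from
    SELECT user, host FROM mysql.user WHERE Super_priv = 'Y';"""
    return [f"REVOKE SUPER ON *.* FROM '{u}'@'{h}';" for u, h in users]

# hypothetical application users for illustration
for stmt in revoke_super_sql([("app_api", "localhost"), ("app_web", "%")]):
    print(stmt)
```

REVOKE takes effect immediately, so no FLUSH PRIVILEGES is needed afterward.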
After this, read_only = 1 correctly blocked all writes from application users while allowing replication to continue.
All domains were managed through DigitalOcean DNS (with nameservers pointed from GoDaddy). We scripted the TTL reduction against the DigitalOcean API, only touching A and AAAA records — not MX or TXT records, since changing mail record TTLs can cause deliverability issues with Google Workspace.
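The selection logic reduces to a filter over each domain’s records; the dicts below mirror the shape of the DigitalOcean API’s record objects, and the actual change is a PUT to /v2/domains/{domain}/records/{id} per selected record:

```python
def ttl_updates(records, new_ttl=300):
    """Pick records whose TTL should drop: A/AAAA only, never MX or TXT."""
    return [
        {"id": r["id"], "ttl": new_ttl}
        for r in records
        if r["type"] in ("A", "AAAA") and r.get("ttl", 0) != new_ttl
    ]

records = [
    {"id": 1, "type": "A",    "ttl": 3600},
    {"id": 2, "type": "MX",   "ttl": 3600},  # left alone: deliverability
    {"id": 3, "type": "AAAA", "ttl": 300},   # already lowered, skip
]
print(ttl_updates(records))
```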
After waiting one hour for old TTLs to expire, we were ready.
Rather than editing 34 config files by hand, we wrote a Python script that parsed every server {} block in every config file, identified the main content blocks, replaced them with proxy configs, and backed up originals as .backup files.
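The generated per-site configs looked roughly like this; the domain and upstream IP are placeholders, and the exact directive set is an assumption based on the description:

```nginx
server {
    listen 443 ssl;
    server_name yourdomain1.com;
    # certs are already on disk from the /etc/letsencrypt rsync
    ssl_certificate     /etc/letsencrypt/live/yourdomain1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain1.com/privkey.pem;

    location / {
        proxy_pass https://NEW_SERVER_IP;
        proxy_set_header Host $host;            # preserve the original vhost
        proxy_set_header X-Real-IP $remote_addr;
        proxy_ssl_server_name on;               # send SNI so the right cert is picked
        proxy_ssl_verify off;                   # cert is for the domain, not the IP
    }
}
```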
The key: proxy_ssl_verify off — the new server’s SSL cert is valid for the domain, not for the IP address. Disabling verification here is fine because we control both ends.
With replication at Seconds_Behind_Master: 0 and the reverse proxy ready, we executed the cutover in order:
1. New server: STOP SLAVE;
2. New server: SET GLOBAL read_only = 0;
3. New server: RESET SLAVE ALL;
4. New server: supervisorctl start all
5. Old server: nginx -t && systemctl reload nginx (proxy goes live)
6. Old server: supervisorctl stop all
7. Mac: python3 do_cutover.py (DNS: all A records to new server IP)
8. Wait: ~5 minutes for propagation
9. Old server: comment out all crontab entries
The DNS cutover script hit the DigitalOcean API and changed every A record to the new server IP — in about 10 seconds.
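The core of such a script is again a filter: flip only the A records that still point at the old box, then PUT each change back through the API. The IPs below are documentation examples:

```python
def cutover_updates(records, old_ip, new_ip):
    """A records still pointing at the old server get flipped; nothing else moves."""
    return [
        {"id": r["id"], "data": new_ip}
        for r in records
        if r["type"] == "A" and r["data"] == old_ip
    ]

records = [
    {"id": 10, "type": "A",  "data": "203.0.113.10"},
    {"id": 11, "type": "MX", "data": "mail.example.com."},
]
print(cutover_updates(records, "203.0.113.10", "198.51.100.20"))
```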
After migration, we discovered many GitLab project webhooks were still pointing to the old server IP. We wrote a script to scan all projects via the GitLab API and update them in bulk.
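The rewriting step is the only interesting logic; each updated URL is then pushed back with PUT /api/v4/projects/:id/hooks/:hook_id. A sketch that swaps the host while preserving scheme, port, and path (IPs are documentation examples):

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_hook_url(url, old_ip, new_ip):
    """Swap the host of a webhook URL, leaving scheme/port/path intact."""
    parts = urlsplit(url)
    if parts.hostname != old_ip:
        return url                               # already pointing elsewhere
    netloc = parts.netloc.replace(old_ip, new_ip)
    return urlunsplit(parts._replace(netloc=netloc))

print(rewrite_hook_url("http://203.0.113.10:8080/gitlab/hook",
                       "203.0.113.10", "198.51.100.20"))
```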
We went from $1,432/month down to $233/month — saving $14,388 per year. And we ended up with a more powerful machine:
The entire migration took roughly 24 hours. No users were affected.
MySQL replication is your best friend for zero-downtime migrations. Set it up early, let it catch up, then cut over with confidence.
Check your MySQL user privileges before migration. SUPER privilege bypasses read_only — if your app users have it, your slave environment isn’t actually read-only.
Script everything. DNS updates, nginx config rewrites, webhook updates — doing these by hand across 34+ sites would have taken hours and introduced errors.
mydumper + myloader dramatically outperforms mysqldump for large datasets. Parallel dump/restore with 32 threads cut what would have been days of work down to hours.
Cloud providers are expensive for steady-state workloads. If you’re not using autoscaling or ephemeral infrastructure, a dedicated server often delivers better performance at a fraction of the cost.
All Python scripts used in this migration are open-sourced and available on GitHub:
* do_list_domains_ttl.py — List all DigitalOcean domains with their A records, IPs, and TTLs
* do_to_hetzner_bulk_dns_records_import.py — Migrate all DNS zones from DigitalOcean to Hetzner DNS
* do_cutover_to_new_ip.py — Flip all A records from old server IP to new server IP
* mysql_compare.py — Compare row counts across all tables on two MySQL servers
* final_gitlab_webhook_update.py — Update all GitLab project webhooks to the new server IP
All scripts support a DRY_RUN = True mode so you can safely preview changes before applying them.
...
Read the original on isayeter.com »
Anonymous request-token comparisons from the community, showing how Opus 4.6 and Opus 4.7 differ on real inputs
...
Read the original on tokens.billchambers.me »
Japan is the land of the train. 28 percent of passenger kilometers in Japan are traveled by rail, more than anywhere else in the developed world. France achieves 10 percent, Germany 6.4 percent, and the United States just 0.25 percent. Travel in Japan is over a hundred times more likely to be by rail than travel in the United States.
Japan’s vast railway network is divided between dozens of companies, nearly all of them private. The largest of these, JR East, carries more passengers than the entire railway system of every country other than China and India. Each year, JR East carries four times as many passengers as the whole British railway system, even though it has fewer kilometers of track, serves about ten million fewer people, and competes with eight other companies. Japan’s railway system turns a large operating profit and receives far less public subsidy than European and American railways.
In most developed countries, the railways have struggled since the rise of the automobile in the 1950s. From this point on, North America saw the near-total replacement of passenger trains with cars and planes. In Europe, it meant vast government financial support to keep the lines open.
Japan’s different trajectory is often attributed to culture: the Japanese are conformists who are content to take public transport, unlike freedom-loving Americans who prefer to drive everywhere. Europeans are somewhere in between. Culture is also used to explain the incredible punctuality of Japanese railways.
These cultural explanations are wrong. The Japanese love cars, but they take trains because they have the best railway system in the world. And their system excels because of good public policy: business structure, land use rules, driving rules, superior models for privatization, and sound regulation have given Japan its outstanding railways.
This is good news for friends of rail. Culture is built over centuries, and replicating it is hard. But successful public policies can be emulated by one good government. Much about Japan’s railway system could be replicable around the world.
Today, the most striking institutional feature of Japanese rail is that it is privately owned by a throng of competing companies.
The railway arrived in Japan in 1872, during the Meiji Restoration, which opened the country up to foreign trade, ideas, and technologies. Like most Western countries, Japan nationalized its railways in the early twentieth century, creating what became known as Japanese National Railways (JNR). But it did not nationalize all of the lines, focusing only on mainline railways of national importance, and new private railways were still permitted.
Between 1907 and World War II, Japan saw a boom in new private electric railways, coinciding with rapid urbanization. Technologically, most of these private railways were similar to the famous interurbans in the United States: they were basically electric trams, but running between cities as well as within them. The American network eventually withered, and almost nothing of it survives today. In Japan, however, the network consolidated, and the light tramlines gradually evolved into heavy-rail intercity connections.
These companies are today known as ‘legacy private railways’ on account of their having been private since their inception. There are eight legacy private railways in the Tokyo metropolitan area, five in the Osaka–Kobe–Kyoto megalopolis, two in Nagoya, and one in the fourth city of Fukuoka. There are also dozens of smaller ones elsewhere. In the three largest urban areas, these operators account for nearly half of railway track and stations, as well as a plurality of ridership. The largest, Kintetsu, not only operates urban services, but a whole intercity network stretching from Osaka to Nagoya.
These companies often compete head-to-head. At its most extreme, three separate commuter lines compete for the traffic between Osaka and the port city of Kobe, running in parallel, sometimes fewer than 500 meters apart.
Meanwhile, the nationalized railways were managed by JNR. In the postwar era, JNR was responsible for building the famous Shinkansen system, as well as running commuter and long-distance lines throughout Japan. But in 1987, it was largely privatized, broken into six regional monopolies for passenger services together with a single national freight operator. These are collectively known as the Japan Railways Group (JR).
This means that Japan has ended up with six railway companies that trace their descent to the nationalized railways, the sixteen big legacy companies that have always been private, and a host of minor legacy railways, as well as numerous underground metros (some private, some municipally owned), monorails, and tram systems. This institutional diversity is striking enough. But equally striking is the consistent business model that has evolved amidst this pluralism: the railway that builds a city.
If I take a train to go for a solitary walk in the countryside, the railway company can capture some of the value it creates by charging me for the journey, just as other companies capture the value of the goods and services they provide by charging for them. However, if I take a train to visit family, clients, a theater, or a shop, an important difference appears. The railway can capture the value it creates for me by charging me a fare, but it cannot capture the value it creates for those at my destination. As transport infrastructure creates benefits that produce no revenue for providers, free markets rarely build enough of it.
Japan has partly solved this problem by enabling railway companies to do a great deal beside running railways. Take the example of the Tokyu corporation, one of the legacy private railways in southern Tokyo. You can not only travel on its trains, but also ride a Tokyu bus, live in a Tokyu-built house, work in a Tokyu office complex, see a doctor in a Tokyu hospital, buy groceries in a Tokyu supermarket, spend an afternoon at a Tokyu museum-theater-cinema complex, take your children to their amusement park, and even die in a Tokyu retirement home. The positive spillover effects of the railway on these things are captured by Tokyu because it owns them. The president of Tokyu has said:
I think that though we are a railway company, we consider ourselves a city-shaping company. In Europe for instance, railway companies simply connect cities through their terminals. That is a pretty normal way of operating in this industry, whereas what we do is completely different: we create cities and then, as a utility facility, we add the stations and the railways to connect them one with another.
This model was pioneered in the 1910s by what became Hankyu Railways. Hankyu’s network connects central Osaka to its northern suburbs, as well as Kyoto and Kobe. Its innovative founder Kobayashi Ichizo first built suburban housing, then a department store at the terminal station; he then created a hot spring resort, a zoo, and his own distinctive brand of all-women musical theater, the Takarazuka Revue. He also began to run bus services to and from his stations. Other companies emulated Hankyu’s example: Tokyo Disneyland is a collaboration between Disney and the Keisei Railway, while Hanshin in Osaka owns the Hanshin Tigers baseball team.
Core rail operations are profitable for every Japanese private railway company, but they usually only account for a plurality or a small majority of revenue. The rest is contributed by their portfolio of side businesses. There is a natural financial synergy between the reliable but unremarkable cash flow of train fares and the profitable but riskier real estate and commercial side of the business. Railway companies’ side businesses also attract people to live and work on their rail corridor, reinforcing the customer base for the railway services themselves.
This virtuous circle is enabled by transit-oriented development. Japan’s liberal land use regulation makes it straightforward to build new neighborhoods next to railway lines, giving commuters easy access to city centers. It also enables the densification of these centers, which means that commuters have more places they want to go.
Railways cost a lot to build, but once they are built, they can move enormous numbers of people, far more than a road of similar size. This means that they work best in cities with a high density of people, jobs, and other activities. In 2019, New York City was the only American city where rail had a higher modal share than cars, in part because Manhattan has 2.5 million jobs, two million residents, and 50 million tourist visits crammed into 59 square kilometers.
This does not mean that rail-oriented cities must be structured like Chinese cities: islands of high-rise apartments connected by metros and separated by motorways. Japanese cities have the lowest residential density in Asia, and a plurality of the Japanese live in houses, usually detached ones. The urban area of Tokyo, the densest Japanese city, has a weighted population density less than that of many European cities, including Paris, Madrid, or Athens. Japanese cities have vast low-rise, predominantly residential suburbs, built at densities that might be higher than what is typical in the United States, but that would be quite normal in Northern Europe.
What makes Japan’s cities particularly suited to rail is thus not their residential districts, but their huge and hyperdense centers. These really are special: the cores of Tokyo or Osaka are unlike anything that exists in Europe or North America. Many of their features are famous worldwide: the vertical street zakkyo buildings, underground streets, shopping streets under rail tracks, covered arcades, elevated station squares, and vertical cities. Getting millions of commuters and shoppers into these downtowns is where rail excels because its extreme spatial efficiency means that infrastructure with a relatively modest footprint can transport vast numbers of people into a small area.
None of this emerged from a coherent masterplan of transit-oriented development like Copenhagen’s Finger Plan or Curitiba’s Trinary System. Postwar Japanese opinion was committed to decentralization both to rural peripheries and to the suburbs through greenbelts, motorways, and new towns.
Instead, this variety and adaptability around railways is possible because of the way Japanese urban planning works. Since 1919, Japan has had a standardized national zoning system, but it is much more liberal than development control systems in Western countries. The Japanese authorities did not intend or even desire dense urban centers, but they did not prevent them, rather like nineteenth-century governments in the West.
This liberal zoning system is reinforced by private access to city planning powers. Thirty percent of Japan’s urban land has been subject to land readjustment, where agreement among two thirds of residents and landowners in an area is enough to allow its replanning, including compulsorily taking and demolishing land for amenities and infrastructure. Initially land readjustment was used only to assemble rural land for urbanization, but over time it was increasingly used to redevelop already urbanized areas, and new variants were created to build the skyscrapers that surround the major stations of central Tokyo.
The history of the private railway companies could be written as a story of land readjustment projects: the initial building of the lines in the interwar years proceeded through one land readjustment project after another. Postwar improvements such as double-tracking, platform lengthening, and constant redevelopment of stations and their immediate thresholds were only possible because the railways could secure land takings cooperatively with local businesses and landowners.
Perhaps the greatest example of this phenomenon involved Tokyu. In 1953 the company decided to build the Den’en Toshi Line, or Garden City Line, to serve a rural area southwest of Tokyo. This would be enabled by a series of land readjustment projects collectively among the largest in Japanese history.
Over 30 years, 3,100 hectares were covered, of which only 36 percent was devoted to residential and commercial development, with 20 percent for forest and parks, 17 percent for roads, and much of the rest for watercourses. The population of the land readjustment zone would rise from 42,000 in 1954 to over 500,000 in 2003.
By connecting the affluent southwestern suburbs to Tokyu’s main real estate hub next to Shibuya station, now the second busiest in the world, the Den’en Toshi Line allowed Tokyu to become the largest private railway by revenue and ridership. The Japanese government and academics generally consider the Den’en Toshi Line to be the best corridor of transit-oriented development in Japan.
But the railway-as-city-builder model is not the only reason Japanese railways have been able to thrive. European countries usually prohibited railways from running real estate side businesses, but in the United States and Canada the practice was extremely widespread in the nineteenth and early twentieth centuries, and many famous railway suburbs were developed this way. Despite this, passenger rail in these countries collapsed in the mid-twentieth century. Part of the difference was that Japan did not extend the same implicit subsidies to cars as Western governments did.
The land of Toyota, Nissan, and Honda is not an anti-car nirvana. In fact, Japan has excellent motorways, and across the country as a whole a small majority of journeys are made by car. But Japan is a place where cars and car-oriented lifestyles compete on a level playing field.
Japan is one of the few countries to have privatized parking. In Europe and North America, vast quantities of parking space are socialized: municipalities own the streets and allow people to park on them at low or zero cost. Initially with the intention of encouraging the provision of more parking spaces, Japan made it illegal to park on public roads or pavements without special permission. Before someone buys a car, they must prove that they have a reserved night-time space on private land, either owned or leased.
Since parking on public land is banned, municipalities are not worried about overspill parking from developments with inadequate private parking. They therefore have no reason to impose parking minimums on developments: the market is left to decide whether parking is the most valuable use of private land. Where land is abundant, as in rural areas, suburbs, or small towns, private parking is plentiful. But in city centers, it is outcompeted by other land uses. According to the late Donald Shoup, central Tokyo has 23 parking spaces per hectare and 0.04 parking spaces per job, compared with 263 and 0.52 for Los Angeles. Even Manhattan, the densest urban area in North America with the lowest levels of car ownership, has about 60 parking spaces per hectare.
Japanese roads are expected to be self-financing. Motorways are run by self-contained public cooperatives, very similar to the statutory authorities that ran English roads and canals between 1660 and the late 1800s, and funded by tolls on their users. Vehicle registration taxes, which are allocated to localities for road construction and maintenance, are worth three percent of the Japanese government budget.
These measures, adopted in the 1950s, were not intended to suppress car use — the point was to fund a massive road expansion — but they have forced private vehicles to internalize many of their hidden costs. In the Tokyo urban area, the average household spends 71,000 yen ($450) each year on public transport fares and 210,000 yen ($1,350) on car purchase and maintenance costs.
But the private car was not the only competitor faced by the private railways. For eight decades in the twentieth century, they also had to face the juggernaut of Japanese National Railways. Its privatization in 1987 removed the final obstacle to creating the world’s best railway system.
Railway privatization in Britain, New Zealand, Argentina, and Sweden has had a mixed reception, and all of those countries, apart from Sweden, have taken steps to reverse it. In Japan, it has been so successful that the government subsequently privatized the metro systems in Tokyo and Osaka.
In the postwar period, JNR enjoyed real successes. It built the revolutionary Shinkansen, the first high-speed railway in the world. It also aggressively electrified and double-tracked major trunk lines, quadruple-tracked lines into and out of major cities, and added city-center loops and freight bypasses. But these achievements were overshadowed by two problems.
The first was politics. Many countries adapted to the rise of the car by closing the least profitable parts of their passenger rail network, like the consolidation of American freight rail into the Class I operators or the Beeching Axe in Britain. In Japan, however, the ruling Liberal Democratic Party drew its support from rural constituencies, whose support it retained with pork-barrel politics. Its ‘rail tribe’ group, led by rural MPs, prevented JNR from adapting itself to mass motorization.
JNR therefore did not amputate gangrenous rural and freight services that imposed heavy costs with few benefits. Worse, it continued to build new loss-making rural railway lines, known in Japanese as gaden intetsu — ‘pulling the railway into one’s own rice field’.
The second problem was organized labor. In general, Japanese trade unions are known for their moderation and responsibility, a generalization that also held true for the unions at the legacy private railways. The JNR unions, however, became highly militant, secure in the knowledge that their nationalized employers could never go bankrupt. Their largest series of strikes in 1973 provoked riots from commuters.
The railway unions imposed overstaffing on revenue-generating urban services, at a time when both international and private domestic operators were reducing staffing requirements against a backdrop of higher wages and the growing automation of signaling and ticketing. As a result, 78 percent of JNR’s costs were related to labor, compared to 40 percent for other Japanese railways. The average worker at a private railway was 121 percent more productive than their JNR counterpart.
By the early 1980s, only seven out of 200 JNR lines made a profit. Successive governments deferred serious reform, running up debt, cutting down investments in new urban lines, raising ticket prices to twice those of comparable private railways, and increasing subsidies — which rose until annual subsidies equaled the total cost of the Shinkansen.
In 1982, Prime Minister Yasuhiro Nakasone started to privatize the railways. Unlike other countries, Japan simply returned to the traditional private railway model of the nineteenth and early twentieth centuries: tracks, trains, stations, and yards were owned by vertically integrated regional conglomerates.
There are substantial advantages to vertical integration. Railways are a closed system that has to be planned as a single unit. Changing the timetable at station A can affect the timetable at station Z; buying new trains that can travel faster might require changes to the infrastructure so they can reach their top speed, which in turn requires rewriting the timetables. This becomes especially complicated if different services share tracks. To prevent delays from propagating from one service to another, the timetable needs to be carefully designed to make best use of the available infrastructure.
The starkest effect of privatization was a massive and immediate increase in labor productivity and profitability relative to the legacy private railways. In fact, this began before privatization: its mere threat strengthened the government’s hand when bargaining with the unions and forced JNR to begin closing rural lines.
Privatization saw a general trend of productivity improvements, following a big one-time improvement between 1982 and 1990, when the workforce was cut by more than half, 83 loss-making lines were removed, and JNR’s debts were transferred to a holding company.
The second great advantage of privatization was to allow the JR companies to emulate the railway-as-city-builder model of the legacy private railways: for instance, JR East owns two shopping center brands, a ski resort, a coffee chain, and even a vending machine drink company. The JR companies have not ignored their rail business: they have continued to build new high-speed lines and urban tunnels, upgrade stations, and implement a host of other improvements such as the introduction in the 1990s of smart cards that allow passengers to pay their fare with a tap.
This does not mean that the Japanese railway industry is a pure creature of free enterprise. No railway system ever has been. The Japanese system has found an equilibrium that makes rail policy explicit and limited. Leaving aside railway safety and business regulation, there are two main policy levers: fare maximums and capital expansion subsidies.
Price controls are often cited as a classic example of misguided government intervention, whether through rent controls, caps on the price of gasoline, wage freezes, or minimum agricultural prices. Tokyo’s infamously crammed trains are a symptom of underpriced rush hour traffic.
Railways have market power because the substitutes for railway trips – coaches, cars and planes – are quite a different product. This monopolistic position has historically meant trouble: monopoly systems, whether private or public, have a tendency to abuse their position to charge higher prices and run bad services. For this reason, the private monopolies that were common in the Western world before World War I often had price controls imposed on them. For example, most of the American streetcar networks were operated as long-term, price-controlled franchises granted by the city.
Price maximums, if set too low, could have ruined Japan’s railways. This is exactly what happened to many Western transit services after the First World War. But the postwar Japanese practice has capped fares generously. The system is explicitly designed to maintain profitability per rider, which in turn incentivizes the companies to maximize ridership. That buys political legitimacy for the privatized system, which is necessary for the continued provision of capital expansion subsidies. Indeed, during the long deflation era between 1992 and 2022, it was common for operators to charge below the maximum, and the real value of railway fares continued to rise. Fare maximums are set on the basis of the average cost structures of all railway operators in a region, so companies with below-average costs like Tokyu would often charge below the cap to maintain a competitive edge, prevent public backlash, and maximize traffic to their side-businesses.
Other than the fare maximums, the railways are free to make their own decisions about timetables, service patterns and day-to-day operations, a highly specialized and technical task which requires deep expertise. This contrasts with the government meddling with, say, Amtrak’s routes.
Carefully designed public subsidies also play a useful role. Although Japanese railways do not receive subsidies for day-to-day operations, they do receive government loans and grants for capital investments. These are typically tied to public priorities, such as disability access or earthquake-proofing, or to projects that have large spillovers that the railway company would be unable to internalize, like removing level crossings, or elevating at-grade railways or trams in order to reduce road congestion and accident risk. Generally, the local prefectural government will match the contribution of the national government. Larger new-build projects are subject to leaseback or debt-repayment conditions under which fare revenue is expected to pay back the cost.
Railway companies invested heavily in real estate businesses, often funding lines through selling land for housing around new stations. Liberal spatial policy meant that such development happened easily, even as it enabled dense development in urban cores where radial rail lines converged. Rail companies were generally vertically integrated regional monopolies, owning the land, track, and rolling stock, setting their own timetables, and employing their staff. The state imposed controls to stop them exploiting their monopoly position, but it did so cautiously, allowing them to make sufficient profit that incentives to invest were preserved. Capital subsidies were targeted at providing specific public goods that normal commercial operations overlooked.
The above paragraph could be written by a historian of the future about contemporary Japan. But every word in it could also be written by a historian today about the United States in the nineteenth century — usually seen as the epitome of capitalist individualism. This striking fact contradicts the idea that America’s supposed individualism foreordains it to be the land of the car, or that Japan’s supposed communitarianism foreordained it to be the land of rail.
It also puts pressure on the idea that the demise of rail is the inevitable consequence of cars. All countries saw some shift to cars in the twentieth century, and all rail industries had to respond to that. But public policy had an enormous effect on how successfully they did so. The rise of zoning restrictions on density, excessive price controls, nationalization, and vertically disintegrated privatization have hampered Western rail in remaining competitive against cars since the 1920s. By maintaining and restoring the institutions that built the first railway systems in the nineteenth century, the Japanese have created the mightiest railway system of the twenty-first.
...
Read the original on worksinprogress.co »
In 2025, the Kdenlive team continued grinding to push the project forward through steady development, collaboration, and community support. Over the past year we’ve found a nice balance between adding new features, bug fixing, polishing the user interface, and improving performance and workflow, with stability taking priority over feature creep. We relaunched the website with a new content management system, refreshed some content and the design, and restored historic content dating back to 2002. We also strengthened upstream collaboration with the MLT developers and contributed several improvements to OpenTimelineIO. Here’s a look at what we’ve been up to and what is ahead.

As part of KDE Apps, we follow the KDE Gear release cycle, with three major releases each year—in April, August, and December—each followed by three point maintenance releases.

The first release of the year added a powerful automatic masking tool and brought the last batch of features from our last fundraiser. The new Object Segmentation plugin, based on the SAM2 model, allows you to remove any selected object from the background. We rewrote our OpenTimelineIO import and export using the C++ library, so you can now exchange projects with other editing applications that support this open source file format. Audio waveform generation got a 300% performance boost, along with a refactored sampling method that accurately renders the audio signal and higher-resolution waveforms for greater precision.

The following release focused heavily on stabilization, bringing over 300 commits and fixing more than 15 crashes. Instead of major new features, the effort went into polishing and bug fixing. We redesigned the audio mixer, bringing clearer visuals and level thresholds, and did some code refactoring and cleanup.
This change fixes issues with HiDPI displays with fractional scaling. Guides and Markers got a major overhaul this release to improve project organization. The titler also received some much-needed love: improved SVG and image support with the ability to move and resize items, center resize with Shift + Drag, and the Pattern tab was renamed to Templates, with the templates dropdown moved into it.

The focus of the December release cycle was on improving the user experience and polishing the user interface. We added a new first-run launch screen for first-time users, as well as a Welcome Screen that lets you easily launch recent projects. We added a new, more flexible docking system that lets you group widgets, show or hide them on demand, and save layouts as separate files that can be shared or stored within projects. The audio waveform in the Project Monitor got a revamped interface with an added minimap.

The next release is just around the corner and brings a nice batch of nifty new features like monitor mirroring and animated transition previews, making it much easier to visualize how transitions will look before applying them. Additionally, dropping a transition onto the timeline can now automatically adjust its duration to match the clips above and below, saving time and reducing manual tweaking. Monitor mirroring lets you mirror any monitor while working in fullscreen mode; it’s especially useful when working with multiple displays or collaborating with others in the editing room. Other additions:

* Change the playback speed of multiple clips at once
* Import a clip directly from the timeline context menu and insert it at the click position
* Option to always zoom toward the mouse position instead of the timeline playhead

Our roadmap is constantly being reviewed and updated, and some of the upcoming highlights include implementing the new features in MLT, the multimedia framework which powers Kdenlive.
Some exciting upcoming features include 10/12-bit color support, playback (decoding) optimizations, and OpenFX support (shoutout to a Kdenlive community member for leading this effort). Also expected is a refactoring of the subtitle system, as well as continued development of the Advanced Trimming Tools.

We are currently working on refactoring the keyframing system and implementing a Dopesheet: a dedicated timeline for managing and viewing keyframes from multiple effects simultaneously. This work will also introduce per-parameter keyframing (currently, once you add a keyframe to an effect, it is applied to all parameters by default). More info can be found in the last status report. This work is made possible through an NGI Zero Commons grant via NLnet.

We have been working on enabling and fixing multiple modules in MLT to compile with MSVC, which will allow us to ship Kdenlive in the Microsoft Store soon. Another advantage is that it will let us run unit tests on our Windows CI.

Currently, the Kdenlive core team is made up of 8 active members, including 2 developers. In 2025, 38 people contributed code to Kdenlive (including the core dev team and other KDE devs), a truly impressive number! Even more exciting, about half of them were first-time contributors, which is always great. We hope to see many of them continue contributing in the future. On behalf of the Kdenlive team, we salute you all! Note that these numbers refer specifically to contributions to the Kdenlive application; other projects such as the test suite and website are hosted in separate repositories and are not included in these figures.

In February, part of the Kdenlive core team met in Amsterdam for a short sprint, highlighted by a visit to the Blender Foundation, where we met with Francesco Siddi, who shared valuable insights into Blender’s history and offered advice on product management for Kdenlive.
We also attended their weekly open session, where artists and developers present progress on ongoing projects. During the sprint, we discussed and advanced several technical topics; one highlight was finishing an MLT Framework patch to enable rendering without a display server (needed for Flatpak testing).

The Berlin sprint was one of our most productive gatherings to date. Most of the team was there in person, and we also connected online with those who couldn’t make it. We discussed just about every aspect of the project, from roadmap planning to upcoming features and workflow improvements. Some of the highlights:

* Evaluated the current state of the Titler and discussed possible integration with Glaxnimate
* Developed a proof of concept for using KDDockWidgets
* Redesigned and started development of the audio clip view in the Clip Monitor

Thanks to the nice folks at c-base who kindly hosted us.

Akademy is always a great opportunity to exchange ideas with the broader KDE and Qt communities. One of the highlights was meeting the maintainer of Glaxnimate, with whom we discussed common goals and ways to collaborate. This year, Akademy will be in Graz on 19-24 September, and we hope to see you there.

We’re very happy to see more YouTube channels talking about Kdenlive. Here are some examples of what the community has been creating. We’d love to see what you’ve been working on in the past year; share your video productions in the comments!

Help us grow the community by organizing meetups, talks, or workshops in your local area. Don’t hesitate to contact us if you need guidance, materials, or support to get started. Below are photos from a workshop with indigenous communities in Paraguay.

Kdenlive was downloaded 11,500,714 times from our download page in 2025.
Do note that many additional installs happen through Linux distribution package managers, the Snap Store, Flathub, and other third-party servers, where statistics are not always available or reliably measurable. The Flatpak package from Flathub gets 41,499 downloads per month, and version 25.04.2 got the most downloads. (To the 5 of you in Antarctica, let us know what you are editing. ;))

Ever since our last, and very successful, fundraiser in 2022, we haven’t actively asked for donations, yet the community has continued to support us. We are very grateful for that. In 2025, we received a total of €9,344.80 in donations (down from €11,526.61 in 2024). Around 30% of that amount came from donors who kindly set up a recurring plan. The average donation was about €25, with the lowest amount being €10 and the highest €500. We allocate 20% of our budget to KDE e.V. to support infrastructure costs (servers and related expenses), as well as administration, legal support, and travel. As in previous years, your contributions enable us to continue supporting Jean-Baptiste (Kdenlive’s maintainer), allowing him to dedicate several days each month to Kdenlive in addition to his volunteer work.

WE NEED YOUR SUPPORT. Kdenlive needs your support to keep growing and improving. If just a quarter of the people who downloaded Kdenlive in 2025 contributed €5, our maintainers would be able to dedicate more time to the project, and it would even allow us to hire more developers to speed up development and improve stability. Small amounts can make a big difference; please consider making a donation. You may also contribute by getting involved and helping in:
...
Read the original on kdenlive.org »
...
Read the original on www.righto.com »
Computer chips that cram billions of electronic devices into a few square inches have powered the digital economy and transformed the world. Scientists may be on the cusp of launching a similar technological revolution — this time using light.
In a significant advance toward that goal, National Institute of Standards and Technology (NIST) scientists and collaborators have pioneered a way to make integrated circuits for light by depositing complex patterns of specialized materials onto silicon wafers. These so-called photonics chips use optical devices such as lasers, waveguides, filters and switches to shuttle light around and process information. The new advance could provide a big boost for emerging technologies such as artificial intelligence, quantum computers and optical atomic clocks.
Making circuitry for light as powerful and ubiquitous as circuitry for electrons is one of today’s technological frontiers, says Scott Papp, a NIST physicist whose group led the research, published this week in Nature. “We’re learning to make complex circuits with many functions, cutting across many application areas.”
When it comes to information transfer and processing, light can do things that electricity can’t. Photons — particles of light — are far zippier than electrons at working their way through circuits.
Laser light is also essential for controlling powerful, emerging quantum technologies such as optical atomic clocks and quantum computers.
But several hurdles remain before integrated photonics can truly hit its stride. One involves lasers. High-quality, compact and efficient lasers exist in only a few wavelengths, or colors, of light. For example, semiconductor lasers are very good at generating infrared light with a wavelength of 980 nanometers, or billionths of a meter — a color just outside the range of human vision.
Emerging technologies such as optical atomic clocks and quantum computers need laser light in many other colors as well. The lasers that produce those colors are big, costly and power-hungry, effectively confining these quantum technologies to a handful of special-purpose labs.
By integrating lasers into circuits on chips, scientists hope to help quantum technologies become cheaper and more portable, so they can start to fulfill their vast promise.
The new NIST photonics chip is a bit like a layer cake. NIST physicists Papp and Grant Brodnik, along with colleagues, started with a standard wafer of silicon coated with silicon dioxide (glass) and lithium niobate, a so-called nonlinear material that can change the color of light coming into it.
The researchers then added pieces of metal to electrically control how the circuits convert one color of light to others. The scientists also created other metal-lithium niobate interfaces that allowed them to rapidly turn light on and off within the circuits — a crucial ability for data processing and high-speed routing.
The icing on the cake, so to speak, was a second nonlinear material called tantalum pentoxide, or tantala. Tantala can transform light in ways that feel like magic, taking in a single laser color and putting out the full rainbow of visible light colors plus a wide range of infrared wavelengths. Papp and colleagues have spent years developing techniques to fabricate circuits out of tantala without heating it up, allowing the material to be deposited onto other materials without damaging them.
By patterning the different materials on top of each other in a three-dimensional stack, the researchers produced a single chip that efficiently routes light between layers. That allowed them to merge the light-manipulating wizardry of tantala with the controllability of lithium niobate. The new technique “allows seamless integration,” says Brodnik. “The real power is that tantala can be added to existing circuitry.”
Ultimately, the researchers were able to fit roughly 50 fingernail-sized chips containing 10,000 photonic circuits, each outputting a unique color, onto a wafer roughly the size of a beer coaster. “We can create all these different colors, just by designing circuits,” says Papp.
Quantum technologies such as clocks and computers could be among the biggest beneficiaries of integrated photonics. These devices often use arrays of atoms to store and process information. For each type of atom, physicists need lasers tailored to the atom’s internal quantum energy levels. For example, rubidium atoms, commonly used in quantum computers and clocks, respond to red light with a wavelength of 780 nanometers. Strontium atoms, another popular choice, “see” blue light at 461 nanometers. Shine other colors on the atoms and nothing happens.
The bulky, costly and complicated lasers needed to produce these bespoke colors have been a major hindrance to getting quantum computers and optical clocks out of the lab and into the field, where they could have big impacts. Cheap, low-power, portable optical clocks, for example, could help predict volcanic eruptions and earthquakes, offer an alternative to GPS for positioning and navigation, and help scientists investigate scientific mysteries such as the nature of dark matter. Quantum computers could offer new ways to study the physics and chemistry of drugs and materials.
Integrated photonic circuits aren’t just for quantum. Papp believes NIST’s photonics chips could help efficiently shuttle signals between the specialized chips used by tech firms, potentially making AI-based tools more powerful and efficient. Tech companies are also interested in using photonics to improve virtual reality displays.
While NIST’s chips aren’t yet ready for mass production, the technique used to create them provides a path forward, Papp and Brodnik say. The NIST scientists collaborated with experts at Octave Photonics, a Louisville, Colorado-based startup company founded by former NIST researchers that’s now working to scale up the technology.
“When you see the chip glowing in the lab, taking in invisible light and making all this visible light in one integrated chip — it’s obvious how many potential applications there could be,” says Papp.
...
Read the original on www.nist.gov »
I tried Claude Design yesterday and I have a theory for how this whole thing shakes out.
As product teams scaled and design needed to justify itself inside engineering orgs, it was pushed toward systematization — and Figma invented its own primitives to make that work: components, styles, variables, props, and so on. Some concepts are borrowed from programming, some aren’t, and the whole thing doesn’t neatly map onto anything. Guidance evolves, migrations pile up, and if you want to automate any of it you’re stuck with a handful of shoddy plugins. The beast is hairy enough that entire design roles now specialize in wrangling the system itself.
There’s always been a tense push-pull between Figma and code over what the source of truth should be. Figma won over Sketch partially by staking its claim there — their tooling would be canonical.
That victory had a hidden cost. By nature of having a locked-down, largely-undocumented format that’s painful to work with programmatically, Figma accidentally excluded themselves from the training data that would have made them relevant in the agentic era. LLMs were trained on code, not Figma primitives, so models never learned them. As code becomes easier for designers to write and agents keep improving, the source of truth will naturally migrate back to code. And all the baroque infrastructure Figma had to introduce over the past decade will look nuts by comparison. Why fuss around in a lossy approximation of the thing when you can work directly in the medium where it will actually live? If we want to make pottery, why are we painting watercolors of the pot instead of just throwing the clay?
At work, we’ve spent quite a bit of time back-porting design changes made directly in code back to Figma and it is not fun. I can’t share that file, but for a fair comparison, this is Figma’s own design system file for their product. I have to assume it was built by the most competent design system team you can find. And yet…
These are Figma’s own files. Built by their own team. This is the gold standard.
Imagine debugging a color that looks wrong. You check the component. The component uses a variable. The variable is aliased to another variable. That variable references a mode. The mode is overridden at the instance level. The instance lives inside a nested component with a library swap applied. At this point, you’re either considering picking up code or moving to the countryside and becoming a sheep farmer because one more minute of this will make you lose your goddamn mind.
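The alias-chasing described above can be sketched in a few lines of code. This is a purely hypothetical resolution routine — the names, data layout, and `alias:` convention are invented for illustration and are not Figma’s actual data model or API — but it shows why each extra layer of indirection makes the final value harder to trace by hand:

```python
# Hypothetical sketch of design-token resolution (invented structure,
# not Figma's real data model): each lookup may point at another alias.
def resolve(variables, name, mode, overrides=None, _seen=None):
    """Follow alias chains and mode tables until a concrete value appears."""
    _seen = _seen if _seen is not None else set()
    if name in _seen:
        raise ValueError(f"circular alias: {name}")
    _seen.add(name)
    # Instance-level overrides win over everything else.
    if overrides and name in overrides:
        return overrides[name]
    modes = variables[name]
    value = modes.get(mode, modes.get("default"))
    if isinstance(value, str) and value.startswith("alias:"):
        # One more hop: the "value" is just a pointer to another variable.
        return resolve(variables, value[len("alias:"):], mode, overrides, _seen)
    return value

variables = {
    "button/bg":     {"default": "alias:color/primary"},
    "color/primary": {"light": "alias:brand/blue", "dark": "#4dabf7"},
    "brand/blue":    {"default": "#1c7ed6"},
}

print(resolve(variables, "button/bg", "light"))  # three hops to find one hex code
print(resolve(variables, "button/bg", "dark", overrides={"button/bg": "#ff0000"}))
```

Even in this toy version, answering "why is this button that color?" means walking the whole chain; in a real file, nothing shows you the full chain at once.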
So as the source of truth shifts back to code, Figma is left in an odd spot: holding a largely manual, pre-agentic system that nobody in their right mind would design from scratch today.
I think design tooling forks into two distinct shapes from here — and there’s almost a clock resetting between Figma and every other tool competing to answer the same question they answered in 2016: who can help me, a designer, get my ideas out fastest?
Spoiler: it’s not Figma Make. Figma Make feels like it primarily benefits people who have already drunk the Kool-Aid — it reads from Figma styles, component libraries, and proprietary props (or, as I like to call them, Prop Props), and it’s the only tool in this new landscape still pretending the design file is canonical. It’s the tool for people who want to (or have no choice but to) stay inside the system.
Claude Design is the first of those two tools, and takes the opposite bet. There’s an Arts and Crafts principle called “truth to materials” — the idea that a thing should be honest about what it is and how it’s made, rather than masquerading as something else. Figma ended up being the opposite of this: a set of extremely rigid schemas with a free-form “just vibes, man” costume over the top. Like a Type-A personality physically incapable of relaxing, forced to perform chill while internally screaming that your frames aren’t nested and your tokens are detached and nothing is on the grid. Claude Design, for all its roughness, is at least honest about what it is: HTML and JS all the way down.
And it has a massive structural advantage: its sibling is Claude Code. Eventually, I can see Claude Design just dumping things directly into Claude Code and vice versa. Claude Design’s onboarding already lets you import your repos. The feedback loop between design and implementation — which has been a source of friction since the beginning of time — becomes a single conversation.
The other tool that emerges from this moment will have no expectation of code at all. It’ll be a pure exploration environment — somewhere to drop rectangles, stack layer styles, fuss with blend modes and gradients, and go completely nuts, unconstrained by systems or prompting conventions. Maybe it’s an iPad app with Pencil support where you just quickly sketch a bunch of rectangles. 37signals could do something really funny right now. Or maybe it goes in the opposite direction — something more like Photoshop that goes all-in on high-fidelity compositing and lets our imaginations run wild, now that we’re no longer beholden to the ceiling of what you can do with CSS effects. Doesn’t it seem kinda weird how for 90% of its life, Figma’s only layer effect was a drop shadow or a blur?
Figma’s Sketch moment is rapidly approaching. And if you said that sentence to a Victorian child, they would probably have a stroke.
The following are messages meant only for the teams behind Sketch and Figma. If neither apply to you, you can skedaddle.
To Figma: I can see a world where this post does numbers in the Figma internal Slack. If that’s the case and you’re reading this from Figma: this wouldn’t have happened if you hired me last year when I was interviewing. Your loss, big dawg.
To Sketch: GET YOUR HEADS OUTTA YOUR ASSES AND GIVE EM HELL. ADD PARTICLE EFFECTS. ADD DEBOSSING EFFECTS. MESH TRANSFORMS. FUCK IT, ADD METAL SHADERS. GO NUTS. STOP COASTING OFF OF BEING MAC NATIVE. QUIT DRINKING COCOA AND GET THIRSTY FOR BLOOD.
To mom: Sorry for cursing.
@jonnyburch on Twitter shared a link to their blog post with similar thoughts, it’s quite good if you wanna go deeper.
...
Read the original on samhenri.gold »
Sixteen bettors made more than $100,000 each by accurately predicting the timing of US airstrikes against Iran, placing their wagers on 27 February, the night before the strikes. Later, a single user would make over $550,000 after betting that Ayatollah Ali Khamenei would be removed from power, just moments before his assassination by Israeli forces. On 7 April, right before Donald Trump announced a temporary ceasefire with Iran, traders bet $950m that oil prices would come down. They did.
These bets and other well-timed wagers accurately predicted the precise timing of major developments in the US-Israel war with Iran, creating huge windfalls and raising concerns among lawmakers and experts over potential insider trading.
Betting — once largely siloed to sporting events — has now spread to include contracts on news events where insider information could give some traders an advantage.
The proliferation of online betting markets like Polymarket and Kalshi has allowed bets on virtually any news event. It’s also easier than ever to buy commodity derivatives like oil futures, where traders gamble on what the price of oil will be in the future.
Leaders of some US federal agencies and some members of Congress said they want to crack down on suspicious trading taking place across different marketplaces, but it’s unclear how much headway regulators will make.
“Is the problem that we don’t have legislation or that we don’t have enforcement capabilities?” said Joshua Mitts, a law professor at Columbia University. “To have a law that can’t really be enforced effectively given the technological limitations, it’s sort of putting the cart before the horse.”
On the night of 27 February, the day before the US and Israel would carry out strikes on Iran, about 150 Polymarket accounts placed an unusual flurry of bets that the US would strike Iran the next day. A New York Times analysis found the bets totaled $855,000, with 16 accounts pocketing more than $100,000 each.
Soon after, a single anonymous Polymarket user, under an account named “Magamyman”, made over $553,000 after betting that Khamenei would be “removed” from power just moments before he was killed by an Israeli airstrike, according to a complaint filed to the Commodity Futures Trading Commission (CFTC), the federal agency that regulates futures markets, by Public Citizen, a consumer advocacy group. The complaint also cites a crypto-analytics firm that identified six “suspected insiders” who made a total of $1.2m on Polymarket after Khamenei was killed.
The well-timed surge of wagers was seen again on 7 April, when at least 50 Polymarket accounts placed bets that the US and Iran would reach a ceasefire, hours before Trump announced it in a Truth Social post. Earlier, the president had said “a whole civilization will die tonight” if Iran did not open the strait of Hormuz.
But traders weren’t just active on Polymarket: there were similar surges of oil futures trading activity just hours before Trump announced updates to the conflict that would lower oil prices.
On 23 March, traders placed $580m in bets on the oil futures market just 15 minutes before Trump said on social media that the US was having “productive” talks with Iran, according to the Financial Times. The traders made a windfall after Trump’s comments triggered a sell-off in the oil markets that made oil prices plummet.
The same thing happened again on 7 April, this time when traders spent $950m on oil futures, betting that the price of oil would fall just hours before the ceasefire with Iran was announced.
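To make the mechanics concrete: "betting that the price of oil would fall" here means selling, or shorting, oil futures. The sketch below shows the payoff arithmetic using invented numbers — the actual positions and prices behind the trades reported above are not public — under the standard convention that one WTI crude contract covers 1,000 barrels:

```python
# Simplified P&L for a short futures position, with invented example numbers.
# Nothing here reflects the real (non-public) positions behind the reported
# trades; it only illustrates why a price drop pays a short seller.
BARRELS_PER_CONTRACT = 1_000  # standard WTI crude futures contract size

def short_futures_pnl(entry_price, exit_price, contracts):
    """A short seller profits when the price falls below the entry price."""
    return (entry_price - exit_price) * BARRELS_PER_CONTRACT * contracts

# Short 100 contracts at $72/barrel; a ceasefire headline knocks oil to $67.
print(short_futures_pnl(72.0, 67.0, 100))  # a $5 drop across 100,000 barrels
```

Scaled up to hundreds of millions of dollars in contracts, even a modest price move in the predicted direction produces the kind of windfall the article describes.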
“We can’t say from the outset whether any of these trades were illegal. Any one of them could be lucky, and any one of them could be based on lawful information,” said Andrew Verstein, a law professor at the University of California at Los Angeles. “But many of them bear the hallmarks of suspicious trades that would naturally warrant investigation.”
For those who closely follow trading patterns, the rush of activity before these events seems too large to be explained by luck alone.
“Not only the timing, but the amount of these bets makes it look very likely that someone had insider knowledge … and placed very, very substantial bets on it,” said Craig Holman, a government affairs lobbyist for Public Citizen who filed the group’s complaint to the CFTC.
Holman said he is skeptical about how bold the CFTC will be in its investigations given its current structure under the Trump administration. The commission typically has five bipartisan members that are appointed by the president. Now, the CFTC has only one commissioner: Michael Selig, whom Trump appointed at the end of 2025 and who has positioned himself as friendly toward prediction markets.
Over the last few months, the CFTC has been embroiled in fights with state legislatures that argue regulation of these online betting marketplaces belongs to the states.
Kalshi, Polymarket’s competitor, was temporarily banned in Nevada after the state sued the company for offering contracts in the state without a gambling license. Arizona, meanwhile, filed criminal charges against the company for allowing people to place bets on elections. In both cases, Kalshi denied any wrongdoing and has argued that the CFTC has exclusive jurisdiction over online prediction markets.
“It’s a wild west phase, when we’re talking about the prediction market industry, and now it’s spilled over into the stock market as well,” Holman said.
Anonymous sources told Reuters and Bloomberg that the CFTC launched an investigation into the oil futures trades that were placed on 27 March and 7 April, though the agency has not publicly announced it is conducting an investigation.
Speaking to Congress this week, Selig said that the agency is prepared to go after those who are suspected of insider trading, warning “we will find you and you will face the full force of the law”, but said that the commission would not issue any new regulations until it had five seated commissioners.
Polymarket did not respond to a request for comment. In a statement, White House spokesperson Davis Ingle said “federal employees are subject to government ethics guidelines that prohibit the use of nonpublic information for financial benefit”.
“Any implication that administration officials are engaged in such activity without evidence is baseless and irresponsible reporting,” Ingle said. “The CFTC will always uphold its duty to monitor fraud, manipulation and illicit activity daily.”
Federal law prohibits government employees, including those working for Congress or the White House, from using non-public information for personal profit.
In late March, a bipartisan group of representatives introduced a bill that would ban members of Congress and senior staff within the federal government from participating in prediction market contracts related to political events or policy decisions.
But experts warn that insider trading law is complicated, and the new technology that makes it easier to place bets online leaves a paper trail that can be hard to follow.
Historically, insider trading takes place when a person uses exclusive information about a company to buy or sell stocks right before that information becomes public. These types of illegal trades are policed by the Securities and Exchange Commission (SEC), which regulates the stock exchanges.
Insider futures trading could be seen as a subset of this typical insider trading, but the territory is new.
“The trick is that there are essentially no clean cases of people getting in trouble for commodity futures insider trading,” Verstein said. “The law there is just not well-developed.”
In a paper published last month, Mitts, the Columbia law professor, and other researchers screened more than 200,000 “suspicious wallet-market pairs” from February 2024 to February 2026 and found that traders in this group achieved a nearly 70% win rate, making $143m in well-timed bets tied to everything from the capture of former Venezuelan leader Nicolás Maduro to Taylor Swift’s engagement to Travis Kelce. The paper notes that informed traders face fewer legal constraints by trading on platforms like Polymarket or Kalshi because these markets still operate in a legal gray area.
“The challenge here is that this trading is occurring through the blockchain or other anonymized means, so it is going to be quite difficult for a regulator enforcement authority or prosecutor to determine the identity of the trader,” Mitts said. “They would also have to prove the trader traded on the basis of information that had been wrongly misappropriated.”
But the stakes are high. Insider trading involving classified military information can lead to distrust of both markets and governments.
“Unlike corporate insider trading, there’s a lot of ways for the government to make itself be correct. You can just make the war that would occur, and that’s concerning because then the real economy is being distorted,” Verstein said. “Real decisions, including perhaps financial decisions, are being distorted by financial bets.”
...
Read the original on www.theguardian.com »
The scene is right out of the 1950s with students pecking away at manual typewriters, the machines dinging at the end of each line.
Once each semester, Grit Matthias Phelps, a German language instructor at Cornell University, introduces her students to the raw feeling of typing without online assistance. No screens, online dictionaries, spellcheckers or delete keys.
The exercise started in spring 2023 as Phelps grew frustrated with the reality that students were using generative AI and online translation platforms to churn out grammatically perfect assignments.
“What’s the point of me reading it if it’s already correct anyway, and you didn’t write it yourself? Could you produce it without your computer?” said Phelps.
She wanted students to understand what writing, thinking and classrooms were like before everything turned digital. So, she found a few dozen old manual typewriters in thrift shops and online marketplaces, and created what her syllabus calls an “analog” assignment.
It might be premature to say that typewriters are making a comeback beyond Cornell’s campus. But the revival is part of a national trend toward old-school testing methods like in-class pen-and-paper exams and oral tests to prevent AI use for assignments on laptops.
Typewriters bring ‘old days’ taste of doing one thing at a time
Students arrived for class on a recent analog day to find typewriters at the desks, some with German and some with QWERTY keyboards.
“I was so confused. I had no idea what was happening. I’d seen typewriters in movies, but they don’t tell you how a typewriter works,” said Catherine Mong, 19, a freshman in Phelps’ Intro to German class. “I didn’t know there was a whole science to using a typewriter.”
Like a rotary phone, the manual typewriter appears simple but is not intuitive to the smartphone generation. Phelps demonstrated how to feed the paper manually, striking the keys with force but not so hard the letters would smudge. She explained that the dinging bell signifies the end of a line and the need to manually return the carriage to start the next line. (“Oh,” said one student, “that’s why it’s called ‘return.’”)
“Everything slows down. It’s like back in the old days when you really did one thing at a time. And there was joy in doing it,” said Phelps, who brings in her two children, aged 7 and 9, to serve as “tech support” and ensure no one has their phones out.
The assignment carries lessons beyond simply how to use a typewriter, which is the whole point.
“It dawned on me that the difference with typing on a typewriter is not just how you interact with the typewriter, but how you interact with the world around you,” said computer science major Ratchaphon Lertdamrongwong, a sophomore, whose class had to write a critique of a German movie they’d watched.
In the absence of screens, there are no notifications to distract you as you write. Without every answer readily available at his fingertips, he asked his classmates for help, which Phelps heartily encouraged.
“While writing the essay, I had to talk a lot more, socialize a lot more, which I guess was normal back then,” Lertdamrongwong said, referring to the typewriter era. “But it’s drastically different from how we interact within the classroom in modern times. People are always on a laptop, always on the phone.”
Without a delete key and the ability to correct every mistake, he paused to think more intentionally about his writing.
“This might sound bad, but I was forced to actually think about the problem on my own instead of delegating to AI or Google search,” he said.
Most students found their pinkies weren’t strong enough to touch-type, so they typed more slowly, pecking at the keyboard with their index fingers.
Mong, the freshman, faced the added challenge of a recently broken wrist, requiring her to use just one hand. The self-described perfectionist was initially frustrated with how messy her page looked with odd spacing between certain letters and misspellings. (Phelps told students to backspace and type ‘X’s over errors.)
“This thing I handed in had pencil marks all over it and definitely did not look clean or finished. But it’s part of the process of learning that you’re going to make mistakes,” said Mong, who found the assignment of typing a poem “fun and challenging.”
She embraced the odd spacing and played with the visual boundaries of the page to indent and fragment lines in the style of poet E. E. Cummings. It took several sheets of paper and many mistakes, all of which Mong saved.
“I’m probably going to hang them on my wall,” Mong said. “I’m kind of fascinated by typewriters. I told all my friends, I did a German test on a typewriter!”
The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
...
Read the original on sentinelcolorado.com »
To Mallerie and Christopher Pleasants, nothing felt “revolutionary” about the way they were raising their two kids. Then a stranger called child protective services.
It started last November in Atlanta. With school closed on Election Day, the couple’s 6-year-old son, Jake (not his real name), wanted to ride his scooter by himself to a nearby playground while Mallerie and Christopher worked their tech jobs from home. They had recently begun allowing Jake to play outside alone, and other kids and a group of parents working a charity drive would be waiting for him at the park.
Permission granted. Jake strapped on his helmet, got on his scooter, and rode one-third of a mile on a paved recreational path to the playground. On his way back, a woman stopped him. She asked for his name, age, and where he lived. “He felt like the woman was just demanding answers,” Mallerie says. “And then when she started following him, it scared him.”
Two days later, a caseworker from Georgia’s Division of Family and Children Services (DFCS) rang their doorbell.
The caseworker said Jake was too young to be on the path unsupervised. “How old does he need to be?” Christopher asked. “Like, 13,” she replied. He asked where that number came from. “I’ll have to look it up,” Christopher recalls her saying. When he pressed further, she opined that things aren’t like they used to be. “People are weirder now.”
“Then she informed me that she was going to go interview the kids at their schools — that she would come back later to look inside the house, make sure we had food, running water,” Christopher says.
The family didn’t lack basic necessities. But weeks later, they received a letter from the agency stating it had “substantiated” a finding of neglect against Mallerie. It was a letter they had long dreaded.
“My fear has never been that Jake will be unsafe being out there by himself,” Mallerie says. “My fear has always been that the state will intervene.”
The case wasn’t a bureaucratic fluke. It reflects a broader pattern: Vague child-neglect laws, combined with a culture that increasingly believes children need constant supervision, have expanded the government’s reach into once-ordinary parenting decisions, reshaping the boundaries of American childhood in the process.
That expanded reach sometimes ends in handcuffs. In 2024, a Georgia mother named Brittany Patterson was arrested after her then-10-year-old son walked a mile into town by himself. A sheriff’s deputy drove him home. Brittany chastised him — not for walking alone, but for not telling anyone where he was going. She thought that was the end of it, but later that night, deputies jailed her for reckless endangerment.
The case helped persuade Georgia legislators to pass a so-called “reasonable childhood independence” (RCI) law, enacted last summer. These laws are part of a national movement to tighten vague language in states’ neglect laws. Georgia’s old law, for instance, defined neglect as the failure to provide “proper” parental care. The new law replaces that with “necessary” care and sets a higher bar for neglect: Parents must demonstrate “blatant disregard” for their child’s safety — putting them in imminent, obvious danger. The law also explicitly states that allowing a reasonably capable child to walk to school or travel to a nearby park unsupervised does not, by itself, constitute neglect.
Since 2018, 11 states have passed some form of RCI legislation. The movement generally has bipartisan support, though it travels differently depending on the audience. Diane Redleaf, a family defense attorney, notes that in red states, arguments focused on government overreach tend to land best, while in blue states, the more persuasive case centers on equity — who can afford a babysitter, and whether neglect investigations fall disproportionately on families of color.
Mallerie and Christopher say they “felt empowered” by Georgia’s new law, which took effect four months before the scooter incident. The problem: DFCS didn’t seem to know the law existed when it began investigating Mallerie’s family, even though the law was designed to prevent reports like the one against them from being investigated in the first place.
When Mallerie raised the law with a DFCS supervisor, the response felt personal: Regardless of any law, how could you, as a mother, let your “baby” do that?
“Common sense has just gone out the window.”
Redleaf has spent years trying to fix the underlying system that makes such responses possible. “We’re not saying [concerned citizens shouldn’t] make the call,” says Redleaf, who works as a legal consultant for Let Grow, a nonprofit that supports childhood independence and helped draft Georgia’s law. “We are saying: Don’t go and investigate something that’s not neglect.”
Child welfare agencies field more than 4 million abuse and neglect reports each year — a number that has ballooned since 1974, when the Child Abuse Prevention and Treatment Act made certain federal funding contingent on states establishing reporting systems. The result has been state-run systems that absorb many reports but generally lack a mechanism to separate serious cases from those like Jake’s.
“Common sense has just gone out the window,” says David DeLugas, attorney for Mallerie and Christopher, and executive director of ParentsUSA, which advocates for parents’ rights. DeLugas suggests the screening process for child welfare agencies should function like triage in an emergency room. “Let’s first eliminate the ones that are undeserving of any attention,” he says. “And then for the ones left, let’s prioritize in terms of the imminency of the danger.”
The stakes for getting that triage right are real. About 2,000 children in the U.S. die each year from abuse or neglect. But the dangers that drive many parents to keep their kids indoors, and that prompt strangers to call in reports like Jake’s, are a different story.
If you search for statistics on missing children in the U.S., you’ll find the claim that 800,000 kids go missing each year: more than 1% of America’s 72 million children. It’s an old and misleading statistic. The number comes from a 1999 Department of Justice report that used surveys to estimate missing children cases nationwide under broad definitions, including everything from abductions to runaways to brief scares where a kid gets lost for a couple of hours.
Current FBI data shows about 350,000 juvenile missing person reports per year, most of which are resolved quickly and do not involve abduction. Of cases that do involve abduction, the vast majority are committed by someone the child knows — often a parent in a custody dispute — rather than a stranger.
Stranger kidnappings are exceptionally rare. They occur roughly 100 times per year, which works out to a 1-in-720,000 annual risk of a child being kidnapped — less likely than being struck by lightning at some point in their life. Couple these odds with decreasing violent crime rates over the past several decades in the U.S., and you might think today’s parents would be generally comfortable letting kids be outside on their own.
Maybe not. A Pew Research Center survey from 2022 found that about 60% of U.S. parents were “very” or “somewhat” concerned about their children being kidnapped, while a 2025 Harris Poll of kids ages 8 to 12 in the U.S. found that about two-thirds had never walked or biked to a nearby place without their parents. A similar portion said they wanted to spend more time playing with friends outside of adult supervision.
The risks of letting kids do things by themselves are real and easy to imagine. But keeping kids under constant supervision carries its own risks — ones that are subtler but perhaps no less consequential. As Mallerie puts it: “The risks of not trusting my child, not training them to be a responsible, accountable human being, far outweigh the risks of someone abducting them from the playground.”
Christopher frames it as a question of odds. He notes that driving a child to school carries its own dangers — car accidents kill far more children each year than stranger kidnappings — but nobody questions whether driving is worth it. “Nobody seems to be convinced by that argument because driving is a necessary part of life,” he says. “And I always tell people independence is a necessary part of life.”
The debates often boil down to a simple question: How old is old enough? “Parents probably know their kids better than anybody,” one self-described “helicopter grandparent” told Georgia’s 13WMAZ. “But I don’t believe that there’s a 7-year-old that’s mature enough to make a decision to walk to a store.”
As a kid in the early 1990s, Mallerie roamed Chicago with a level of freedom that would be “unthinkable” for children today. At 7, she was riding the train to school without her parents. She and her friends would bike the city streets, making a game out of getting lost in strange neighborhoods and finding their way back home.
Nobody called this a “free-range childhood” back then. It was just how everyone grew up, say Mallerie and Christopher. At least, it’s how the two of them grew up, and it’s how they’ve decided to raise Jake and their 4-year-old daughter. The aim isn’t to shoo the kids outside until dinnertime, but to raise “resilient, independent, capable children,” Mallerie says. “At the end of the day, we are raising people who are going to grow up, leave the nest, and we won’t be there every day to guide them.”
They started early. When Jake was 12 months old, Mallerie and Christopher taught him to clean up after himself by turning it into a game: dump Legos on the floor, then have him put them back in the box. Today, Jake folds his own laundry. “At six?” other parents sometimes ask. “I’m like, ‘Yes, it’s safe — he has hands,’” Mallerie says.
“We have been very intentional about, ‘Okay, what can we teach you? How can you show us that you’re ready? And then what independence can we give you that you deserve?’” says Mallerie, who holds a master’s in social work and has worked for child protective services.
“It feels like we are under more pressure as parents.”
The couple’s philosophy was shaped in part by two books. One was Free-Range Kids, a sort of manifesto against “helicopter parenting” and for age-appropriate childhood independence, by Lenore Skenazy, president of Let Grow. (Skenazy broke the story of Mallerie’s case for Reason.) If you read the headlines in 2008, you might remember Skenazy being dubbed “America’s worst mom” after she wrote about letting her 9-year-old ride the New York City subway by himself.
The couple was also “reinvigorated” after reading The Anxious Generation by social psychologist Jonathan Haidt — it claims that the rise of smartphones and social media in the 2010s has driven a “great rewiring of childhood,” fueling record rates of anxiety, depression, and other mental health problems among young people. Mallerie and Christopher already had clear views on that part. “We work in tech,” she says. “Our kids [aren’t] getting any cell phones, no smartphones, no Instagrams. I write the algorithms. I don’t want my kids to touch those algorithms.”
But what really spoke to them were Haidt’s views on the decline of childhood independence. He argues that children born since about 1995 have suffered from “overprotection in the real world and underprotection in the virtual world” as American childhood shifted from unstructured time outside to unstructured time online.
Mallerie likens the cultural pressure modern parents face to a kind of panic. “It feels like we are under more pressure as parents,” she says. “Our kids have to be perfect. They’ve got to be well-spoken, well-dressed, clean, polite. But they can’t do any of the things that they need to do to get those skills. So they can’t be outside. They can’t experience conflicts with kids and kind of fight and figure it out amongst themselves. They can’t walk to school.”
For nearly all of human history, unsupervised childhood was not a parenting philosophy. It was childhood. Peter Gray, a psychologist and researcher on child play, has described this shift bluntly: Children today are “less free” than at any point in human history, except for periods of childhood slavery or sweatshop labor.
In the U.S., the first half of the 20th century was “the golden age of unstructured play,” wrote the historian Howard Chudacoff. Child labor laws gave kids less work and more free time. Schools assigned less homework and didn’t take up as much of the year. And parents were generally more willing to let kids do things by themselves, not only play outside but also help out in the community.
Those back-in-my-day clichés about growing up in midcentury America — walking a mile to school, working a paper route, playing outside until the streetlights clicked on — paint a fairly accurate picture of a kind of American childhood that’s all but vanished.
“Every adult is like a little sentinel.”
What changed? In a 2023 article for Psychology Today, Gray proposed some factors that began reshaping parents’ attitudes and children’s behavior around the middle of the century: “the arrival of television, the rise of adult-directed kids’ sports, the gradual exclusion of kids from public spaces, the declining opportunities for gainful employment or meaningful contributions to the family economy, and, finally, the increased mandate that kids must be constantly monitored and protected.”
This shift may have had major consequences. In a 2023 paper published in the Journal of Pediatrics, Gray and his coauthors argued that the decline of children’s independent activity in recent decades is not only correlated with the concurrent rise of mental health problems among kids — it probably played a causal role, too. The authors wrote that allowing kids to play and do other self-directed activities builds “mental characteristics that provide a foundation for dealing effectively with the stresses of life.”
Mallerie says she can already see the consequences of the opposite approach in the generation coming of age around her. “We’ve got this new group of adults who are coming of age who have never been on a date, still live at home with their parents, [have] high suicide rates, high depression and anxiety rates,” she says. “That worries me more than the chance that my kids can become victims of crime ever could.”
In February, Mallerie and Christopher received a message from DFCS saying it had reversed its finding of neglect. The agency didn’t offer a reason but said it was working to educate its staff on Georgia’s RCI law. Mallerie asked DFCS whether it would expunge her record. In an email, an agency director said “records are not able to be ‘expunged,’” but that Mallerie could challenge the finding through an administrative review process. The case could still surface on certain background checks.
Mallerie described the investigation as one of the worst experiences of her life. Before the finding was reversed, she and Christopher stopped letting Jake play outside for about a month, fearing another report to DFCS could land Mallerie in jail. “Maybe our culture is going to get even more risk-averse,” she says. “I just feel like every adult is like a little sentinel. Like they’re going to spot us, and they’re going to report us if they see anything that they don’t agree with.”
This article is part of Big Think’s monthly issue The Roots of Resilience.
Editor’s note: This article was updated on April 2, 2026, to reflect that Lenore Skenazy broke the story of Mallerie’s case for Reason.
...
Read the original on bigthink.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.