10 interesting stories served every morning and every evening.
A real-world production migration from DigitalOcean to Hetzner dedicated, handling 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and live mobile app traffic — with zero downtime.
Running a software company in Turkey has become increasingly expensive over the last few years. Skyrocketing inflation and a dramatically weakening Turkish Lira against the US dollar have turned dollar-denominated infrastructure costs into a serious burden. A bill that felt manageable two years ago now hits very differently when the exchange rate has multiplied several times over.
Every month, we were paying $1,432 to DigitalOcean for a droplet with 192GB RAM, 32 vCPUs, 600GB SSD, two block volumes (1TB each), and backups enabled. The server was fine — but the price-to-performance ratio had stopped making sense.
Then we discovered the Hetzner AX162-R.
At $233 per month, that’s $14,388 saved per year, for a server that’s objectively more powerful in every dimension. The decision was easy.
I’ve been a DigitalOcean customer for nearly 8 years. They have a great product and I have no complaints about reliability or developer experience. But looking at those numbers now, I cannot help feeling a bit sad about all the extra money I left on the table over the years. If you are running steady-state workloads and not actively using DO’s ecosystem features, do yourself a favor and check dedicated server pricing before your next renewal.
* Several live mobile apps serving hundreds of thousands of users
Old server: CentOS 7 — long past its end-of-life, but still running in production. New server: AlmaLinux 9.7 — a RHEL 9 compatible distribution and the natural successor to CentOS. This migration was also an opportunity to finally escape an OS that hadn’t received security updates in years.
The naive approach — change DNS, restart everything, hope for the best — wasn’t acceptable. Instead, we designed a proper migration path with six phases:
Phase 1 — Full stack installation on the new server
Nginx (compiled from source with identical flags), PHP (via the Remi repo, with the same .ini config files from the old server), MySQL 8.0, Neo4j, GitLab EE, Node.js, Supervisor, and Gearman. Every service had to be configured to match the old server’s behavior before we touched a single DNS record.
SSL certificates were handled by rsyncing the entire /etc/letsencrypt/ directory from the old server to the new one. After the migration was complete and all traffic was flowing through the new server, we force-renewed all certificates in one shot:
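The original command isn’t shown here; `certbot renew --force-renewal` is the standard way to force-renew everything under /etc/letsencrypt/ in one pass, sketched below as a small Python wrapper (the exact invocation used in the migration is an assumption):

```python
import subprocess

# Force-renew every certificate certbot knows about. Assumes the whole
# /etc/letsencrypt/ directory (accounts, renewal configs) was rsynced over.
CERTBOT_CMD = ["certbot", "renew", "--force-renewal"]

def force_renew_all(execute=False):
    """With execute=False, just return the command so it can be reviewed
    before running it on the new server."""
    if execute:
        subprocess.run(CERTBOT_CMD, check=True)
    return CERTBOT_CMD
```

Running this only makes sense after DNS points at the new server, since Let’s Encrypt must be able to validate the domains there.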
Phase 2 — Web files cloned with rsync
The entire /var/www/html directory (~65 GB, 1.5 million files) was cloned to the new server using rsync over SSH with the --checksum flag for integrity verification. We ran a final incremental sync right before cutover to catch any files changed after the initial clone.
Phase 3 — MySQL master to slave replication
Rather than taking the database offline for a dump-and-restore, we set up live replication. The old server became master, the new server a read-only slave. We used mydumper for the initial bulk load, then started replication from the exact binlog position recorded in the dump metadata. This kept both databases in real-time sync until the moment of cutover.
Phase 4 — DNS TTL reduction
We scripted the DigitalOcean DNS API to lower all A and AAAA record TTLs from 3600 to 300 seconds — without touching MX or TXT records (changing mail record TTLs can cause deliverability issues). After waiting one hour for old TTLs to expire globally, we were ready to cut over in under 5 minutes.
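The real scripts are linked at the end of the post; a minimal sketch of the idea against the DigitalOcean records API looks like this (the token is a placeholder, and the record-selection rule is the one described above):

```python
import json
import urllib.request

DO_API = "https://api.digitalocean.com/v2"
TOKEN = "your-do-token"  # placeholder

def records_to_lower(records, new_ttl=300):
    """Only A/AAAA records above the target TTL. MX and TXT are left
    untouched so mail deliverability is unaffected."""
    return [r for r in records
            if r["type"] in ("A", "AAAA") and r.get("ttl", 0) > new_ttl]

def lower_ttls(domain, new_ttl=300, dry_run=True):
    req = urllib.request.Request(
        f"{DO_API}/domains/{domain}/records?per_page=200",
        headers={"Authorization": f"Bearer {TOKEN}"})
    records = json.load(urllib.request.urlopen(req))["domain_records"]
    for rec in records_to_lower(records, new_ttl):
        print(f"{rec['type']:5} {rec['name']}: {rec['ttl']} -> {new_ttl}")
        if dry_run:
            continue
        upd = urllib.request.Request(
            f"{DO_API}/domains/{domain}/records/{rec['id']}",
            data=json.dumps({"ttl": new_ttl}).encode(),
            method="PUT",
            headers={"Authorization": f"Bearer {TOKEN}",
                     "Content-Type": "application/json"})
        urllib.request.urlopen(upd)
```

The dry-run default mirrors the DRY_RUN convention the published scripts use: print what would change, touch nothing.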
Phase 5 — Old server nginx converted to reverse proxy
We wrote a Python script that parsed every server {} block across all 34 Nginx site configs, backed up the originals, and replaced them with proxy configurations pointing to the new server. This meant that during DNS propagation, any request still hitting the old IP was silently forwarded. No user would see a disruption.
Phase 6 — DNS cutover and decommission
A single Python script hit the DigitalOcean API and flipped all A records to the new server IP in seconds. The old server remained as a cold standby for one week, then was shut down.
The key insight: at no point did we have a window where the service was unavailable. Traffic was always being served — either directly or through the proxy.
This was the most complex part of the entire operation.
We used mydumper instead of the standard mysqldump — and it made an enormous difference. By leveraging the new server’s 48 CPU cores for parallel export and import, what would have taken days with a traditional single-threaded mysqldump was completed in hours. If you’re migrating a large MySQL database and you’re not using mydumper/myloader, you’re doing it the hard way.
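A sketch of the kind of invocation involved (flags per mydumper’s documentation; connection flags and paths are placeholders to adapt):

```python
import subprocess

def mydumper_cmd(outputdir, threads=32):
    """Assemble a parallel, compressed mydumper invocation.
    Add --user/--password/--host for a non-local server."""
    return ["mydumper",
            "--threads", str(threads),  # parallel workers across tables/chunks
            "--compress",               # gzip each chunk as it is written
            "--outputdir", outputdir]

# e.g. subprocess.run(mydumper_cmd("/backup/dump"), check=True)
```

The matching restore side, myloader, takes the same --threads flag, which is where the new server’s core count pays off.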
The main dump’s metadata file recorded the binlog position at the time of the snapshot:
File: mysql-bin.000004
Position: 21834307
This would be our replication starting point.
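Pulling that position out programmatically is simple; a sketch, assuming the two-line `File:`/`Position:` format shown above:

```python
def binlog_position(metadata_path):
    """Extract the master binlog file and position that mydumper records
    in its metadata file -- the exact replication starting point."""
    info = {}
    with open(metadata_path) as f:
        for line in f:
            key, sep, value = line.partition(":")
            if sep and key.strip() in ("File", "Position"):
                info[key.strip()] = value.strip()
    return info["File"], int(info["Position"])
```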
Once the dump was complete, we transferred it to the new server using rsync over SSH. With 248 GB of compressed chunks, this was significantly faster than any other transfer method:
The --compress flag in mydumper paid off here: compressed chunks transferred much faster over the wire.
Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7 — an outdated version that had been running in production for years. Before the migration, we ran mysqlcheck --check-upgrade to verify that our data was compatible with MySQL 8.0. It came back clean, so we installed the latest MySQL 8.0 Community on the new server. The performance improvement across all our projects was immediately noticeable — query execution times dropped significantly thanks to MySQL 8.0’s improved optimizer and InnoDB enhancements.
That said, the version jump did introduce one tricky problem.
After import, the mysql.user table had the wrong column structure — 45 columns instead of the expected 51. This caused mysql.infoschema to be missing, breaking user authentication.
But this failed the first time with:
ERROR: 'sys.innodb_buffer_stats_by_schema' is not VIEW
The sys schema had been imported as regular tables instead of views. Solution:
With both dumps imported, we configured the new server as a replica of the old one:
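The exact statement isn’t shown in the post; on MySQL 8.0 it would look roughly like this, built from the dump metadata (host and credentials are placeholders):

```python
def change_master_sql(host, user, password, log_file, log_pos):
    """Build the CHANGE MASTER TO statement (MySQL 8.0 syntax) pointing
    the new server at the old one from the mydumper snapshot position."""
    return (
        "CHANGE MASTER TO"
        f" MASTER_HOST='{host}',"
        f" MASTER_USER='{user}',"
        f" MASTER_PASSWORD='{password}',"
        f" MASTER_LOG_FILE='{log_file}',"
        f" MASTER_LOG_POS={log_pos};"
    )

# Run the result on the replica, then: START SLAVE;
```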
Almost immediately, replication stopped with error 1062 (Duplicate Key). This happened because our dump was taken in two passes — during the gap between them, rows were written to certain tables, and now both the imported dump and the binlog replay were trying to insert the same rows.
Setting the replica’s slave_exec_mode to IDEMPOTENT makes it silently skip duplicate-key and missing-row errors. All critical databases synced without a single error, and within a few minutes Seconds_Behind_Master dropped to 0.
Before touching a single DNS record, we needed to verify that all services were working correctly on the new server. The trick: we temporarily edited the /etc/hosts file on our local machine to point our domain names to the new server’s IP.
# /etc/hosts (local machine)
NEW_SERVER_IP yourdomain1.com
NEW_SERVER_IP yourdomain2.com
# … and so on for all your domains
With this in place, our browsers and Postman would hit the new server while the rest of the world was still going to the old one. We ran through our API endpoints, checked admin panels, and verified that every service was responding correctly. Only after this confirmation did we proceed with the cutover.
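As a side note, the same verification can be scripted without editing /etc/hosts at all, by opening the TCP connection to the new IP while presenting each domain for SNI and the Host header. This is not what the post used, just a related trick; the IP and domains below are placeholders:

```python
import socket
import ssl

NEW_SERVER_IP = "203.0.113.10"  # placeholder

def status_code(status_line):
    """'HTTP/1.1 200 OK' -> 200"""
    return int(status_line.split()[1])

def check_domain(domain, ip=NEW_SERVER_IP, path="/"):
    """TCP to the new IP, but present the real domain for SNI and Host,
    so the right vhost and certificate are exercised."""
    ctx = ssl.create_default_context()
    with socket.create_connection((ip, 443), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=domain) as tls:
            tls.sendall((f"GET {path} HTTP/1.1\r\n"
                         f"Host: {domain}\r\n"
                         "Connection: close\r\n\r\n").encode())
            return status_code(tls.recv(4096).decode().splitlines()[0])
```

The hosts-file approach has the advantage of testing real browsers and apps end to end; a script like this is better suited to checking dozens of domains in a loop.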
Once master-slave replication was fully synchronized, we noticed that INSERT statements were succeeding on the new server when they shouldn’t have been — read_only = 1 was set, but writes were going through.
The reason: all PHP application users had been granted SUPER privilege. In MySQL, SUPER bypasses read_only.
We revoked it from all 24 application users:
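The statements themselves are one REVOKE per user@host pair; a sketch that generates them (the user list here is illustrative; the real one comes from mysql.user):

```python
APP_USERS = [("app_user", "localhost"), ("api_user", "%")]  # illustrative

def revoke_super_statements(users):
    """One REVOKE per user@host pair; run the output in the mysql shell,
    then FLUSH PRIVILEGES."""
    return [f"REVOKE SUPER ON *.* FROM '{user}'@'{host}';"
            for user, host in users]

for stmt in revoke_super_statements(APP_USERS):
    print(stmt)
```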
After this, read_only = 1 correctly blocked all writes from application users while allowing replication to continue.
All domains were managed through DigitalOcean DNS (with nameservers pointed from GoDaddy). We scripted the TTL reduction against the DigitalOcean API, only touching A and AAAA records — not MX or TXT records, since changing mail record TTLs can cause deliverability issues with Google Workspace.
After waiting one hour for old TTLs to expire, we were ready.
Rather than editing 34 config files by hand, we wrote a Python script that parsed every server {} block in every config file, identified the main content blocks, replaced them with proxy configs, and backed up originals as .backup files.
The key: proxy_ssl_verify off — the new server’s SSL cert is valid for the domain, not for the IP address. Disabling verification here is fine because we control both ends.
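A minimal version of the generated proxy block might look like the template below. Only proxy_ssl_verify off comes from the post; the certificate paths, header set, and IP are assumptions:

```python
NEW_SERVER_IP = "203.0.113.10"  # placeholder

PROXY_BLOCK = """server {{
    listen 443 ssl;
    server_name {domain};

    ssl_certificate     /etc/letsencrypt/live/{domain}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/{domain}/privkey.pem;

    location / {{
        proxy_pass https://{ip};
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_ssl_server_name on;   # send SNI so the right vhost answers
        proxy_ssl_verify off;       # upstream cert matches the domain, not the IP
    }}
}}
"""

def proxy_config(domain, ip=NEW_SERVER_IP):
    """Render a catch-all reverse-proxy server block for one domain."""
    return PROXY_BLOCK.format(domain=domain, ip=ip)
```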
With replication at Seconds_Behind_Master: 0 and the reverse proxy ready, we executed the cutover in order:
1. New server: STOP SLAVE;
2. New server: SET GLOBAL read_only = 0;
3. New server: RESET SLAVE ALL;
4. New server: supervisorctl start all
5. Old server: nginx -t && systemctl reload nginx (proxy goes live)
6. Old server: supervisorctl stop all
7. Mac: python3 do_cutover.py (DNS: all A records to new server IP)
8. Wait: ~5 minutes for propagation
9. Old server: comment out all crontab entries
The DNS cutover script hit the DigitalOcean API and changed every A record to the new server IP — in about 10 seconds.
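The flip uses the same DigitalOcean records API as the earlier TTL reduction; the selection logic can be sketched as a pure function (the PUT loop around it is the same as in the TTL example):

```python
def cutover_updates(records, old_ip, new_ip):
    """(record_id, payload) pairs for every A record still pointing at
    the old server; PUT each payload to the DO records endpoint."""
    return [(r["id"], {"data": new_ip})
            for r in records
            if r["type"] == "A" and r["data"] == old_ip]
```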
After migration, we discovered many GitLab project webhooks were still pointing to the old server IP. We wrote a script to scan all projects via the GitLab API and update them in bulk.
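A sketch of that bulk update using GitLab’s REST hooks endpoints (the instance URL, token, and IPs are placeholders; the published script is the authoritative version):

```python
import json
import urllib.request

GITLAB = "https://gitlab.example.com/api/v4"  # placeholder instance
TOKEN = "your-gitlab-token"                   # placeholder
OLD_IP, NEW_IP = "198.51.100.5", "203.0.113.10"

def rewrite_hook_url(url, old_ip=OLD_IP, new_ip=NEW_IP):
    """Swap the stale IP for the new one; other URLs pass through."""
    return url.replace(old_ip, new_ip)

def api(path, data=None, method="GET"):
    req = urllib.request.Request(
        f"{GITLAB}{path}", method=method,
        data=json.dumps(data).encode() if data else None,
        headers={"PRIVATE-TOKEN": TOKEN, "Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))

def update_all_webhooks():
    # GET /projects, GET /projects/:id/hooks, PUT /projects/:id/hooks/:hook_id
    for project in api("/projects?per_page=100"):
        for hook in api(f"/projects/{project['id']}/hooks"):
            new_url = rewrite_hook_url(hook["url"])
            if new_url != hook["url"]:
                api(f"/projects/{project['id']}/hooks/{hook['id']}",
                    data={"url": new_url}, method="PUT")
```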
We went from $1,432/month down to $233/month — saving $14,388 per year. And we ended up with a more powerful machine:
The entire migration took roughly 24 hours. No users were affected.
MySQL replication is your best friend for zero-downtime migrations. Set it up early, let it catch up, then cut over with confidence.
Check your MySQL user privileges before migration. SUPER privilege bypasses read_only — if your app users have it, your slave environment isn’t actually read-only.
Script everything. DNS updates, nginx config rewrites, webhook updates — doing these by hand across 34+ sites would have taken hours and introduced errors.
mydumper + myloader dramatically outperforms mysqldump for large datasets. Parallel dump/restore with 32 threads cut what would have been days of work down to hours.
Cloud providers are expensive for steady-state workloads. If you’re not using autoscaling or ephemeral infrastructure, a dedicated server often delivers better performance at a fraction of the cost.
All Python scripts used in this migration are open-sourced and available on GitHub:
* do_list_domains_ttl.py — List all DigitalOcean domains with their A records, IPs, and TTLs
* do_to_hetzner_bulk_dns_records_import.py — Migrate all DNS zones from DigitalOcean to Hetzner DNS
* do_cutover_to_new_ip.py — Flip all A records from old server IP to new server IP
* mysql_compare.py — Compare row counts across all tables on two MySQL servers
* final_gitlab_webhook_update.py — Update all GitLab project webhooks to the new server IP
All scripts support a DRY_RUN = True mode so you can safely preview changes before applying them.
...
Read the original on isayeter.com »
Anonymous request-token comparisons from the community, showing how Opus 4.6 and Opus 4.7 differ on real inputs
...
Read the original on tokens.billchambers.me »
In 2025, the Kdenlive team continued grinding to push the project forward through steady development, collaboration, and community support. Over the past year we’ve found a nice balance between adding new features, bug fixing, polishing the user interface, and improving performance and workflow, with stability taking priority over feature creep. We relaunched the website with a new content management system, refreshed some content and the design, and restored historic content dating back to 2002. We also strengthened upstream collaboration with the MLT developers and contributed several improvements to OpenTimelineIO. Here’s a look at what we’ve been up to and what is ahead.

As part of KDE Apps, we follow the KDE Gear release cycle, with three major releases each year (in April, August, and December), each followed by three point maintenance releases.

The first release of the year added a powerful automatic masking tool and brought the last batch of features from our last fundraiser. The new Object Segmentation plugin, based on the SAM2 model, makes it possible to remove any selected object from the background. We rewrote our OpenTimelineIO import and export using the C++ library, so you can now exchange projects with other editing applications that support this open source file format. Audio waveform generation got a 300% performance boost, along with a refactored sampling method that accurately renders the audio signal and higher-resolution waveforms for greater precision.

The next release focused heavily on stabilization, bringing over 300 commits and fixing more than 15 crashes. Instead of major new features, the effort went into polishing and bug fixing. We redesigned the audio mixer, bringing level meters with clearer visuals and thresholds, and did some code refactoring and cleanup.
This change fixes issues with HiDPI displays with fractional scaling. Guides and Markers got a major overhaul this release to improve project organization. The titler also received some much-needed love: improved SVG and image support with the ability to move and resize items, center resize with Shift + Drag, and a renamed Pattern tab (now Templates) with the templates dropdown moved into it.

The focus of this release cycle was on improving the user experience and polishing the user interface. We added a new first-run launch screen for first-time users, as well as a Welcome Screen that makes it easy to launch recent projects. We also added a new, more flexible docking system that lets you group widgets, show or hide them on demand, and save layouts as separate files that can be shared or stored within projects. The audio waveform in the Project Monitor got a revamped interface with an added minimap.

The next release is just around the corner and brings a nice batch of nifty new features, like monitor mirroring and animated transition previews, making it much easier to visualize how transitions will look before applying them. Additionally, dropping a transition onto the timeline can now automatically adjust its duration to match the clips above and below, saving time and reducing manual tweaking. Monitor mirroring lets you mirror any monitor while working in fullscreen mode; it’s especially useful when working with multiple displays or collaborating with others in the editing room. Other additions:

* Change the playback speed of multiple clips at once
* Import a clip directly from the timeline context menu and insert it at the click position
* Option to always zoom toward the mouse position instead of the timeline playhead

Our roadmap is constantly being reviewed and updated, and some of the upcoming highlights include implementing the new features in MLT, the multimedia framework which powers Kdenlive.
Some exciting upcoming features include 10/12-bit color support, playback (decoding) optimizations, and OpenFX support (shoutout to the Kdenlive community member leading this effort). Also expected is a refactoring of the subtitle system, as well as continued development of the Advanced Trimming Tools.

We are currently working on refactoring the keyframing system and implementing a Dopesheet: essentially a dedicated timeline for managing and viewing keyframes from multiple effects simultaneously. This work will also introduce per-parameter keyframing (currently, once you add a keyframe to an effect, it is applied to all parameters by default). More info can be found in the last status report. This work is made possible through an NGI Zero Commons grant via NLnet.

We have been working on enabling and fixing multiple modules in MLT to compile with MSVC, which will allow us to ship Kdenlive in the Microsoft Store soon. Another advantage is that it will let us run unit tests on our Windows CI.

Currently, the Kdenlive core team is made up of 8 active members, including 2 developers. In 2025, 38 people contributed code to Kdenlive (including the core dev team and other KDE devs), a truly impressive number! Even more exciting, about half of them were first-time contributors, which is always great. We hope to see many of them continue contributing in the future. On behalf of the Kdenlive team, we salute you all! Note that these numbers refer specifically to contributions to the Kdenlive application; other projects such as the test suite and website are hosted in separate repositories and are not included in these figures.

In February, part of the Kdenlive core team met in Amsterdam for a short sprint, highlighted by a visit to the Blender Foundation, where we met with Francesco Siddi, who shared valuable insights into Blender’s history and offered advice on product management for Kdenlive.
We also attended their weekly open session, where artists and developers present progress on ongoing projects. During the sprint, we discussed and advanced several technical topics; one highlight was finishing an MLT Framework patch to enable rendering without a display server (needed for Flatpak testing).

The Berlin sprint was one of our most productive gatherings to date. Most of the team was there in person, and we also connected online with those who couldn’t make it. We discussed just about every aspect of the project, from roadmap planning to upcoming features and workflow improvements. Some of the highlights:

* Evaluated the current state of the Titler and discussed possible integration with Glaxnimate
* Developed a proof of concept for using KDDockWidgets
* Redesigned and started development of the audio clip view in the Clip Monitor

Thanks to the nice folks at c-base who kindly hosted us.

Akademy is always a great opportunity to exchange ideas with the broader KDE and Qt communities. One of the highlights was meeting the maintainer of Glaxnimate, where we discussed common goals and ways to collaborate. This year, Akademy will be in Graz on the 19-24 of September, and we hope to see you there.

We’re very happy to see more YouTube channels talking about Kdenlive. Here are some examples of what the community has been creating. We’d love to see what you’ve been working on in the past year; share your video productions in the comments!

Help us grow the community by organizing meetups, talks, or workshops in your local area. Don’t hesitate to contact us if you need guidance, materials, or support to get started. Below are photos from a workshop with indigenous communities in Paraguay.

Kdenlive was downloaded 11,500,714 times from our download page in 2025.
Do note that many additional installs happen through Linux distribution package managers, the Snap Store, Flathub, and other third-party servers, where statistics are not always available or reliably measurable. The Flatpak package from Flathub gets 41,499 downloads per month, and version 25.04.2 got the most downloads. To the 5 of you in Antarctica, let us know what you are editing. ;)

Ever since our last, and very successful, fundraiser in 2022, we haven’t actively asked for donations, yet the community has continued to support us. We are very grateful for that. In 2025, we received a total of €9,344.80 from donations (down from €11,526.61 in 2024). Around 30% of the amount was given by donors who kindly set up a recurring plan. The average donation was about €25, with the lowest amount being €10 and the highest €500.

We allocate 20% of our budget to KDE e.V. to support infrastructure costs (servers and related expenses), as well as administration, legal support, and travel. As in previous years, your contributions enable us to continue supporting Jean-Baptiste (Kdenlive’s maintainer), allowing him to dedicate several days each month to Kdenlive in addition to his volunteer work.

WE NEED YOUR SUPPORT

Kdenlive needs your support to keep growing and improving. If just a quarter of the people who downloaded Kdenlive in 2025 contributed €5, our maintainers would be able to dedicate more time to the project, and it would even allow us to hire more developers to speed up development and improve stability. Small amounts can make a big difference; please consider making a donation. You may also contribute by getting involved and helping in:
...
Read the original on kdenlive.org »
Japan is the land of the train. 28 percent of passenger kilometers in Japan are travelled by rail, more than anywhere else in the developed world. France achieves 10 percent, Germany 6.4 percent, and the United States just 0.25 percent. Travel in Japan is over a hundred times more likely to be by rail than travel in the United States.
Japan’s vast railway network is divided between dozens of companies, nearly all of them private. The largest of these, JR East, carries more passengers than the entire railway system of every country other than China and India. Each year, JR East carries four times as many passengers as the whole British railway system, even though it has fewer kilometers of track, serves about ten million fewer people, and competes with eight other companies. Japan’s railway system turns a large operating profit and receives far less public subsidy than European and American railways.
In most developed countries, the railways have struggled since the rise of the automobile in the 1950s. From this point on, North America saw the near-total replacement of passenger trains with cars and planes. In Europe, it meant vast government financial support to keep the lines open.
Japan’s different trajectory is often attributed to culture: the Japanese are conformists who are content to take public transport, unlike freedom-loving Americans who prefer to drive everywhere. Europeans are somewhere in between. Culture is also used to explain the incredible punctuality of Japanese railways.
These cultural explanations are wrong. The Japanese love cars, but they take trains because they have the best railway system in the world. And their system excels because of good public policy: business structure, land use rules, driving rules, superior models for privatization, and sound regulation have given Japan its outstanding railways.
This is good news for friends of rail. Culture is built over centuries, and replicating it is hard. But successful public policies can be emulated by one good government. Much about Japan’s railway system could be replicable around the world.
Today, the most striking institutional feature of Japanese rail is that it is privately owned by a throng of competing companies.
The railway arrived in Japan in 1872, during the Meiji Restoration, which opened the country up to foreign trade, ideas, and technologies. Like most Western countries, Japan nationalized its railways in the early twentieth century, creating what became known as Japanese National Railways (JNR). But it did not nationalize all of the lines, focusing only on mainline railways of national importance, and new private railways were still permitted.
Between 1907 and World War II, Japan saw a boom in new private electric railways, coinciding with rapid urbanization. Technologically, most of these private railways were similar to the famous interurbans in the United States: they were basically electric trams, but running between cities as well as within them. The American network eventually withered, and almost nothing of it survives today. In Japan, however, the network consolidated, and the light tramlines gradually evolved into heavy-rail intercity connections.
These companies are today known as ‘legacy private railways’ on account of their having been private since their inception. There are eight legacy private railways in the Tokyo metropolitan area, five in the Osaka–Kobe–Kyoto megalopolis, two in Nagoya, and one in the fourth city of Fukuoka. There are also dozens of smaller ones elsewhere. In the three largest urban areas, these operators account for nearly half of railway track and stations, as well as a plurality of ridership. The largest, Kintetsu, not only operates urban services, but a whole intercity network stretching from Osaka to Nagoya.
These companies often compete head-to-head. At its most extreme, three separate commuter lines compete for the traffic between Osaka and the port city of Kobe, running in parallel, sometimes less than 500 meters apart.
Meanwhile, the nationalized railways were managed by JNR. In the postwar era, JNR was responsible for building the famous Shinkansen system, as well as running commuter and long-distance lines throughout Japan. But in 1987, it was largely privatized, broken into six regional monopolies for passenger services together with a single national freight operator. These are collectively known as the Japan Railways Group (JR).
This means that Japan has ended up with six railway companies that trace their descent to the nationalized railways, the sixteen big legacy companies that have always been private, and a host of minor legacy railways, as well as numerous underground metros (some private, some municipally owned), monorails, and tram systems. This institutional diversity is striking enough. But equally striking is the consistent business model that has evolved amidst this pluralism: the railway that builds a city.
If I take a train to go for a solitary walk in the countryside, the railway company can capture some of the value it creates by charging me for the journey, just as other companies capture the value of the goods and services they provide by charging for them. However, if I take a train to visit family, clients, a theater, or a shop, an important difference appears. The railway can capture the value it creates for me by charging me a fare, but it cannot capture the value it creates for those at my destination. As transport infrastructure creates benefits that produce no revenue for providers, free markets rarely build enough of it.
Japan has partly solved this problem by enabling railway companies to do a great deal beside running railways. Take the example of the Tokyu corporation, one of the legacy private railways in southern Tokyo. You can not only travel on its trains, but also ride a Tokyu bus, live in a Tokyu-built house, work in a Tokyu office complex, see a doctor in a Tokyu hospital, buy groceries in a Tokyu supermarket, spend an afternoon at a Tokyu museum-theater-cinema complex, take your children to their amusement park, and even die in a Tokyu retirement home. The positive spillover effects of the railway on these things are captured by Tokyu because it owns them. The president of Tokyu has said:
I think that though we are a railway company, we consider ourselves a city-shaping company. In Europe for instance, railway companies simply connect cities through their terminals. That is a pretty normal way of operating in this industry, whereas what we do is completely different: we create cities and then, as a utility facility, we add the stations and the railways to connect them one with another.
This model was pioneered in the 1950s by what became Hankyu Railways. Hankyu’s network connects central Osaka to its northern suburbs, as well as Kyoto and Kobe. Its innovative founder Kobayashi Ichizo first built suburban housing, then a department store at the terminal station; he then created a hot spring resort, a zoo, and his own distinctive brand of all-women musical theater, the Takarazuka Revue. He also began to run bus services to and from his stations. Other companies emulated Hankyu’s example: Tokyo Disneyland is a collaboration between Disney and the Keisei Railway, while Hanshin in Osaka owns the Hanshin Tigers baseball team.
Core rail operations are profitable for every Japanese private railway company, but they usually only account for a plurality or a small majority of revenue. The rest is contributed by their portfolio of side businesses. There is a natural financial synergy between the reliable but unremarkable cash flow of train fares and the profitable but riskier real estate and commercial side of the business. Railway companies’ side businesses also attract people to live and work on their rail corridor, reinforcing the customer base for the railway services themselves.
This virtuous circle is enabled by transit-oriented development. Japan’s liberal land use regulation makes it straightforward to build new neighborhoods next to railway lines, giving commuters easy access to city centers. It also enables the densification of these centers, which means that commuters have more places they want to go.
Railways cost a lot to build, but once they are built, they can move enormous numbers of people, far more than a road of similar size. This means that they work best in cities with a high density of people, jobs, and other activities. In 2019, New York City was the only American city where rail had a higher modal share than cars, in part because Manhattan has 2.5 million jobs, two million residents, and 50 million tourist visits crammed into 59 square kilometers.
This does not mean that rail-oriented cities must be structured like Chinese cities: islands of high-rise apartments connected by metros and separated by motorways. Japanese cities have the lowest residential density in Asia, and a plurality of the Japanese live in houses, usually detached ones. The urban area of Tokyo, the densest Japanese city, has a weighted population density less than that of many European cities, including Paris, Madrid, or Athens. Japanese cities have vast low-rise, predominantly residential suburbs, built at densities that might be higher than what is typical in the United States, but that would be quite normal in Northern Europe.
What makes Japan’s cities particularly suited to rail is thus not their residential districts, but their huge and hyperdense centers. These really are special: the cores of Tokyo or Osaka are unlike anything that exists in Europe or North America. Many of their features are famous worldwide: the vertical street zakkyo buildings, underground streets, shopping streets under rail tracks, covered arcades, elevated station squares, and vertical cities. Getting millions of commuters and shoppers into these downtowns is where rail excels because its extreme spatial efficiency means that infrastructure with a relatively modest footprint can transport vast numbers of people into a small area.
None of this emerged from a coherent masterplan of transit-oriented development like Copenhagen’s Finger Plan or Curitiba’s Trinary System. Postwar Japanese opinion was committed to decentralization both to rural peripheries and to the suburbs through greenbelts, motorways, and new towns.
Instead, this variety and adaptability around railways is possible because of the way Japanese urban planning works. Since 1919, Japan has had a standardized national zoning system, but it is much more liberal than development control systems in Western countries. The Japanese authorities did not intend or even desire dense urban centers, but they did not prevent them, rather like nineteenth-century governments in the West.
This liberal zoning system is reinforced by private access to city planning powers. Thirty percent of Japan’s urban land has been subject to land readjustment, where agreement among two thirds of residents and landowners in an area is enough to allow its replanning, including compulsory land takings and demolition for amenities and infrastructure. Initially, land readjustment was used only to assemble rural land for urbanization, but over time it was increasingly used to redevelop already urbanized areas, and new variants were created to build the skyscrapers that surround the major stations of central Tokyo.
The history of the private railway companies could be written as a story of land readjustment projects: the initial building of the lines in the interwar years proceeded through one land readjustment project after another. Postwar improvements such as double-tracking, platform lengthening, and constant redevelopment of stations and their immediate thresholds were only possible because the railways could secure land takings cooperatively with local businesses and landowners.
Perhaps the greatest example of this phenomenon involved Tokyu. In 1953 the company decided to build the Den’en Toshi Line, or Garden City Line, to serve a rural area southwest of Tokyo. This would be enabled by a series of land readjustment projects collectively among the largest in Japanese history.
Over 30 years, 3,100 hectares were covered, of which only 36 percent was devoted to residential and commercial development, with 20 percent for forest and parks, 17 percent for roads, and much of the rest for watercourses. The population of the land readjustment zone would rise from 42,000 in 1954 to over 500,000 in 2003.
By connecting the affluent southwestern suburbs to Tokyu’s main real estate hub next to Shibuya station, now the second busiest in the world, the Den’en Toshi Line allowed Tokyu to become the largest private railway by revenue and ridership. The Japanese government and academics generally consider the Den’en Toshi Line to be the best corridor of transit-oriented development in Japan.
But the railway-as-city-builder model is not the only reason Japanese railways have been able to thrive. European countries usually prohibited railways from running real estate side businesses, but in the United States and Canada the practice was extremely widespread in the nineteenth and early twentieth centuries, and many famous railway suburbs were developed this way. Despite this, passenger rail in these countries collapsed in the mid-twentieth century. Part of the difference was that Japan did not extend the same implicit subsidies to cars as Western governments did.
The land of Toyota, Nissan, and Honda is not an anti-car nirvana. In fact, Japan has excellent motorways, and across the country as a whole a small majority of journeys are made by car. But Japan is a place where cars and car-oriented lifestyles compete on a level playing field.
Japan is one of the only countries to have privatized parking. In Europe and North America, vast quantities of parking space are socialized: municipalities own the streets and allow people to park on them at low or zero cost. Initially with the intention of encouraging the provision of more parking spaces, Japan made it illegal to park on public roads or pavements without special permission. Before someone buys a car, they must prove that they have a reserved night-time space on private land, either owned or leased.
Since parking on public land is banned, municipalities are not worried about overspill parking from developments with inadequate private parking. They therefore have no reason to impose parking minimums on developments: the market is left to decide whether parking is the most valuable use of private land. Where land is abundant, as in rural areas, suburbs, or small towns, private parking is plentiful. But in city centers, it is outcompeted by other land uses. According to the late Donald Shoup, central Tokyo has 23 parking spaces per hectare and 0.04 parking spaces per job, compared with 263 and 0.52 for Los Angeles. Even Manhattan, the densest urban area in North America with the lowest levels of car ownership, has about 60 parking spaces per hectare.
Japanese roads are expected to be self-financing. Motorways are run by self-contained public cooperatives, very similar to the statutory authorities that ran English roads and canals between 1660 and the late 1800s, and funded by tolls on their users. Vehicle registration taxes, which are allocated to localities for road construction and maintenance, are worth three percent of the Japanese government budget.
These measures, adopted in the 1950s, were not intended to suppress car use — the point was to fund a massive road expansion — but they have forced private vehicles to internalize many of their hidden costs. In the Tokyo urban area, the average household spends 71,000 yen ($450) each year on public transport fares and 210,000 yen ($1,350) on car purchase and maintenance costs.
But the private car was not the only competitor faced by the private railways. For eight decades in the twentieth century, they also had to face the juggernaut of Japanese National Railways. Its privatization in 1987 removed the final obstacle to creating the world’s best railway system.
Railway privatization in Britain, New Zealand, Argentina, and Sweden has had a mixed reception, and all of those countries, apart from Sweden, have taken steps to reverse it. In Japan, it has been so successful that the government subsequently privatized the metro systems in Tokyo and Osaka.
In the postwar period, JNR enjoyed real successes. It built the revolutionary Shinkansen, the first high-speed railway in the world. It also aggressively electrified and double-tracked major trunk lines, quadruple-tracked lines into and out of major cities, and added city-center loops and freight bypasses. But these achievements were overshadowed by two problems.
The first was politics. Many countries adapted to the rise of the car by closing the least profitable parts of their passenger rail network, like the consolidation of American freight rail into the Class I operators or the Beeching Axe in Britain. In Japan, however, the ruling Liberal Democratic Party drew its strength from rural constituencies, whose support it retained with pork-barrel politics. Its ‘rail tribe’ group, led by rural MPs, prevented JNR from adapting itself to mass motorization.
JNR therefore did not amputate gangrenous rural and freight services that imposed heavy costs with few benefits. Worse, it continued to build new loss-making rural railway lines, known in Japanese as Gaden-intetsu, or railways pulled into the rice field.
The second problem was organized labor. In general, Japanese trade unions are known for their moderation and responsibility, a generalization that also held true for the unions at the legacy private railways. The JNR unions, however, became highly militant, secure in the knowledge that their nationalized employer could never go bankrupt. Their largest series of strikes in 1973 provoked riots from commuters.
The railway unions imposed overstaffing on revenue-generating urban services, at a time when both international and private domestic operators were reducing staffing requirements against a backdrop of higher wages and the growing automation of signaling and ticketing. As a result, 78 percent of JNR’s costs were related to labor, compared to 40 percent for other Japanese railways. The average worker at a private railway was 121 percent more productive than their JNR counterpart.
By the early 1980s, only seven of 200 JNR lines made a profit. Successive governments deferred serious reform, running up debt, cutting investment in new urban lines, raising ticket prices to twice those of comparable private railways, and increasing subsidies, until the annual subsidy equaled the total cost of the Shinkansen.
In 1982, Prime Minister Yasuhiro Nakasone started to privatize the railways. Unlike other countries, Japan simply returned to the traditional private railway model of the nineteenth and early twentieth centuries: tracks, trains, stations, and yards were owned by vertically integrated regional conglomerates.
There are substantial advantages to vertical integration. Railways are a closed system that has to be planned as a single unit. Changing the timetable at station A can affect the timetable at station Z; buying new trains that can travel faster might require changes to the infrastructure so they can reach their top speed, which in turn requires rewriting the timetables. This becomes especially complicated if different services share tracks. To prevent delays from propagating from one service to another, the timetable needs to be carefully designed to make best use of the available infrastructure.
The starkest effect of privatization was a massive and immediate increase in labor productivity and profitability relative to the legacy private railways. In fact, this began before privatization: its mere threat strengthened the government’s hand when bargaining with the unions and forced JNR to begin closing rural lines.
Privatization saw a general trend of productivity improvements, following a big one-time improvement between 1982 and 1990, when the workforce was cut by more than half, 83 loss-making lines were removed, and JNR’s debts were transferred to a holding company.
The second great advantage of privatization was to allow the JR companies to emulate the railway-as-city-builder model of the legacy private railways: for instance, JR East owns two shopping center brands, a ski resort, a coffee chain, and even a vending machine drink company. The JR companies have not ignored their rail business: they have continued to build new high-speed lines and urban tunnels, upgrade stations, and implement a host of other improvements such as the introduction in the 1990s of smart cards that allow passengers to pay their fare with a tap.
This does not mean that the Japanese railway industry is a pure creature of free enterprise. No railway system ever has been. The Japanese system has found an equilibrium that makes rail policy explicit and limited. Leaving aside railway safety and business regulation, there are two main policy levers: fare maximums and capital expansion subsidies.
Price controls, whether rent controls, caps on the price of gasoline, wage freezes, or minimum agricultural prices, are often cited as a classic example of misguided government intervention. Railways are no exception: Tokyo’s infamously crammed trains are a symptom of underpriced rush-hour traffic.
Railways have market power because the substitutes for railway trips – coaches, cars and planes – are quite a different product. This monopolistic position has historically meant trouble: monopoly systems, whether private or public, have a tendency to abuse their position to charge higher prices and run bad services. For this reason, the private monopolies that were common in the Western world before World War I often had price controls imposed on them. For example, most of the American streetcar networks were operated as long-term, price-controlled franchises granted by the city.
Price maximums, if set too low, could have ruined Japan’s railways. This is exactly what happened to many Western transit services after the First World War. But the postwar Japanese practice has capped fares generously. The system is explicitly designed to maintain profitability per rider, which in turn incentivizes the companies to maximize ridership. That buys political legitimacy for the privatized system, which is necessary for the continued provision of capital expansion subsidies. Indeed, during the long deflation era between 1992 and 2022, it was common for operators to charge below the maximum, and the real value of railway fares continued to rise. Fare maximums are set on the basis of the average cost structures of all railway operators in a region, so companies with below-average costs like Tokyu would often charge below the cap to maintain a competitive edge, prevent public backlash, and maximize traffic to their side-businesses.
Other than the fare maximums, the railways are free to make their own decisions about timetables, service patterns and day-to-day operations, a highly specialized and technical task which requires deep expertise. This contrasts with the government meddling with, say, Amtrak’s routes.
Carefully designed public subsidies also play a useful role. Although Japanese railways do not receive subsidies for day-to-day operations, they do receive government loans and grants for capital investments. These are typically tied to public priorities, such as disability access or earthquake-proofing, or to projects that have large spillovers that the railway company would be unable to internalize, like removing level crossings, or elevating at-grade railways or trams in order to reduce road congestion and accident risk. Generally, the local prefectural government will match the contribution of the national government. Larger new-build projects come with leaseback or debt-repayment conditions, which fare revenue is expected to cover.
Railway companies invested heavily in real estate businesses, often funding lines through selling land for housing around new stations. Liberal spatial policy meant that such development happened easily, even as it enabled dense development in urban cores where radial rail lines converged. Rail companies were generally vertically integrated regional monopolies, owning the land, track, and rolling stock, setting their own timetables, and employing their staff. The state imposed controls to stop them exploiting their monopoly position, but it did so cautiously, allowing them to make sufficient profit that incentives to invest were preserved. Capital subsidies were targeted at providing specific public goods that normal commercial operations overlooked.
The above paragraph could be written by a historian of the future about contemporary Japan. But every word in it could also be written by a historian today about the United States in the nineteenth century — usually seen as the epitome of capitalist individualism. This striking fact contradicts the idea that America’s supposed individualism foreordains it to be the land of the car, or that Japan’s supposed communitarianism foreordained it to be the land of rail.
It also puts pressure on the idea that the demise of rail is the inevitable consequence of cars. All countries saw some shift to cars in the twentieth century, and all rail industries had to respond to that. But public policy had an enormous effect on how successfully they did so. The rise of zoning restrictions on density, excessive price controls, nationalization, and vertically disintegrated privatization have hampered Western rail in remaining competitive against cars since the 1920s. By maintaining and restoring the institutions that built the first railway systems in the nineteenth century, the Japanese have created the mightiest railway system of the twenty-first.
...
Read the original on worksinprogress.co »
...
Read the original on www.righto.com »
When they received the call to respond to an Israeli airstrike in the city of Mayfadoun, in southern Lebanon, most of the paramedics held back, having previously seen colleagues killed by double-tap attacks targeting rescuers. But the medics from the Islamic Health Association (IHA) rushed to the scene.
By the time the other emergency workers arrived at the site, they found the IHA medics had indeed been caught in a second strike. They started evacuating their wounded colleagues, only for their ambulances to be hit in two further attacks.
One of the paramedics covered his ears and screamed, convulsing in pain as shrapnel shattered the back window of the ambulance.
The rescue mission on Wednesday afternoon had turned into a nightmare as Israel carried out three consecutive strikes on three sets of ambulances and medical workers.
In total, the attacks killed four medics and wounded six more, from three different ambulance corps, according to medical sources. Three of the medics were from the Hezbollah-affiliated IHA and Amal-affiliated medical corps, while one was from the Nabatieh emergency services organisation. Under international law, all medics are protected and are considered non-combatants, regardless of political affiliation.
Rescuers in Lebanon have long been wary of the double-tap attack, when Israeli forces target a location, wait until people gather to help survivors, and then strike again. Wednesday’s three-wave attack after the initial one prompted the coining of a fearsome new term: the quadruple tap.
In a video taken by one of the paramedics at the site, rescuers are seen loading two wounded people into their ambulances when a bomb lands next to their vehicle. Paramedics rush to extract the driver, who is motionless and limp as they pull him from the ambulance, which is splashed with blood. “Oh God, oh God,” the man filming can be heard saying. They carry two more blood-covered medics out of their vehicle and on to stretchers.
Among the paramedics killed was Fadel Sarhan, 43, who is survived by his eight-year-old daughter.
“Fadel was a very loved person. He had a bold personality, but at the same time, he was emotional. He was well liked and responsible,” said Ali Nasr al-Deen, the head of the Mayfadoun civil defence centre who grew up with Sarhan.
“He used to feed the cats and dogs. He would bring pet food from Beirut so they wouldn’t go hungry. He was that kind of person, caring and attentive. It’s a huge loss for us,” said Nasr al-Deen.
Medics mourned their colleagues on Thursday at funerals in Nabatieh, a city near Mayfadoun. Such events have become increasingly common, with healthcare workers killed by Israeli bombings on a near daily basis.
Mohammed Suleiman, whose 16-year-old son, Joud, was killed while on duty as a paramedic by an Israeli strike weeks earlier, joined his peers in burying another of his friends on Thursday. A few hours after the funerals, Israel carried out another wave of airstrikes on Nabatieh.
Israel has so far killed 91 healthcare workers and wounded 214 more in Lebanon since the Israel-Hezbollah war started on 2 March. It has given little justification for its repeated attacks on medical infrastructure and workers, apart from accusing Hezbollah of using ambulances and hospitals to transport fighters and weapons, without providing evidence for the claim.
The Lebanese ministry of health accused Israel of deliberately targeting ambulance crews. “Paramedics have become direct targets, pursued relentlessly in a blatant violation that confirms a total disregard for all norms and principles established by international humanitarian law,” the ministry said in a statement.
The Israeli military did not immediately respond to a request for comment.
In the video taken of the quadruple tap on Wednesday, the frame was frozen on the interior of the ambulance, as the Nabatieh emergency services highlighted that the vehicle clearly contained no weapons.
A few hours after Israel hit the ambulances outside Nabatieh, it bombed the vicinity of the governmental hospital in Tebnine, south Lebanon. It was the second time in two days that Israeli bombings damaged the healthcare facility, which is the only remaining public hospital in the area. The strikes injured 11 hospital workers and damaged the emergency department, according to the World Health Organization (WHO).
A video of Tebnine hospital from 14 April showed workers trying to clear shattered concrete and debris from the emergency department after a strike blew in the windows.
Commenting on the strike in Tebnine, the head of the WHO, Tedros Adhanom Ghebreyesus, said: “I reiterate the call for the immediate protection of healthcare facilities, health workers, ambulances and patients. There must be safe, sustained and unhindered humanitarian access across Lebanon.”
An ambulance in Tebnine was also struck on Thursday, leading to the critical injury of two medics, according to the Lebanese ministry of health. As healthcare workers watched their colleagues and friends being killed by Israel, the mental toll was becoming almost too much to bear.
“We have to go to places to rescue people, but then we get double tapped,” said Abbas Atwi, the head of the IHA’s emergency department in Nabatieh, shortly after a medical centre was targeted in March, killing his friends and colleagues. “But we will stay and keep going, we will not leave.”
...
Read the original on www.theguardian.com »
Sixteen bettors made $100,000 each by accurately predicting the timing of the US airstrikes against Iran on 27 February. Later, a single user would make over $550,000 after betting that Ayatollah Ali Khamenei would be toppled, just moments before his assassination by Israeli forces. On 7 April, right before Donald Trump announced a temporary ceasefire with Iran, traders bet $950m that oil prices would come down. They did.
These bets and other well-timed wagers accurately predicted the precise timing of major developments in the US-Israel war with Iran, creating huge windfalls and raising concerns among lawmakers and experts over potential insider trading.
Betting — once largely siloed to sporting events — has now spread to include contracts on news events where insider information could give some traders an advantage.
The proliferation of online betting markets like Polymarket and Kalshi has allowed bets on virtually any news event. It’s also easier than ever to buy commodity derivatives like oil futures, where traders gamble on what the price of oil will be in the future.
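Mechanically, "betting that oil prices will fall" usually means taking a short futures position: sell contracts now, buy them back cheaper later. A minimal sketch of the arithmetic (the contract size follows the common 1,000-barrel convention for crude futures, but the prices and position size here are invented for illustration, not the actual trades reported):

```typescript
// Profit and loss of a short futures position: the seller gains when the
// price falls and loses when it rises.
const BARRELS_PER_CONTRACT = 1_000; // standard crude futures lot size

function shortFuturesPnl(
  entryPrice: number, // $/barrel when the contracts were sold
  exitPrice: number, // $/barrel when the position was bought back
  contracts: number,
): number {
  return (entryPrice - exitPrice) * BARRELS_PER_CONTRACT * contracts;
}

// Hypothetical: sell 100 contracts at $78/bbl, buy back at $70/bbl
// after a ceasefire announcement knocks the price down.
console.log(shortFuturesPnl(78, 70, 100)); // 800000
```

The same trade loses symmetrically if the news goes the other way, which is why well-timed positions of this size just before announcements draw regulatory attention.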
Leaders of some US federal agencies and some members of Congress said they want to crack down on suspicious trading taking place across different marketplaces, but it’s unclear how much headway regulators will make.
“Is the problem that we don’t have legislation or that we don’t have enforcement capabilities?” said Joshua Mitts, a law professor at Columbia University. “To have a law that can’t really be enforced effectively given the technological limitations, it’s sort of putting the cart before the horse.”
On the night of 26 February, the day before the US and Israel would carry out strikes on Iran, an unusual influx of about 150 accounts on Polymarket placed bets that the US would strike Iran the next day. A New York Times analysis found the bets totaled $855,000, with 16 accounts pocketing more than $100,000 each.
Soon after, a single anonymous Polymarket user, under an account named “Magamyman”, made over $553,000 after betting that Khamenei would be “removed” from power just moments before he was killed by an Israeli airstrike, according to a complaint filed to the Commodity Futures Trading Commission (CFTC), the federal agency that regulates futures markets, by Public Citizen, a consumer advocacy group. The complaint also cites a crypto-analytics firm that identified six “suspected insiders” who made a total of $1.2m on Polymarket after Khamenei was killed.
The well-timed surge of wagers was seen again on 7 April, when at least 50 Polymarket accounts placed bets that the US and Iran would reach a ceasefire, hours before Trump announced it in a Truth Social post. Earlier, the president had said “a whole civilization will die tonight” if Iran did not open the strait of Hormuz.
But traders weren’t just active on Polymarket: there were similar surges of oil futures trading activity just hours before Trump announced updates to the conflict that would lower oil prices.
On 23 March, traders placed $580m in bets on the oil futures market just 15 minutes before Trump said on social media that the US was having “productive” talks with Iran, according to the Financial Times. The traders made a windfall after Trump’s comments triggered a sell-off in the oil markets that made oil prices plummet.
The same thing happened again on 7 April, this time when traders spent $950m on oil futures, betting that the price of oil would fall just hours before the ceasefire with Iran was announced.
“We can’t say from the outset whether any of these trades were illegal. Any one of them could be lucky, and any one of them could be based on lawful information,” said Andrew Verstein, a law professor at the University of California at Los Angeles. “But many of them bear the hallmarks of suspicious trades that would naturally warrant investigation.”
For those who closely follow trading patterns, the rush of activity that preceded these events seems too large to be chalked up to luck.
“Not only the timing, but the amount of these bets makes it look very likely that someone had insider knowledge … and placed very, very substantial bets on it,” said Craig Holman, a government affairs lobbyist for Public Citizen who filed the group’s complaint to the CFTC.
Holman said he is skeptical about how bold the CFTC will be in its investigations given its current structure under the Trump administration. The commission typically has five bipartisan members that are appointed by the president. Now, the CFTC has only one commissioner: Michael Selig, whom Trump appointed at the end of 2025 and who has positioned himself as friendly toward prediction markets.
Over the last few months, the CFTC has been embroiled in fights with state legislatures that argue regulation of these online betting marketplaces belongs to the states.
Kalshi, Polymarket’s competitor, was temporarily banned in Nevada after the state sued the company for offering contracts in the state without a gambling license. Arizona meanwhile filed criminal charges against the company for allowing people to place bets on elections. In both cases, Kalshi denied any wrongdoing and has argued that the CFTC has exclusive jurisdiction over online prediction markets.
“It’s a wild west phase, when we’re talking about the prediction market industry, and now it’s spilled over into the stock market as well,” Holman said.
Anonymous sources told Reuters and Bloomberg that the CFTC launched an investigation into the oil futures trades that were placed on 23 March and 7 April, though the agency has not publicly announced it is conducting an investigation.
Speaking to Congress this week, Selig said that the agency is prepared to go after those who are suspected of insider trading, warning “we will find you and you will face the full force of the law”, but said that the commission would not issue any new regulations until it had five seated commissioners.
Polymarket did not respond to a request for comment. In a statement, White House spokesperson Davis Ingle said “federal employees are subject to government ethics guidelines that prohibit the use of nonpublic information for financial benefit”.
“Any implication that administration officials are engaged in such activity without evidence is baseless and irresponsible reporting,” Ingle said. “The CFTC will always uphold its duty to monitor fraud, manipulation and illicit activity daily.”
Federal law prohibits government employees, including those working for Congress or the White House, from using non-public information for personal profit.
In late March, a bipartisan group of representatives introduced a bill that would ban members of Congress and senior staff within the federal government from participating in prediction market contracts related to political events or policy decisions.
But experts warn that insider trading law is complicated, and the new technology that makes it easier to place bets online leaves a complicated paper trail that can be hard to follow.
Historically, insider trading takes place when a person uses exclusive information about a company to buy or sell stocks right before information becomes public. These types of illegal trades are regulated by the Securities and Exchange Commission (SEC), which regulates the stock exchanges.
Insider futures trading could be seen as a subset of this typical insider trading, but the territory is new.
“The trick is that there are essentially no clean cases of people getting in trouble for commodity futures insider trading,” Verstein said. “The law there is just not well-developed.”
In a paper published last month, Mitts, the Columbia law professor, and other researchers screened more than 200,000 “suspicious wallet-market pairs” from February 2024 to February 2026 and found that traders in this group achieved a nearly 70% win rate, making $143m in well-timed bets tied to everything from the capture of former Venezuelan leader Nicolás Maduro to Taylor Swift’s engagement to Travis Kelce. The paper notes that informed traders face fewer legal constraints by trading on platforms like Polymarket or Kalshi because these markets still operate in a legal gray area.
“The challenge here is that this trading is occurring through the blockchain or other anonymized means, so it is going to be quite difficult for a regulator, enforcement authority or prosecutor to determine the identity of the trader,” Mitts said. “They would also have to prove the trader traded on the basis of information that had been wrongly misappropriated.”
But the stakes are high. Insider trading involving classified military information can lead to distrust of both markets and governments.
“Unlike corporate insider trading, there’s a lot of ways for the government to make itself be correct. You can just make the war that would occur, and that’s concerning because then the real economy is being distorted,” Verstein said. “Real decisions, including perhaps financial decisions, are being distorted by financial bets.”
...
Read the original on www.theguardian.com »
I tried Claude Design yesterday and I have a theory for how this whole thing shakes out.
As product teams scaled and design needed to justify itself inside engineering orgs, it was pushed toward systematization — and Figma invented its own primitives to make that work: components, styles, variables, props, and so on. Some concepts are borrowed from programming, some aren’t, and the whole thing doesn’t neatly map onto anything. Guidance evolves, migrations pile up, and if you want to automate any of it you’re stuck with a handful of shoddy plugins. The beast is hairy enough that entire design roles now specialize in wrangling the system itself.
There’s always been a tense push-pull between Figma and code over what the source of truth should be. Figma won over Sketch partially by staking its claim there — their tooling would be canonical.
That victory had a hidden cost. By virtue of having a locked-down, largely undocumented format that’s painful to work with programmatically, Figma accidentally excluded themselves from the training data that would have made them relevant in the agentic era. LLMs were trained on code, not Figma primitives, so models never learned them. As code becomes easier for designers to write and agents keep improving, the source of truth will naturally migrate back to code. And all the baroque infrastructure Figma had to introduce over the past decade will look nuts by comparison. Why fuss around in a lossy approximation of the thing when you can work directly in the medium where it will actually live? If we want to make pottery, why are we painting watercolors of the pot instead of just throwing the clay?
At work, we’ve spent quite a bit of time porting design changes made directly in code back to Figma, and it is not fun. I can’t share that file, but for a fair comparison, this is Figma’s own design system file for their product. I have to assume it was built by the most competent design system team you can find. And yet…
These are Figma’s own files. Built by their own team. This is the gold standard.
Imagine debugging a color that looks wrong. You check the component. The component uses a variable. The variable is aliased to another variable. That variable references a mode. The mode is overridden at the instance level. The instance lives inside a nested component with a library swap applied. At this point, you’re either considering picking up code or moving to the countryside and becoming a sheep farmer because one more minute of this will make you lose your goddamn mind.
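In code, by contrast, the same indirection is at least greppable and steppable. A hypothetical sketch of token aliasing as plain data (the token names and resolution scheme are invented for illustration; this is not Figma's data model or any real token spec):

```typescript
// A design token is either a concrete value or an alias to another token.
type Token = { value: string } | { alias: string };

// Hypothetical token table: a three-deep alias chain, like the
// Figma variable chains described above, but inspectable in one screen.
const tokens: Record<string, Token> = {
  "button/bg": { alias: "color/primary" },
  "color/primary": { alias: "brand/blue-500" },
  "brand/blue-500": { value: "#3b82f6" },
};

// Follow aliases until a concrete value appears, guarding against cycles.
function resolve(name: string, seen: Set<string> = new Set()): string {
  if (seen.has(name)) throw new Error(`alias cycle at ${name}`);
  seen.add(name);
  const token = tokens[name];
  if (token === undefined) throw new Error(`unknown token ${name}`);
  return "value" in token ? token.value : resolve(token.alias, seen);
}

console.log(resolve("button/bg")); // "#3b82f6"
```

Debugging the wrong color becomes reading fifteen lines, or dropping a breakpoint in `resolve`, instead of clicking through nested component overrides.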
So as the source of truth shifts back to code, Figma is left in an odd spot: holding a largely manual, pre-agentic system that nobody in their right mind would design from scratch today.
I think design tooling forks into two distinct shapes from here — and there’s almost a clock resetting between Figma and every other tool competing to answer the same question they answered in 2016: who can help me, a designer, get my ideas out fastest?
Spoiler: it’s not Figma Make. Figma Make feels like it primarily benefits people who have already drunk the Kool-Aid — it reads from Figma styles, component libraries, and proprietary props (or, as I like to call them, Prop Props), and it’s the only tool in this new landscape still pretending the design file is canonical. It’s the tool for people who want to (or have no choice but to) stay inside the system.
Claude Design is the first of those two tools, and takes the opposite bet. There’s an Arts and Crafts principle called “truth to materials” — the idea that a thing should be honest about what it is and how it’s made, rather than masquerading as something else. Figma ended up being the opposite of this: a set of extremely rigid schemas with a free-form “just vibes, man” costume over the top. Like a Type-A personality physically incapable of relaxing, forced to perform chill while internally screaming that your frames aren’t nested and your tokens are detached and nothing is on the grid. Claude Design, for all its roughness, is at least honest about what it is: HTML and JS all the way down.
And it has a massive structural advantage: its sibling is Claude Code. Eventually, I can see Claude Design just dumping things directly into Claude Code and vice versa. Claude Design’s onboarding already lets you import your repos. The feedback loop between design and implementation — which has been a source of friction since the beginning of time — becomes a single conversation.
The other tool that emerges from this moment will have no expectation of code at all. It’ll be a pure exploration environment — somewhere to drop rectangles, stack layer styles, fuss with blend modes and gradients, and go completely nuts, unconstrained by systems or prompting conventions. Maybe it’s an iPad app with Pencil support where you just quickly sketch a bunch of rectangles. 37signals could do something really funny right now. Or maybe it goes in the opposite direction — something more like Photoshop that goes all-in on high-fidelity compositing and lets our imaginations run wild, now that we’re no longer beholden to the ceiling of what you can do with CSS effects. Doesn’t it seem kinda weird how for 90% of its life, Figma’s only layer effect was a drop shadow or a blur?
Figma’s Sketch moment is rapidly approaching. And if you said that sentence to a Victorian child, they would probably have a stroke.
The following are messages meant only for the teams behind Sketch and Figma. If neither applies to you, you can skedaddle.
To Figma: I can see a world where this post does numbers in the Figma internal Slack. If that’s the case and you’re reading this from Figma: this wouldn’t have happened if you hired me last year when I was interviewing. Your loss, big dawg.
To Sketch: GET YOUR HEADS OUTTA YOUR ASSES AND GIVE EM HELL. ADD PARTICLE EFFECTS. ADD DEBOSSING EFFECTS. MESH TRANSFORMS. FUCK IT, ADD METAL SHADERS. GO NUTS. STOP COASTING OFF OF BEING MAC NATIVE. QUIT DRINKING COCOA AND GET THIRSTY FOR BLOOD.
To mom: Sorry for cursing.
@jonnyburch on Twitter shared a link to their blog post with similar thoughts; it’s quite good if you wanna go deeper.
...
Read the original on samhenri.gold »
Given a set of objects, there can be numerous criteria, based on which to order them (depending on the objects themselves) — size, weight, age, alphabetical order etc.
However, currently we are not interested in the criteria that we can use to order objects, but in the nature of the relationships that define order. Of which there can be several types as well.
Mathematically, the order as a construct is represented (much like a monoid) by two components.
An order is a set of elements, together with a binary relation between the elements of the set, which obeys certain laws.
We denote the elements of our set, as usual, like this.
And the binary relation is a relation between two elements, which is often denoted with an arrow.
As for the laws, they are different depending on the type of order.
Let’s start with an example — the most straightforward type of order that you think of is linear order i.e. one in which every object has its place depending on every other object. In this case the ordering criteria is completely deterministic and leaves no room for ambiguity in terms of which element comes before which. For example, order of colors, sorted by the length of their light-waves (or by how they appear in the rainbow).
Using set theory, we can represent this order, as well as any other order, as a set of pairs of the order’s underlying set with itself (a subset of the product set).
And in programming, orders are defined by providing a function which, given two objects, tells us which one of them is “bigger” (comes first) and which one is “smaller”. It isn’t hard to see that this function defines a set of pairs (we are given a pair and we have to say whether or not it belongs to the set).
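As a quick illustration (my own, not from the text), here is the “bigger or equal to” relation on a three-element set, written both as a comparison function and as the set of pairs it defines:

```python
# The "bigger or equal to" relation, first as a function...
elements = [1, 2, 3]

def leq(a, b):
    return a <= b  # given two objects, says whether the first comes before the second

# ...and then as the set of pairs it defines (a subset of the product set)
pairs = {(a, b) for a in elements for b in elements if leq(a, b)}
print(sorted(pairs))  # [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```

Going from the function to the set of pairs is exactly the construction described above: we ask the function about every pair and collect the ones it accepts.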
However (this is where it gets interesting), not all such functions (and not all sets of pairs) define orders. For such a function to really define an order, i.e. to have the same output every time, independent of how the objects were shuffled initially, it has to obey several rules.
Incidentally, (or rather not incidentally at all), these rules are nearly equivalent to the mathematical laws that define the criteria of the order relationship i.e. those are the rules that define which element can point to which.
A linear order is a set of elements, together with a binary relation between the elements of the set, which obeys the laws of reflexivity, transitivity, antisymmetry, totality.
Let’s check what they are.
Let’s get the boring law out of the way: each object has to be bigger than or equal to itself, or $a ≤ a$ for all $a$ (the relationship between elements in an order is commonly denoted as $≤$ in formulas, but it can also be represented with an arrow from the first object to the second).
This law only exists to cover the “base case”: we can formulate it the opposite way too and say that each object should not have the relationship to itself, in which case we would have a relation that resembles bigger than, as opposed to bigger or equal to, and a slightly different type of order, sometimes called a strict order.
The second law is maybe the least obvious (but probably the most essential): it states that if object $a$ is bigger than object $b$, it is automatically bigger than all objects that are smaller than $b$; formally, $a ≤ b \land b ≤ c \to a ≤ c$.
This is the law that to a large extent defines what an order is: if I am better at playing soccer than my grandmother, then I would also be better at it than my grandmother’s friend, whom she beats, otherwise I wouldn’t really be better than her.
The third law is called antisymmetry. It states that the function that defines the order should not give contradictory results (or in other words you have $x ≤ y$ and $y ≤ x$ only if $x = y$).
It also means that no ties are permitted — either I am better than my grandmother at soccer or she is better at it than me.
The last law is called totality (or connexity) and it mandates that all elements that belong to the order should be comparable ($a ≤ b \lor b ≤ a$). That is, for any two elements, one would always be “bigger” than the other.
By the way, the law of totality makes the reflexivity law redundant, as reflexivity is just a special case of totality when $a$ and $b$ are one and the same object, but I still want to present it for reasons that will become apparent soon.
Actually, here are the reasons: the law of totality can be removed. Orders that don’t follow the totality law are called partial orders (and linear orders are also called total orders).
Task 1: Previously, we covered a relation that is pretty similar to this. Do you remember it? What is the difference?
Task 2: Think about some orders that you know about and figure out whether they are partial or total.
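To make the four laws concrete, here is a small law-checker (the helper names are my own) that tests a finite relation, given as a set of pairs, against reflexivity, transitivity, antisymmetry and totality. Run it on the usual “less than or equal to” relation and on divisibility, and the difference between total and partial orders falls out:

```python
def check_laws(elements, rel):
    """Check the four order laws for a relation given as a set of pairs."""
    reflexive = all((a, a) in rel for a in elements)
    transitive = all((a, c) in rel
                     for (a, b) in rel
                     for (b2, c) in rel if b == b2)
    antisymmetric = all(a == b for (a, b) in rel if (b, a) in rel)
    total = all((a, b) in rel or (b, a) in rel
                for a in elements for b in elements)
    return reflexive, transitive, antisymmetric, total

nums = {1, 2, 3}
leq = {(a, b) for a in nums for b in nums if a <= b}      # "less than or equal to"
div = {(a, b) for a in nums for b in nums if b % a == 0}  # "a divides b"

print(check_laws(nums, leq))  # (True, True, True, True): a linear (total) order
print(check_laws(nums, div))  # (True, True, True, False): a partial order
```

Divisibility fails only totality (2 and 3 are incomparable, since neither divides the other), which is exactly what makes it a partial rather than a total order.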
Partial orders are actually much more interesting than linear/total orders. But before we dive into them, let’s say a few things about numbers.
Natural numbers form a linear order under the operation bigger or equal to (the symbol of which we have been using in our formulas.)
In many ways, natural numbers are the quintessential order — every finite order of objects is isomorphic to a subset of the order of numbers, as we can map the first element of any order to the number $1$, the second one to the number $2$ etc (and we can do the opposite operation as well).
If we think about it, this isomorphism is actually closer to the everyday notion of a linear order, than the one defined by the laws — when most people think of order, they aren’t thinking of a transitive, antisymmetric and total relation, but are rather thinking about criteria based on which they can decide which object comes first, which comes second etc. So it’s important to notice that the two notions are equivalent.
From the fact that any finite linear order is isomorphic to a subset of the natural numbers, it also follows that all linear orders of the same magnitude are isomorphic to one another.
So, the linear order is simple, but it is also (and I think that this isomorphism proves it) the most boring order ever, especially when looked at from a category-theoretic viewpoint — all finite linear orders (and most infinite ones) are just isomorphic to the natural numbers and so all of their diagrams look the same way.
However, this is not the case with partial orders that we will look into next.
The law of totality does not look as “set in stone” as the rest of the laws, i.e. we can probably think of some situations in which it does not apply. For example, if we aim to order all people based on soccer skills, there are many ways in which we can rank a person compared to their friends, their friends’ friends etc., but there isn’t a way to order groups of people who never played with one another.
Remove the law of totality from the laws of linear orders and we get a partial order (also a partially-ordered set, or poset).
A partial order is a set of elements, together with a binary relation between the elements of the set, which obeys the laws of reflexivity, transitivity and antisymmetry.
Every linear order is also a partial order (just as a group is still a monoid), but not the other way around.
We can even create an order of orders, based on which is more general.
Partial orders are also related to the concept of equivalence relations that we covered in chapter 1, except that the symmetry law is replaced with antisymmetry.
If we revisit the example of the soccer players rank list, we can see that the first version that includes just myself, my grandmother and her friend is a linear order.
However, including this other person whom none of us played yet, makes the hierarchy non-linear i.e. a partial order.
This is the main difference between partial and total orders — partial orders cannot provide us with a definite answer of the question who is better than who. But sometimes this is what we need — in sports, as well as in other domains, there isn’t always an appropriate way to rate elements linearly.
Earlier, we said that all linear orders can be represented by the same chain-like diagram. We can reverse this statement and say that all diagrams that look different from that chain-like diagram represent partial orders.
An example of this is a partial order that contains a bunch of linearly-ordered subsets, e.g. in our soccer example we can have separate groups of friends who play together and are ranked with each other, but not with anyone from other groups.
The different linear orders that make up the partial order are called chains. There are two chains in this diagram $m \to g \to f$ and $d \to o$.
The chains in an order don’t have to be completely disconnected from each other for it to be partial. They can be connected, as long as the connections are not one-to-one, i.e. ones in which the last element of one chain is connected to the first element of the other (this would effectively unite them into one chain).
The above set is not linearly ordered: although we know that $d ≤ g$ and that $f ≤ g$, the relationship between $d$ and $f$ is not known; either one could be the bigger of the two.
Although partial orders don’t give us a definitive answer to “Who is better than who?”, some of them still can give us an answer to the more important question (in sports, as well as in other domains), namely “Who is number one?” i.e. who is the champion, the player who is better than anyone else. Or, more generally, the element that is bigger than all other elements.
The greatest element of an order is an element $a$ such that we have $x ≤ a$ for any other element $x$. Some (not all) partial orders have such an element: in our last diagram, $m$ is the greatest element; in this diagram, the green element is the greatest one.
Sometimes we have more than one element that is not smaller than any other element (i.e. several maximal elements); in this case none of them is the greatest.
In addition to the greatest element, a partial order may also have a least (smallest) element, which is defined in the same way.
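The definitions above translate directly into code. Here is a sketch (helper names mine) that finds the greatest and least elements of a finite order given as a set of pairs, returning None when no such element exists:

```python
def greatest(elements, rel):
    """The element a such that x <= a for every x, or None if there isn't one."""
    for a in elements:
        if all((x, a) in rel for x in elements):
            return a
    return None

def least(elements, rel):
    """The element a such that a <= x for every x, or None if there isn't one."""
    for a in elements:
        if all((a, x) in rel for x in elements):
            return a
    return None

nums = {1, 2, 3, 4, 6, 12}
div = {(a, b) for a in nums for b in nums if b % a == 0}  # "a divides b"
print(greatest(nums, div), least(nums, div))  # 12 1
```

In the divisors-of-12 example, 12 is the greatest element (every divisor divides it) and 1 is the least (it divides everything).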
The least upper bound of two elements that are connected as part of an order is called the join of these elements, e.g. the green element is a join of the other two.
The join of $a$ and $b$ is the smallest element $c$ that is bigger than both of them. Formally:
The join of objects $A$ and $B$ is an object $G$, such that:
It is bigger than both of these objects, so $A ≤ G$ and $B ≤ G$.
It is smaller than any other object that is bigger than them, so for any other object $P$ such that $A ≤ P$ and $B ≤ P$, we should also have $G ≤ P$.
Given any two elements in which one is bigger than the other (e.g. $a ≤ b$), the join is this bigger element (in this case $b$)
e.g. in a linear order, the join of any two elements is just the bigger element.
Like with the greatest element, if two elements have several upper bounds that are equally big, then none of them is a join (a join must be unique).
If, however, one of those upper bounds is established as smaller than all the rest, it immediately qualifies as the join.
Task 3: Which concept in category theory reminds you of joins?
Given two elements, the biggest element that is smaller than both of them is called the meet of these elements.
The same rules as for the joins apply, but in reverse.
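Here is a minimal sketch (mine) of both operations on the smallest interesting partial order, the “diamond”: bottom below two incomparable elements $a$ and $b$, with top above both.

```python
elements = {"bottom", "a", "b", "top"}
rel = {(x, x) for x in elements} | {          # reflexivity
    ("bottom", "a"), ("bottom", "b"), ("bottom", "top"),
    ("a", "top"), ("b", "top"),
}

def join(a, b):
    """Least upper bound: the smallest element bigger than both a and b."""
    uppers = [c for c in elements if (a, c) in rel and (b, c) in rel]
    least = [c for c in uppers if all((c, u) in rel for u in uppers)]
    return least[0] if least else None  # None: no unique least upper bound

def meet(a, b):
    """Greatest lower bound: the biggest element smaller than both a and b."""
    lowers = [c for c in elements if (c, a) in rel and (c, b) in rel]
    biggest = [c for c in lowers if all((l, c) in rel for l in lowers)]
    return biggest[0] if biggest else None

print(join("a", "b"))  # top
print(meet("a", "b"))  # bottom
```

Note that `meet` is literally `join` with every pair flipped, which is the “same rules, but in reverse” point made above.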
The diagrams that we use in this section are called “Hasse diagrams” and they work much like our usual diagrams, however they have an additional rule that is followed — “bigger” elements are always positioned above smaller ones.
In terms of arrows, the rule means that if you add an arrow to a point, the point to which the arrow points must always be above the one from which it points.
This arrangement allows us to compare any two points by just seeing which one is above the other, e.g. we can determine the join of two elements by identifying the elements that both of them connect to and seeing which of those is lowest.
We all know many examples of total orders (any form of chart or ranking is a total order), but there are probably not as many obvious examples of partial orders that we can think of off the top of our heads. So let’s see some. This will give us some context, and will help us understand what joins are.
To stay true to our form, let’s revisit our color-mixing monoid and create a color-mixing partial order in which all colors point to colors that contain them.
If you go through it, you will notice that the join of any two colors is the color that they make up when mixed. Nice, right?
The partial order of numbers by division
We saw that when we order numbers by “bigger or equal to”, they form a linear order. But numbers can also form a partial order, for example they form a partial order if we order them by which divides which, i.e. if $a$ divides $b$, then $a$ is before $b$ e.g. because $2 \times 5 = 10$, $2$ and $5$ come before $10$ (but $3$, for example, does not come before $10$.)
And it so happens (actually for very good reason) that the join operation again corresponds to an operation that is relevant in the context of the objects — the join of two numbers in this partial order is their least common multiple.
And the meet (the opposite of join) of two numbers is their greatest common divisor.
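We can brute-force a check of these two claims (the check is my own): take some pairs of numbers, enumerate their common multiples and common divisors within a finite range, and compare against Python’s built-in `lcm` and `gcd`.

```python
import math

universe = range(1, 101)
for a, b in [(4, 6), (3, 10), (12, 18)]:
    common_multiples = [c for c in universe if c % a == 0 and c % b == 0]
    common_divisors = [c for c in universe if a % c == 0 and b % c == 0]
    # Under divisibility, the lcm divides every common multiple, so it is
    # both the join and the numerically smallest common multiple.
    assert min(common_multiples) == math.lcm(a, b)
    # Symmetrically, the gcd is the greatest lower bound.
    assert max(common_divisors) == math.gcd(a, b)
print("join = lcm, meet = gcd")
```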
Given a collection of sets containing a combination of a given set of elements…
…we can define what is called the inclusion order of those sets.
The inclusion order of sets is a binary relation that we can use to order a collection of sets (usually sets that contain some common elements) in which $a$ comes before $b$ if $a$ includes $b$, or in other words if $b$ is a subset of $a$.
In this case the join operation of two sets is their union, and the meet operation is their set intersection.
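These two claims map directly onto Python’s set operations, which gives a quick sanity check:

```python
# In the inclusion order, join is set union and meet is set intersection.
a = frozenset({"red", "blue"})
b = frozenset({"blue", "yellow"})
print(sorted(a | b))  # ['blue', 'red', 'yellow'], the union (join)
print(sorted(a & b))  # ['blue'], the intersection (meet)
```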
This diagram might remind you of something — if we take the colors that are contained in each set and mix them into one color, we get the color-blending partial order that we saw earlier.
The order example with the number dividers is also isomorphic to an inclusion order, namely the inclusion order of all possible sets of prime numbers, including repeating ones (or alternatively the set of all prime powers). This is confirmed by the fundamental theorem of arithmetic, which states that every number can be written as a product of primes in exactly one way.
So far, we saw two different partial orders, one based on color mixing, and one based on number division, that can be represented by the inclusion orders of all possible combinations of sets of some basic elements (the primary colors in the first case, and the prime numbers (or prime powers) in the second one). Many other partial orders can be defined in this way. Which ones exactly is a question that is answered by an amazing result called Birkhoff’s representation theorem. They are the finite partial orders that meet the following two criteria:
All elements have joins and meets.
Those meet and join operations distribute over one another; that is, if we denote join as $∨$ and meet as $∧$, then $x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)$.
The partial orders that meet the first criterion are called lattices. The ones that also meet the second are called distributive lattices. Let’s write that down:
A partial order in which every two elements have a join and a meet is called a lattice. A lattice whose meet and join operations distribute over one another is called a distributive lattice.
And the “prime” elements which we use to construct the inclusion order are the elements that are not the join of any other elements. They are also called join-irreducible elements.
So we may phrase the theorem like this:
Each distributive lattice is isomorphic to an inclusion order of its join-irreducible elements.
By the way, the partial orders that are not distributive lattices are also isomorphic to inclusion orders, it is just that they are isomorphic to inclusion orders that do not contain all possible combinations of elements.
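As a small worked example (my own), we can compute the join-irreducible elements of the lattice of divisors of 12 under divisibility, where the join operation is lcm:

```python
import math

divisors = [d for d in range(1, 13) if 12 % d == 0]  # [1, 2, 3, 4, 6, 12]

def is_join_irreducible(x):
    if x == 1:  # the bottom element is excluded by convention
        return False
    smaller = [d for d in divisors if x % d == 0 and d != x]
    # x is join-irreducible if no two strictly smaller divisors join to it
    return all(math.lcm(a, b) != x for a in smaller for b in smaller)

print([d for d in divisors if is_join_irreducible(d)])  # [2, 3, 4]
```

The result is exactly the prime powers dividing 12 (6 = lcm(2, 3) and 12 = lcm(4, 3) are reducible), matching the earlier observation that the divisibility order is the inclusion order of prime powers.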
We will now talk more about lattices (the structures that Birkhoff’s theorem is concerned with). Lattices are partial orders in which every two elements have a join and a meet. So every lattice is also a partial order, but not every partial order is a lattice (we will see even more members of this hierarchy).
Most partial orders that are created based on some sort of rule are distributive lattices. For example, the partial orders from the previous section are distributive lattices when they are drawn in full, as with the color-mixing order.
Notice that we added the black ball at the top and the white one at the bottom. We did that because otherwise the top three elements wouldn’t have a join element, and the bottom three wouldn’t have a meet.
Our color-mixing lattice has a greatest element (the black ball) and a least element (the white one). Lattices that have both a least and a greatest element are called bounded lattices. It isn’t hard to see that all finite lattices are also bounded.
Task 4: Prove that all finite lattices are bounded.
We mentioned order isomorphisms several times already so this is about time to elaborate on what they are.
Given two orders (we will use the partial order of numbers by division and the prime inclusion order as examples), an isomorphism between them is comprised of the following two functions:
One function from the prime inclusion order, to the number order (which in this case is just the multiplication of all the elements in the set)
One function from the number order to the prime inclusion order (which is an operation called prime factorization of a number, consisting of finding the set of prime numbers that result in that number when multiplied with one another).
An order isomorphism is essentially an isomorphism between the orders’ underlying sets (an invertible function). However, besides their underlying sets, orders also have the arrows that connect them, so there is one more condition: in order for an invertible function to constitute an order isomorphism, it has to respect those arrows.
An isomorphism between two orders is an invertible function between their underlying sets, such that applying this function (let’s call it $F$) to any two elements that have a certain order in one set (let’s call them $a$ and $b$) should result in two elements that have a corresponding order in the other set (i.e. $a ≤ b$ if and only if $F(a) ≤ F(b)$).
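The division/inclusion isomorphism described above can be sketched in a few lines (helper names are mine): prime factorization is one direction, multiplying the factors back is the other, and the arrow-respecting condition becomes “$a$ divides $b$ if and only if the factors of $a$ are included in the factors of $b$”.

```python
from collections import Counter
import math

def factorize(n):
    """One direction: map a number to its multiset of prime factors."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def multiply(factors):
    """The inverse direction: multiply the primes back together."""
    return math.prod(p ** k for p, k in factors.items())

def included(f1, f2):
    """Multiset inclusion: every prime of f1 appears at least as often in f2."""
    return all(f2[p] >= k for p, k in f1.items())

# The function is invertible...
assert multiply(factorize(60)) == 60
# ...and it respects the arrows: a divides b iff factorize(a) is included in factorize(b)
print(60 % 12 == 0, included(factorize(12), factorize(60)))  # True True
```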
In the previous section, we saw how removing the law of totality from the laws of (linear) order produces a different (and somewhat more interesting) structure, called partial order. Now let’s see what will happen if we remove another one of the laws, namely the antisymmetry law.
The antisymmetry law mandated that you cannot have two distinct objects each of which is smaller than the other (or that if $a ≤ b$ and $b ≤ a$, then $a = b$).
...
Read the original on abuseofnotation.github.io »
Computer chips that cram billions of electronic devices into a few square inches have powered the digital economy and transformed the world. Scientists may be on the cusp of launching a similar technological revolution — this time using light.
In a significant advance toward that goal, National Institute of Standards and Technology (NIST) scientists and collaborators have pioneered a way to make integrated circuits for light by depositing complex patterns of specialized materials onto silicon wafers. These so-called photonics chips use optical devices such as lasers, waveguides, filters and switches to shuttle light around and process information. The new advance could provide a big boost for emerging technologies such as artificial intelligence, quantum computers and optical atomic clocks.
Making circuitry for light as powerful and ubiquitous as circuitry for electrons is one of today’s technological frontiers, says Scott Papp, a NIST physicist whose group led the research, published this week in Nature. “We’re learning to make complex circuits with many functions, cutting across many application areas.”
When it comes to information transfer and processing, light can do things that electricity can’t. Photons — particles of light — are far zippier than electrons at working their way through circuits.
Laser light is also essential for controlling powerful, emerging quantum technologies such as optical atomic clocks and quantum computers.
But several hurdles remain before integrated photonics can truly hit its stride. One involves lasers. High-quality, compact and efficient lasers exist in only a few wavelengths, or colors, of light. For example, semiconductor lasers are very good at generating infrared light with a wavelength of 980 nanometers, or billionths of a meter — a color just outside the range of human vision.
Emerging technologies such as optical atomic clocks and quantum computers need laser light in many other colors as well. The lasers that produce those colors are big, costly and power-hungry, effectively confining these quantum technologies to a handful of special-purpose labs.
By integrating lasers into circuits on chips, scientists hope to help quantum technologies become cheaper and more portable, so they can start to fulfill their vast promise.
The new NIST photonics chip is a bit like a layer cake. NIST physicists Papp and Grant Brodnik, along with colleagues, started with a standard wafer of silicon coated with silicon dioxide (glass) and lithium niobate, a so-called nonlinear material that can change the color of light coming into it.
The researchers then added pieces of metal to electrically control how the circuits convert one color of light to others. The scientists also created other metal-lithium niobate interfaces that allowed them to rapidly turn light on and off within the circuits — a crucial ability for data processing and high-speed routing.
The icing on the cake, so to speak, was a second nonlinear material called tantalum pentoxide, or tantala. Tantala can transform light in ways that feel like magic, taking in a single laser color and putting out the full rainbow of visible light colors plus a wide range of infrared wavelengths. Papp and colleagues have spent years developing techniques to fabricate circuits out of tantala without heating it up, allowing the material to be deposited onto other materials without damaging them.
By patterning the different materials on top of each other in a three-dimensional stack, the researchers produced a single chip that efficiently routes light between layers. That allowed them to merge the light-manipulating wizardry of tantala with the controllability of lithium niobate. The new technique “allows seamless integration,” says Brodnik. “The real power is that tantala can be added to existing circuitry.”
Ultimately, the researchers were able to fit roughly 50 fingernail-sized chips containing 10,000 photonic circuits, each outputting a unique color, onto a wafer roughly the size of a beer coaster. “We can create all these different colors, just by designing circuits,” says Papp.
Quantum technologies such as clocks and computers could be among the biggest beneficiaries of integrated photonics. These devices often use arrays of atoms to store and process information. For each type of atom, physicists need lasers tailored to the atom’s internal quantum energy levels. For example, rubidium atoms, commonly used in quantum computers and clocks, respond to red light with a wavelength of 780 nanometers. Strontium atoms, another popular choice, “see” blue light at 461 nanometers. Shine other colors on the atoms and nothing happens.
The bulky, costly and complicated lasers needed to produce these bespoke colors have been a major hindrance to getting quantum computers and optical clocks out of the lab and into the field, where they could have big impacts. Cheap, low-power, portable optical clocks, for example, could help predict volcanic eruptions and earthquakes, offer an alternative to GPS for positioning and navigation, and help scientists investigate scientific mysteries such as the nature of dark matter. Quantum computers could offer new ways to study the physics and chemistry of drugs and materials.
Integrated photonic circuits aren’t just for quantum. Papp believes NIST’s photonics chips could help efficiently shuttle signals between the specialized chips used by tech firms, potentially making AI-based tools more powerful and efficient. Tech companies are also interested in using photonics to improve virtual reality displays.
While NIST’s chips aren’t yet ready for mass production, the technique used to create them provides a path forward, Papp and Brodnik say. The NIST scientists collaborated with experts at Octave Photonics, a Louisville, Colorado-based startup company founded by former NIST researchers that’s now working to scale up the technology.
“When you see the chip glowing in the lab, taking in invisible light and making all this visible light in one integrated chip — it’s obvious how many potential applications there could be,” says Papp.
...
Read the original on www.nist.gov »