10 interesting stories served every morning and every evening.
Written by me, proof-read by an LLM.
Details at end.
In one of my talks on assembly, I show a list of the 20 most-executed instructions on an average x86 Linux desktop. All the usual culprits are there: mov, add, lea, sub, jmp, call and so on. But the surprise interloper is xor, “eXclusive OR”. In my 6502 hacking days, the presence of an exclusive OR was a sure-fire indicator that you’d found either the encryption part of the code or some kind of sprite routine. It’s surprising, then, that a Linux machine just minding its own business would be executing so many.
That is, until you remember that compilers love to emit a xor when setting a register to zero:
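A minimal sketch (the post’s exact example may differ) looks like this, with the optimised output shown as comments:

int zero(void) {
    return 0;   // the return value must end up in eax
}

// GCC -O2 for x86-64 emits, roughly:
//     xor  eax, eax
//     ret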
We know that exclusive-OR-ing anything with itself generates zero, but why does the compiler emit this sequence? Is it just showing off?
In the example above, I’ve compiled with -O2 and enabled Compiler Explorer’s “Compile to binary object” so you can view the machine code that the CPU sees, specifically:
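For reference, the xor zeroing encodes in just two bytes:

31 c0                 xor    eax, eax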
If you change GCC’s optimisation level down to -O1 you’ll see:
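Something along these lines, with its five-byte encoding:

b8 00 00 00 00        mov    eax, 0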
The much clearer, more intention-revealing mov eax, 0 to set the EAX register to zero takes up five bytes, compared to the two of the exclusive OR. By using a slightly more obscure instruction, we save three bytes every time we need to set a register to zero, which is a pretty common operation. Saving bytes makes the program smaller, and makes more efficient use of the instruction cache.
It gets better though! Since this is a very common operation, x86 CPUs spot this “zeroing idiom” early in the pipeline and can specifically optimise around it: the out-of-order tracking system knows that the value of “eax” (or whichever register is being zeroed) does not depend on the previous value of eax, so it can allocate a fresh, dependency-free zeroed slot in the register renamer. And, having done that, it removes the operation from the execution queue - that is, the xor takes zero execution cycles! It’s essentially optimised out by the CPU!
You may wonder why you see xor eax, eax but never xor rax, rax (the 64-bit version), even when returning a long:
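A sketch of such a function (assumed for illustration; the post’s example may differ):

long zero64(void) {
    return 0;
}

// GCC -O2 for x86-64 still emits, roughly:
//     xor  eax, eax   (note: eax, not rax)
//     ret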
In this case, even though rax is needed to hold the full 64-bit long result, by writing to eax, we get a nice effect: Unlike other partial register writes, when writing to an e register like eax, the architecture zeros the top 32 bits for free. So xor eax, eax sets all 64 bits to zero.
Interestingly, when zeroing the “extended” numbered registers (like r8), GCC still uses the d (double-width, i.e. 32-bit) variant:
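For reference, the encoding:

45 31 c0              xor    r8d, r8d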
Note how it’s xor r8d, r8d (the 32-bit variant) even though, with the REX prefix (here 45), it would take the same number of bytes to xor r8, r8 at the full width. This probably makes something easier in the compilers, as Clang does it too.
xor eax, eax saves you code space and execution time! Thanks compilers!
See the video that accompanies this post.
This post is day 1 of Advent of Compiler Optimisations 2025, a 25-day series exploring how compilers transform our code.
This post was written by a human (Matt Godbolt) and reviewed and proof-read by LLMs and humans.
Support Compiler Explorer on Patreon or GitHub, or by buying CE products in the Compiler Explorer Shop.
Matt Godbolt is a C++ developer living in Chicago. He works for Hudson River Trading on super fun but secret things. He is one half of the Two’s Complement podcast. Follow him on Mastodon or Bluesky.
...
Read the original on xania.org »
Cast your mind back to when you learned to drive, ride a bike, speak a foreign language, perform a tracheostomy, or acquire any other skill. Wasn’t confidence the key to your success? Whatever we do in life, confidence is always critical. If you run a business, one of the metrics that are likely to be collected is confidence in your business, as that’s such an important economic indicator. Confidence is every bit as important in computing.
Over the last few weeks I’ve been discovering problems that have been eroding confidence in macOS. From text files that simply won’t show up in Spotlight search, to Clock timers that are blank and don’t function, there’s one common feature: macOS encounters an error or fault, but doesn’t report that to the user, instead just burying it deep in the log.
When you can spare the time, the next step is to contact Apple Support, who seem equally puzzled. You’re eventually advised to reinstall macOS or, in the worst case, to wipe a fairly new Apple silicon Mac and restore it in DFU mode, but have no reason to believe that will stop the problem from recurring. You know that Apple Support doesn’t understand what’s going wrong, and despite the involvement of support engineers, they seem as perplexed as you.
One reason for this is that macOS so seldom reports errors, and when it does, it’s uninformative if not downright misleading. Here’s a small gallery of examples I’ve encountered over the last few years, to bring back unhappy memories.
Maybe you saved an important webpage in Safari 26.1 using its Web Archive format, then a couple of days later discovered you couldn’t open it. There’s no error message, just a blank window, so you try again with the same result. Another site shows the same problem, forcing you to conclude that it’s a bug in Safari. Are you now going to devote your time to obtaining sufficient information to report that to Apple using Feedback? Or to contact Apple Support and pursue its escalation to an engineer who might fortuitously discover the cause?
Silent failures like these are least likely to be reported to Apple. In most cases, we find ourselves a workaround, here to abandon Web Archives and switch to saving webpages as PDF instead. When someone else mentions they too have the same problem, we advise them that Web Archives are broken, and our loss of confidence spreads by contagion.
Honest and understandable error reporting is essential to confidence. It enables us to tackle problems rather than just giving up in frustration, assuming that it’s yet another feature we used to rely on that has succumbed in the rush to get the next version of macOS out of the door.
Eroding confidence is also a problem that the vendors of AI appear to have overlooked, or at least seriously underestimated. It’s all very well using the euphemism of hallucination to play down the severity of errors generated by LLMs. But those can only cause users to lose confidence, no matter how ‘intelligent’ you might think your AI is becoming. Go ask the lawyers who have been caught out by courts for submitting AI fabrications whether they still have full confidence in your product.
...
Read the original on eclecticlight.co »
Ghostty for the web with xterm.js API compatibility — giving you a proper VT100 implementation in the browser.
* Migrate from xterm by changing your import: @xterm/xterm → ghostty-web
* WASM-compiled parser from Ghostty—the same code that runs the native app
Originally created for Mux (a desktop app for isolated, parallel agentic development), but designed to be used anywhere.
Live Demo on an ephemeral VM (thank you to Greg from disco.cloud for hosting).
npx @ghostty-web/demo@next
This starts a local HTTP server with a real shell on http://localhost:8080. Works best on Linux and macOS.
xterm.js is everywhere—VS Code, Hyper, countless web terminals. But it has fundamental issues:
xterm.js reimplements terminal emulation in JavaScript. Every escape sequence, every edge case, every Unicode quirk—all hand-coded. Ghostty’s emulator is the same battle-tested code that runs the native Ghostty app.
npm install ghostty-web
ghostty-web aims to be API-compatible with the xterm.js API.
import { init, Terminal } from 'ghostty-web';

await init();

const term = new Terminal({
  fontSize: 14,
  theme: {
    background: '#1a1b26',
    foreground: '#a9b1d6',
  },
});

// `websocket` is assumed to be an existing WebSocket connected to a shell backend
term.open(document.getElementById('terminal'));
term.onData((data) => websocket.send(data));
websocket.onmessage = (e) => term.write(e.data);
For a comprehensive client-server example, refer to the demo.
ghostty-web builds from Ghostty’s source with a patch to expose additional functionality.
bun run build
Mitchell Hashimoto (author of Ghostty) has been working on libghostty which makes this all possible. The patches are very minimal thanks to the work the Ghostty team has done, and we expect them to get smaller.
This library will eventually consume a native Ghostty WASM distribution once available, and will continue to provide an xterm.js compatible API.
At Coder we’re big fans of Ghostty, so kudos to that team for all the amazing work.
...
Read the original on github.com »
...
Read the original on docs.google.com »
I’ve written about JPEG XL in the past. First, I noted Google’s move to kill the format in Chromium in favor of the homegrown and inferior AVIF. Then, I had a deeper look at the format, and visually compared JPEG XL with AVIF on a handful of images.
The latter post started with a quick support test:
“If you are browsing this page around 2023, chances are that your browser supports AVIF but does not support JPEG XL.”
Well, here we are at the end of 2025, and this very sentence still holds true. Unless you are one of the 17% of users using Safari, or are adventurous enough to use a niche browser like Thorium, LibreWolf or the newer Zen Browser, chances are you see the AVIF banner in green and the JPEG XL image in black/red.
The good news is, this will change soon. In a dramatic turn of events, the Chromium team has reversed its Obsolete tag, and has decided to support the format in Blink (the engine behind Chrome/Chromium/Edge). Given Chrome’s position in the browser market share, I predict the format will become a de facto standard for images in the near future.
I’ve been following JPEG XL since its experimental support in Blink. What started as a promising feature was quickly axed by the team in a bizarre and ridiculous manner. First, they asked the community for feedback on the format. Then, the community responded very positively. And I don’t only mean a couple of guys in their basement. Meta, Intel, Cloudinary, Adobe, ffmpeg, libvips, Krita, and many more. After that came the infamous comment:
Thank you everyone for your comments and feedback regarding JPEG XL. We will be removing the JPEG XL code and flag from Chromium for the following reasons:

* Experimental flags and code should not remain indefinitely
* There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL
* The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default
* By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome
Yes, right, “not enough interest from the entire ecosystem”. Sure.
Anyway, following this comment, a steady stream of messages pointed out how wrong that was, from all the organizations mentioned above and many more. People were noticing in blog posts, videos, and social media interactions.
Strangely, the following few years have been pretty calm for JPEG XL. However, a few notable events did take place. First, the Firefox team showed interest in a JPEG XL Rust decoder, after describing their stance on the matter as “neutral”. They were concerned about the increased attack surface resulting from including the current 100K+ lines C++ libjxl reference decoder, even though most of those lines are testing code. In any case, they kind of requested a “memory-safe” decoder. This seems to have kick-started the Rust implementation, jxl-rs, from Google Research.
To top it off, a couple of weeks ago, the PDF Association announced their intent to adopt JPEG XL as a preferred image format in their PDF specification. The CTO of the PDF Association, Peter Wyatt, expressed their desire to include JPEG XL as the preferred format for HDR content in PDF files.
All of this pressure exerted steadily over time made the Chromium team reconsider the format. They tried to kill it in favor of AVIF, but that hasn’t worked out. Rick Byers, on behalf of Chromium, made a comment in the Blink developers Google group about the team welcoming a performant and memory-safe JPEG XL decoder in Chromium. He stated that the change of stance was in light of the positive signs from the community we have exposed above (Safari support, Firefox updating their position, PDF, etc.). Quickly after that, the Chromium issue state was changed from Obsolete to Assigned.
This is great news for the format, and I believe it will give it the final push for mass adoption. The format is excellent for all kinds of purposes, and I’ll be adopting it pretty much instantly for this and the Gaia Sky website when support is shipped. Some of the features that make it superior to the competition are:
* Lossless re-compression of JPEG images. This means you can re-compress your current JPEG library without losing information and benefit from a ~30% reduction in file size for free. This is a killer feature that no other format has.
* Support for image sizes of up to 1,073,741,823 × 1,073,741,824. You won’t run out of image space anytime soon. AVIF is ridiculous in this aspect, capping at 8,193 × 4,320. WebP goes up to 16K², while the original 1992 JPEG supports 64K².
* Maximum of 32 bits per channel. No other format (except for the defunct JPEG 2000) offers this.
* Maximum of 4,099 channels. Most other formats support 4 or 5, with the exception of JPEG 2000, which supports 16,384.
* JXL supports progressive decoding, which is essential for web delivery, IMO. WebP or HEIC have no such feature. Progressive decoding in AVIF was added a few years back.
For a full codec feature breakdown, see Battle of the Codecs.
JPEG XL is the future of image formats. It checks all the right boxes, and it checks them well. Support in the overwhelmingly most popular browser engine is probably going to be a crucial stepping stone in the format’s path to stardom. I’m happy that the Chromium team reconsidered their inclusion, but I am sad that it took so long and so much pressure from the community to achieve it.
...
Read the original on tonisagrista.com »
The first three dimensions—length, height, and depth—are included on all topographical maps. The “fourth dimension,” or time, is also available on the website of the Swiss Federal Office of Topography (Swisstopo). In the “Journey Through Time,” a timeline displays 175 years of the country’s cartographic history, advancing in increments of 5-10 years. Over the course of two minutes, Switzerland is drawn and redrawn with increasing precision: inky shapes take on hard edges, blues and browns appear after the turn of the century, and in 2016, the letters drop their serifs.
Watching a single place evolve over time reveals small histories and granular inconsistencies. Train stations and airports are built, a gunpowder factory disappears for the length of the Cold War. But on certain maps, in Switzerland’s more remote regions, there is also, curiously, a spider, a man’s face, a naked woman, a hiker, a fish, and a marmot. These barely-perceptible apparitions aren’t mistakes, but rather illustrations hidden by the official cartographers at Swisstopo in defiance of their mandate “to reconstitute reality.” Maps published by Swisstopo undergo a rigorous proofreading process, so to find an illicit drawing means that the cartographer has outsmarted his colleagues.
It also implies that the mapmaker has openly violated his commitment to accuracy, risking professional repercussions on account of an alpine rodent. No cartographer has been fired over these drawings, but then again, most were only discovered once their author had already left. (Many mapmakers timed the publication of their drawing to coincide with their retirement.) Over half of the known illustrations have been removed. The latest, the marmot drawing, was discovered by Swisstopo in 2016 and is likely to be eliminated from the next official map of Switzerland by next year. As the spokesperson for Swisstopo told me, “Creativity has no place on these maps.”
Errors—both accidental and deliberate—are not uncommon in maps (17th-century California as an island, the omission of Seattle in a 1960s AAA map). Military censors have long transformed nuclear bunkers into nondescript warehouses and routinely pixelate satellite images of sensitive sites. Many maps also contain intentional errors to trap would-be copyright violators. The work of recording reality is particularly vulnerable to plagiarism: if a cartographer is suspected of copying another’s work, he can simply claim to be duplicating the real world— ideally, the two should be the same. Mapmakers often rely on fictitious streets, typically no longer than a block, to differentiate their accounts of the truth (Oxygen Street in Edinburgh, for example).
But there is another, less institutional reason to hide something in a map. According to Lorenz Hurni, professor of cartography at ETH Zurich, these illustrations are part inside joke, part coping mechanism. Cartographers are “quite meticulous, really high-precision people,” he says. Their entire professional life is spent at the magnification level of a postage stamp. To sustain this kind of concentration, Hurni suspects that they eventually “look for something to break out of their daily routine.” The satisfaction of these illustrations comes from their transgressive nature— the labor and secrecy required to conceal one of these visual puns.
And some of them enjoy remarkable longevity. The naked woman drawing, for example, remained hidden for almost sixty years in the municipality of Egg, in northern Switzerland. Her relatively understated shape was composed in 1958 from a swath of green countryside and the blue line of a river, her knees bending at the curve in the stream. She remained unnoticed, reclining peacefully, until 2012.
Several of the other drawings came about considerably later. In 1980, a Swisstopo cartographer traced the spider over an arachnid-shaped ice field on the Eiger mountain. It faded out over the course of the decade, retracting its spindly legs in the intermediary editions. Around the same time, another cartographer concealed a freshwater fish in a French nature preserve along the Swiss border. The fish lived in the blue circumference of a marshy lake until 1989 when, according to Swisstopo, “it disappeared from the surface of the lake, diving to the depths.”
It’s unclear how these drawings made it past the institute’s proofreaders in the first place. They may have been inserted only after the maps were approved, when cartographers are asked to apply the proofreaders’ final edits. When the maps were once printed as composite layers of different colors, cartographers could have built the drawings from the interplay of different topographical elements (the naked woman, for example, is composed of a blue line over a green-shaded area). Hurni also speculates that cartographers could have partitioned their illustrations over the corners of four separate map sheets, although no such example has (yet) been found.
Some of these clandestine drawings allude to actual topographical features: near the town of Interlaken, where an outcropping of stones approximates two eyes and a nose, the 1980 edition of the map features an angular cartoon face between the trees. (According to local legend, it’s a monk who was turned to stone as punishment for chasing a young girl off the cliff.) In the late 1990s, the same cartographer drew a hiker in the map’s margins. With boots each about the size of a house, the hiker serves a pragmatic purpose. Like a kind of topographic patch, he covers an area in the Italian Alps where the Swiss apparently lacked the necessary “information and data from the Italian geographical services.”
The marmot, the latest illustration, hides in plain sight in the Swiss Alps. His plump outline was concealed in the delicate relief shading above a glacier, which shielded him from detection for nearly five years. The mountain’s hachures— short, parallel lines that indicate the angle and orientation of a slope— double as his fur. He is mostly indistinguishable from the surrounding rock, except for his face, tail, and paws. He even fits ecologically: as an animal of the ice age, alpine marmots are comfortable at high altitudes, burrowing into frozen rock for their nine months of hibernation. In 2016, Hurni revealed his location to the public on behalf of an unnamed source.
There is a degree of winking tolerance for these drawings, which constitute something of an unofficial national tradition: the spokeswoman for Swisstopo referred me to a 1901 fish hidden in a well-known painting of Lake Lucerne at the National Council palace (probably in honor of the palace’s April 1st inauguration, which some European countries celebrate by attaching “April Fish” to the backs of shirts). Nevertheless, the marmot—along with the face and hiker—will likely be “eliminated” from Switzerland’s next official map (per a decision from the chief of cartography).
Swiss cartographers have a longstanding reputation for topographical rigor. A so-called “Seven Years War of Cartography” was even waged in the 1920s over the scale of the national maps, with the Swiss Alpine Club advocating greater topographical detail for its mountaineering members. Swisstopo is now an industry benchmark for the mountains, from its use of aerial photogrammetry (images taken first by balloons and then small planes) to aerial perspective (that natural haziness that renders distant peaks with less contrast). In 1988, they were commissioned to draw Mount Everest.
Still, the original drawings were never authorized in the first place. Perhaps a meticulous reading of next year’s Swiss maps may reveal some other nationally-celebrated animals in unfrequented bodies of water or alpine meadows. As Juerg Gilgen, a current cartographer at Swisstopo, told me “as a matter of fact, the proof-reader is also just a human being prone to failure. And cartographers are also just human beings trying to fool around.”
...
Read the original on eyeondesign.aiga.org »
Most metros are adding jobs more slowly than normal. Charlotte leads in job growth among major metros, while Austin and Denver fall far short of their historically strong pace.
High-income sectors are contracting, while Education and Healthcare are expanding faster than normal across most metros.
Employment composition matters as much as total growth for local housing market strength. Metros reliant on lower-wage job growth are likely to face softer for-sale demand.
The national labor market is softening, with implications for local housing markets. Most major metros are adding jobs more slowly than normal. We analyzed employment performance by metro and industry, comparing today’s growth to long-term trends since 2010. Red represents job losses, yellow shows slower-than-normal growth, and green represents faster-than-normal growth.
The job market drives housing demand, but the type of jobs created or lost impacts the type of housing. High-income sectors—Information, Professional Services, and Financial Activities—are shrinking across most major metros. Workers in these industries drive for-sale housing demand more than rental demand. Nationally, high-income sector employment remained flat YOY in August, well below its long-term compound annual growth of +1.6%.

The Education and Healthcare sectors account for the bulk of new jobs added in most metros and are growing faster than normal in almost every market. Many of these jobs pay lower wages on average and often generate rental demand more than homebuying activity. Nationally, education and healthcare employment rose +3.3% YOY in August, well above its long-term compound annual growth of +2.1%.
Philadelphia (+1.8% YOY) and New York (+1.7% YOY) show stronger job growth than their historical trends (+1.1% and +1.6%, respectively). However, this improvement reflects recovery from weak post-Great Financial Crisis baselines rather than genuine outperformance. Charlotte (+2.6% YOY) is a standout performer, maintaining robust job growth supported by Professional Services expansion (+4.5% YOY)—a rare bright spot for for-sale demand.

Austin (+0.8% YOY) and Denver (+0.0% YOY) are growing much more slowly than their historically strong employment trends (+3.8% and +2.3%, respectively). Tech and Professional Services jobs are declining in both markets, and even healthcare—which is expanding faster than normal in most metros—shows weak growth here. This reduction in high-paying jobs is weakening demand for both home purchases and rentals.

The Bay Area continues to lose jobs across high-income sectors (-0.4% YOY), driving modest overall employment declines. These job losses have slowed compared to a year ago but remain negative YOY. Despite generating substantial spending and wealth, the AI-driven tech boom hasn’t added meaningful employment to the region.
What this means for your business
Whether you build, invest, or advise in housing markets, these employment shifts will impact your growth opportunities in 2026 and beyond:

Rental operators: Prepare for sustained demand from renters employed in healthcare and education.
Our Metro and Regional Housing research package includes analysis of the latest demand, supply, and affordability fundamentals for each metro and region as well as results from our proprietary surveys. Our consulting team continually evaluates market feasibility, absorption/pricing/product recommendations, and overall investment/expansion strategy in markets nationwide. Combining these two areas of expertise yields qualitative and quantitative insight for more intelligent decision-making.
This package provides a complete picture of housing supply, demand, and affordability through local insight, proprietary surveys, and extensive data analysis. We currently provide an overview of major housing and economic trends across 100 MSAs nationwide.
Our research services enable our clients to gauge housing market conditions and better align their business and strategic investments in the housing industry. We provide a thoughtful and unique holistic approach of both quantitative and qualitative analysis to help clients make informed housing investment decisions.
Our experienced team of consultants helps clients make sound housing investment decisions. We thrive on their success and work with many clients over multiple years and numerous projects.
Connect with me on LinkedIn
John leads JBREC’s Southern California market coverage for the Metro Analysis and Forecast reports, produces the Regional Analysis and Forecast and Homebuilder Analysis and Forecast reports, and assists with coverage of the public homebuilder space.
...
Read the original on jbrec.com »
Experiences with the Matrix protocol, Matrix Synapse server, bridges, and Element mobile apps.
I have been hosting a Matrix server for about five years now, mostly for text chats between a few relatives and close friends, and a bridge to WhatsApp for a few more people. These are my experiences.
I don’t have many thoughts on the protocol itself.
The only thing that I don’t really understand is the decision on data replication. If a user on server A joins a room on server B, recent room data is copied from server B to server A and then kept in sync on both servers. I suppose this reduces the load on the original server at the expense of federation overhead and space on other servers. However, this also creates a situation where anything said across federation cannot be unsaid, which is an ironic situation for a protocol/system that often comes up when talking about privacy.
Synapse is the only choice that supports bridges, which was why I wanted to try Matrix in the first place. And back in 2019-2020 this was the only choice anyway.
As of right now, I run Synapse, PostgreSQL, and coturn directly, without containerization, on a small VPS.
Works fairly reliably, supports bridges, and is more efficient than it was in 2020.
API is well documented, and allows authenticating and sending (unencrypted) messages via simple HTTP calls. At some point in time, I wanted to write a simple shell client to use with SXMO and such.
Does not have an admin panel
There is no admin page or panel. There was a third-party admin site, but it’s an entire site just for making HTTP calls. So I ended up writing my own.
While technically Synapse can work with an SQLite database (which at first seems like an OK choice for a tiny personal server), in practice you really want PostgreSQL from the start.
Initial setup presumes that the server is going to be federated, and there is no good way to turn it off. The best workaround involves a blank whitelist of federated servers.
I don’t know the implications of disabling it.
Message retention policy can be set up server-wide, but also per-room. There are specific lines in the configuration that need to be set to actually enable a service that runs the cleanup.
Synapse keeps the room even after all of the members leave it, including federated rooms. This results in many (sometimes large) rooms without local members orphaned on the server, taking up database space.
Deleting messages (events) with attachments does not delete the attachment (because another message might refer to it?), which means that the sent files continue existing on the server indefinitely. Another privacy implication. A simple “delete all files older than X” script works great until it deletes avatars. So yeah, seems like this is something that should be handled by the Synapse server instead of cobbled-together scripts.
Even after extensive cleanup, the PostgreSQL database might need to be vacuumed to reduce the disk space it takes up.
Even for my small server with only a handful of users, the database takes up a surprising amount of space:
Synapse keeps track of room states in an append-only (!) table named state_groups_state. Deleting a room does not delete the state_groups_state records. So it is never automatically cleaned up, and grows in size infinitely. It is possible to delete many of those records from the database directly, and Element (the company) provides some tool to “compress” those records, but again, something that should be handled by the server.
Deleting user accounts is simply not an option in the API. A server admin can perform a “deactivate” (disable login) and an “erase” (remove related data, which claims to be GDPR-compliant) on user accounts, but the accounts themselves stay on the server forever.

How this is not considered a GDPR violation is a mystery to me. Even on my tiny server, I have users who use their first name as their ID and bridged WhatsApp users that use phone numbers as IDs.
While Matrix-Element ecosystem has been catering towards government and corporate entities for some time, there have been multiple recent announcements about its future.
Specifically, Element (the company) is now providing an all-in-one Element Server Suite (ESS) to replace the current setup, including
It is intended for non-professional use, evaluations, and small to mid-sized deployments (1–100 users).
ESS Community includes 7 components/services, now requires a minimum of 2 CPUs and 2GB of RAM, and runs using… Kubernetes? IMO, this is overkill for a dozen users.
For comparison, Snikket, an all-in-one solution with similar functionality using XMPP, requires a single CPU and 128MB (!) RAM for 10 or so users.
Yes, I have seen the Ansible setup script recommended, but at this point, making setup easier does not address the issue of extra services being required in the first place.
Also, the ESS handles account creation and calls in an entirely different way, more on that later.
Pretty great. Easy to install and set up, works really well, and needs only occasional (semi-yearly or so) updates when WhatsApp changes their web API. Does not support calls.
Same on all platforms
Element exists and looks consistent on Android, iOS, and web, making it easier for regular users and for troubleshooting.
This is silly, but while (official?) bridges support image captions, official Element app does not. The answer in the FAQ? Get a better app. Well, OK.
Image with a caption in SchildiChat Classic (the better app).
Sometimes it can take up to a few minutes to get a message, even between two Android clients using Google Cloud Messaging. Sometimes it is nearly instant. Still unsure of the cause.
One unreliable way to tell that the server is unreachable is the endless loading bar. But even then, it eventually goes away without indicating any errors.
Then, when sending a message, the user receives “Unable to send message”. Frustration ensues.
But I know the app is trying to call the /sync endpoint. Why doesn’t it show any errors when that fails?
IIRC the first thing the app does is ask the user to back up their signing keys and enter the key password, without a simple explanation. Not a great experience for regular users.
Some people reported issues with Element losing its keys or frequently requesting to be re-verified. Thankfully I have not encountered these.
Even if you connect to a self-hosted server, Element Classic could attempt to connect to vector.im integration server and matrix.org key backup server.
Element X is now recommended as the new and better client. It is not.
Somehow, it is slower. Clicking on a conversation takes 0.5-1.0 seconds to load it, compared to almost instant load on Classic.
Perhaps it does work better for accounts with many large rooms, but that is not my case.
Conversations are sorted by… who knows. It is neither recent nor alphabetical.
Element X does not support periodic background sync, so you need to set up ntfy or something similar to use Element X on a de-googled device. Seems like a simple enough fail-safe (even WhatsApp does this), but it was dropped for some reason.
This “sliding sync” option is available only for newer Synapse versions, and only if running with PostgreSQL database (which should already be the case - see above). Probably not an issue unless the user tries to connect Element X to an outdated Synapse.
Calling with Element X requires Element Call (part of ESS). This supports group calls, but… only video calls at the moment.
You also might be asked to tell your contact to install the new app:
I don’t regularly use calls, but some people I would like to invite to my server would want to use them.
A few years ago, I ended up either temporarily enabling unrestricted registration (a terrible idea), or creating my users’ accounts manually, because the “invite” matrix.to link was broken, and registration tokens did not work correctly in mobile apps.
So let’s see how it works now. Keep in mind, I am still on standalone Synapse, not ESS.
I am a user, and I want to register an account on my friend’s server. I see that Element X is now a recommended app, so let’s try that.
Click “Create account” (which is a different style that does not look like a button for some reason).
But I want an account on a different server. Click “Change account provider”.
Now I can search for the server my friend is hosting, and it should appear in the list below the search.
As a server admin: I do not remember if the Synapse server has to enable/keep federation for this to work.
Yes! That is what I want, why is this so verbose?
WTF. So Element X cannot create even the simplest username+password account. That is all I want, I don’t want to sign in with Google, Apple, or any other form of third-party authentication.
I was unable to register an account using Element X, so Element Classic should work better.
What difference does this make? Skip.
The current official app is telling me to use Element X. Just tried that. Click “EDIT” where it says “matrix.org” (which does not say “server”, actually) and enter the server name.
Why not? No explanation. Sure, I’ll use a web client.
Well, fuck me, I guess. Why can’t I just create an account?
As a server admin: Synapse is set to allow registrations via registration tokens, because unrestricted registration is a bad idea. I did not find where the /static/client/register path is set.
IIRC it is possible to register an account by going to a web-hosted Element app, such as app.element.io, which will allow registering an account using a registration token. But then the user has to deal with the headache of cross-verifying their mobile device to the web app (which they might never use).
So now what?
Matrix-Element is growing, building new features, and acquiring large customers (mostly government entities AFAIK). However, the new corporatesque ESS Community is not worth it in my opinion. I don’t need fancy auth, third-party IDs, group video conferencing, or even federation for that matter. But it is clear that Synapse and Element X are severely crippled and are not designed to work without these services.
I will probably switch to Snikket, which is more efficient, has timely notifications, and very smooth onboarding.
...
Read the original on yaky.dev »
I had never heard of Michael Green before his now-infamous essay “Part 1: My Life Is a Lie - How a Broken Benchmark Quietly Broke America” went extremely viral on X.
Go read it. The short version: real poverty is closer to $140,000 than $31,000.
“The U. S. poverty line is calculated as three times the cost of a minimum food diet in 1963, adjusted for inflation.”
The composition of household spending transformed completely. In 2024, food-at-home is no longer 33% of household spending. For most families, it’s 5 to 7 percent.
Housing now consumes 35 to 45 percent. Healthcare takes 15 to 25 percent. Childcare, for families with young children, can eat 20 to 40 percent.
If you keep Orshansky’s logic—if you maintain her principle that poverty could be defined by the inverse of food’s budget share—but update the food share to reflect today’s reality, the multiplier is no longer three.
Which means if you measured income inadequacy today the way Orshansky measured it in 1963, the threshold for a family of four wouldn’t be $31,200.
It would be somewhere between $130,000 and $150,000.
And remember: Orshansky was only trying to define “too little.” She was identifying crisis, not sufficiency. If the crisis threshold—the floor below which families cannot function—is honestly updated to current spending patterns, it lands at $140,000.
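To make the arithmetic explicit (a back-of-the-envelope reconstruction using the figures quoted above): $31,200 ÷ 3 is roughly $10,400 of implied minimum food budget. At a 7 percent food share the multiplier is about 14, giving $10,400 × 14 ≈ $146,000; at 8 percent it is 12.5, giving about $130,000. That is where the $130,000-$150,000 range comes from.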
This article resonated with me because I have had three children born since 2021 - well, technically, my third arrives in a week.
I have spent $30,000, $35,000, and now $40,000 for each child delivered.
That is my full out-of-pocket cash-paid cost as a self-employed entrepreneur who runs a small business. I do not have a corporate daddy to share costs with me. This is totally unsustainable and insane, yet every central bank-worshipping think tank economist who attacked Green had nothing to say when I asked them to justify my socialized cost for the public good of bringing a new tax-payer into this world.
America has a cost of living crisis; it’s not being taken seriously by “serious” economists; and the ongoing failure to address it will lead to political, social, and economic calamity.
The essential theme of Green’s piece is that “participation costs” - the price of admission you pay to simply be in the market, let alone win, have grown out of control. Food and shelter are participation costs for living. Having a $200/mo smartphone is now a participation cost for many things such as getting access to your banking information remotely, medical records, and work / school.
There’s no greater “participation cost” to human civilization than reproduction.
I run Petabridge - we’re a small, specialized software company. I have fewer than 5 employees and I own 100% of the company. Been in business for 11 years. I love what I do. We’re too small for most traditional insurance brokers / group marketplaces but use TriNet, one of the largest Professional Employment Organizations (PEO)s in the United States, to handle payroll / taxes / benefits. I also used them when I ran MarkedUp, my last company before Petabridge.
My wife and I got married in 2020 and she became a full-time homemaker, so I’m the sole breadwinner.
This is what my current health care costs look like per pay period, which is bimonthly.
Remember, I own 100% of the company - so it makes no real difference which side of the ledger the money comes from. I pay the full freight.
Before any of those magic benefits kick in though, there’s the sticky issue of my health insurance deductible:
I have to hit a $14,300 deductible first, which I will absolutely hit next week when my child is delivered (if I haven’t already.)
Thus I’ll spend $39,980 bringing my new daughter into this world in 2025, and there are assuredly things I’ve paid for that are not covered by insurance either (i.e. we paid for some tests and sonograms that aren’t covered at all by our plan) - so the real cost will be $40k+ when it’s all said and done.
Here’s what my insurance premiums look like for 2026:
The deductible is staying the same at $14,300, so now my max spend is $43,496 - an 8.8% increase in total cost over the previous year, but a 13.6% increase in premiums. I’ve had some version of this plan for about 5 years and this price increase has been fairly consistent over time - I think I was paying $1850 a month in premiums back in 2021, which was more than my mortgage.
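(Checking the arithmetic, and assuming “max spend” means annual premiums plus the $14,300 deductible: 2025 premiums come out to roughly $39,980 - $14,300 = $25,680, and 2026 premiums to roughly $43,496 - $14,300 = $29,196, which is roughly the 13.6% premium increase quoted, while the total rises from $39,980 to $43,496, the 8.8%.)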
My actual insurance cost is somewhat higher than the $40,000 I’ve laid out here.
I also pay $1250 per month to TriNet for the privilege of being able to buy their health insurance in the first place - sure, I get some other benefits too, but I’m the only US-based employee currently so this overhead is really 100% me. The only reason I stick with TriNet and don’t replace them with a significantly cheaper payroll processor like QuickBooks Payroll is for access to their health insurance benefits.
So my real participation cost is closer to $55,000 a year - the healthcare market is socializing enormous costs onto me for the public service of siring new taxpayers.
The US health insurance market really only works well for three groups:

1. People with corporate jobs, whose big-company plans spread the cost;
2. Healthy young people who can participate in Obamacare / eHealth Insurance (individual) markets; and
3. Poor people who get either subsidized ACA plans or Medicaid.
I have the misfortune of creating jobs for other people, so option 1 is out.
My wife and I are healthy, but we’re building our family and I have yet to see a marketplace plan that supports child-birth. Maybe the subsidized ones do, but I earn too much money to see those. All of the ones I’ve found through eHealth Insurance or Healthcare.gov never cover it - and I check every year. So options 2 and 3 are out. This leaves me with few options to continue running my company AND grow a family at the same time.
The Affordable Care Act (Obamacare) barred insurers from turning down applicants based on existing pre-conditions; the way insurers get around this for pregnancy and child-birth is not by rejecting pregnant applicants (illegal), but by simply refusing to cover the care those applicants need to survive pregnancy (legal and common.)
I’ve had the same version of this Aetna plan since late 2020 when my wife and I got married and she quit her job. It’s the cheapest PPO I can buy through TriNet that gives us access to our pediatrician and OBGYN. The other PPOs are significantly more expensive and usually have lower deductibles. The “cheaper” alternatives offered through TriNet are HMOs or EPOs that have some issues with them: co-insurance or none of our medical providers being in their network.
If you’re familiar with how healthcare charge masters work, then you’ll understand why co-insurance is a bad bet when you know for certain you’re going to need an expensive medical intervention (like child-birth.)
Earlier this month our 4 year old had a 15 minute procedure to treat bladder reflux - the “billed cost” to Aetna was roughly $32,000. That’s nowhere close to the “real” cost of the procedure, but the point stands: if you have a big medical event while you’re on co-insurance you might get exposed to the same heat levels that totally uninsured people have to tolerate.
I’ve also looked into buying plans directly from Aetna and other smaller brokers like SimplyInsured - similar problems there:
The costs are actually higher than what I’m already paying TriNet.
It’s also worth noting, by the way, that TriNet’s quotes to me aren’t unique to my company, as far as I know. These are the standard plans TriNet offers to all Texas-based employers.
My situation leaves me with unfavorable options:
1. Continue paying through the nose for my Aetna PPO;
2. Drop health insurance altogether; start negotiating cash settlements; and backstop my risk with a platform like CrowdHealth - this is more time-expensive and exposes us to risk, but it can be managed;
3. Use an EPO / HMO and search for new health care providers who will accept these plans - we’ve looked and it’s bleak;
4. Have my wife go find a BigCo corporate job somewhere and raise our children in daycare; or
5. Destroy my firm and all of the economic value it creates to go get a BigCo job myself.
I’ve chosen number 1 because I have to negotiate the following trade-offs:
Forcing my pregnant wife to find new pediatricians, OBGYN, GPs, et al for her and our children;
The amount of time I can personally spend each November searching for alternatives - 10-30 hours each year usually;
The amount of time I can personally spend negotiating health care costs - CrowdHealth might be able to help with that, but I’m extremely time-poor at the moment;
The amount of uncapped financial exposure I’m willing to tolerate - this is why Aetna can get away with highway robbery in the first place - insurers like them incentivize the creation of this exposure risk through Chargemaster / discount games; and
The amount of cash I am willing to pay for any of the above.
I am fortunate. I am a higher earner, so I can sign the enormous check each year. The real people who bear this cost though are the employees I’m not going to hire; I’m not going to spend $40-$100k on an entry-level software engineer / admin / SDR / marketer or whatever if I need to keep $55k in reserve to expand my family.
What if I was starting a solo plumbing business or a restaurant? What would my alternatives be then? What if I fell beneath the “$140k poverty line” but not low enough where I can qualify for Medicaid / CHIP / subsidized market plans? I’d be utterly screwed.
The problem I have with health insurance isn’t just the high price tag. It’s:
The real lack of viable alternatives, making me feel robbed at gunpoint while watching my living standards or optionality on my own hard-fought business capital shrink each year.
The societal absurdity of this situation - what civilization can survive such strong economic headwinds against the reproduction of its own populace? The health insurance market takes wealth from the young, healthy, and reproductive and transfers it as services to the old and dying. This is insane and unsustainable.
The worst of all: I am old enough to remember health insurance markets not being this way, so I know things can be different.
The first thing I’d expect someone like Tyler Cowen to do, upon reading this post, is gaslight me about median healthcare costs and show me a chart of premiums staying stable in inflation-adjusted dollars - as though that does anything to solve my immediate problem of having to spend a sum of money that is higher than many Americans’ annual income in order to have my third child delivered.
You can make the argument that maybe I need to change my situation, but that argument is a total loser. “Just go back to work for Microsoft” or “don’t have three children” or “send your wife back to work” or “move away from your family.”
If your answer to “I can’t afford to have children and run a business” is “then don’t,” you are building the political conditions for extremism. This is how every revolution starts: a critical mass of people who conclude the system offers them nothing worth preserving. They don’t just want change - they want revenge.
Economists and Wall Street big shots have not been remotely persuasive in making their case that “everyone is doing great in 2025, actually” because it runs completely afoul of most Americans’ recent experiences at the till, hence the high economic anxiety reflected in the polls.
Green writes a piece saying 140k is the new poverty line.
It’s thoroughly debunked.
And a legion of the credulous sycophants who dig the vibe ex post redefine poverty to ennui (the piece never would’ve gotten traction without the 140k poverty thing which we are now told is… https://t.co/LG2lQp2mgy

— Clifford Asness (@CliffordAsness), December 1, 2025
The reason why Mike Green’s piece resonated with so many is because this sentence perfectly captures what I and many others have been trying to do for the past five years:
“Become rich enough to ignore the cost” - that is exactly what I have been trying to do and it is daunting.
Per Jeff Bezos: “When the data and the anecdotes disagree, the anecdotes are usually right.”
I am tired of hearing economists tell me how great everything is by showing me a chart that doesn’t look anything like real life on the ground - that’s exactly how Biden got voted out on his ass and the same will happen to Trump if conditions don’t improve. My being unhappy with the status quo is not “populism” - it’s reality. And it sucks.
A society that makes it this hard to have children is a society that has decided it doesn’t want a future. I’m fighting for mine anyway.
In a week, I’ll hold my third child. I’ll sign the check. I’ll keep building my business. But I won’t pretend this is fine - and neither should you.
...
Read the original on aaronstannard.com »
At the start of each year, on January 1st, a new crop of works enter the public domain and become free to enjoy, share, and reuse for any purpose. Due to differing copyright laws around the world, there is no one single public domain — and here we focus on three of the most prominent. Newly entering the public domain in 2026 will be:
* works by people who died in 1955, for countries with a copyright term of “life plus 70 years” (e.g. UK, Russia, most of EU and South America);
* works by people who died in 1975, for countries with a term of “life plus 50 years” (e.g. New Zealand, and most of Africa and Asia);
* films and books (incl. artworks featured) published in 1930 for the United States.
In our advent-style calendar below, find our top pick of what lies in store for 2026. Each day, as we move through December, we’ll open a new window to reveal our highlights! By public domain day on January 1st they will all be unveiled — look out for a special blogpost from us on that day. (And, of course, if you want to dive straight in and explore the vast swathe of new entrants for yourself, just visit the links above).
...
Read the original on publicdomainreview.org »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.