10 interesting stories served every morning and every evening.




1 800 shares, 47 trendiness

Slop Evader — Tega Brain


A browser extension for avoiding AI slop.

Download it for Chrome or Firefox.

This is a search tool that will only return content created before ChatGPT’s first public release on November 30, 2022.

Since the public release of ChatGPT and other large language models, the internet is being increasingly polluted by AI-generated text, images and video. This browser extension uses the Google search API to only return content published before Nov 30th, 2022, so you can be sure that it was written or produced by the human hand.
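A rough sketch of how such a date fence can work (an illustration only, not the extension’s actual code; it assumes Google’s Custom Search JSON API and its date-range sort restrict, with placeholder credentials):

    import json
    import urllib.parse
    import urllib.request

    API_KEY = "your-api-key"        # placeholder
    ENGINE_ID = "your-engine-id"    # placeholder

    def search_pre_chatgpt(query):
        # Ask the Custom Search API for results dated before 2022-11-30.
        params = urllib.parse.urlencode({
            "key": API_KEY,
            "cx": ENGINE_ID,
            "q": query,
            "sort": "date:r:19900101:20221129",  # assumed date-range restrict
        })
        url = "https://www.googleapis.com/customsearch/v1?" + params
        with urllib.request.urlopen(url) as resp:
            return json.load(resp).get("items", [])

    for item in search_pre_chatgpt("slow fashion"):
        print(item["title"], item["link"])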

...

Read the original on tegabrain.com »

2 416 shares, 45 trendiness

Why xor eax, eax? — Matt Godbolt’s blog

Written by me, proof-read by an LLM.

Details at end.

In one of my talks on assembly, I show a list of the 20 most executed instructions on an average x86 Linux desktop. All the usual culprits are there, mov, add, lea, sub, jmp, call and so on, but the surprise interloper is xor - eXclusive OR. In my 6502 hacking days, the presence of an exclusive OR was a sure-fire indicator you’d either found the encryption part of the code, or some kind of sprite routine. It’s surprising, then, that a Linux machine just minding its own business would be executing so many.

That is, until you remember that compilers love to emit a xor when setting a register to zero:
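(The original embeds a live Compiler Explorer pane here; a minimal reconstruction, assuming the source is a trivial function returning zero:)

    int zero(void) {
        return 0;
    }

GCC with -O2 compiles this to:

    zero:
        xor  eax, eax
        ret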

We know that exclusive-OR-ing anything with itself generates zero, but why does the compiler emit this sequence? Is it just showing off?

In the example above, I’ve compiled with -O2 and enabled Compiler Explorer’s “Compile to binary object” so you can view the machine code that the CPU sees, specifically:
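(Reconstructed from the standard x86-64 encodings; the original shows Compiler Explorer’s byte view:)

    31 c0    xor  eax, eax
    c3       ret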

If you change GCC’s optimisation level down to -O1 you’ll see:
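(Again a reconstruction; b8 followed by a 32-bit immediate is the standard encoding of mov eax, imm32:)

    b8 00 00 00 00    mov  eax, 0
    c3                ret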

The much clearer, more intention-revealing mov eax, 0 to set the EAX register to zero takes up five bytes, compared to the two of the exclusive OR. By using a slightly more obscure instruction, we save three bytes every time we need to set a register to zero, which is a pretty common operation. Saving bytes makes the program smaller, and makes more efficient use of the instruction cache.

It gets better though! Since this is a very common operation, x86 CPUs spot this “zeroing idiom” early in the pipeline and can specifically optimise around it: the out-of-order tracking system knows that the value of eax (or whichever register is being zeroed) does not depend on the previous value of eax, so it can allocate a fresh, dependency-free zero register renamer slot. And, having done that, it removes the operation from the execution queue - that is, the xor takes zero execution cycles! It’s essentially optimised out by the CPU!

You may wonder why you see xor eax, eax but never xor rax, rax (the 64-bit version), even when returning a long:
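(Reconstructed example, assuming a trivial long-returning function:)

    long zero64(void) {
        return 0;
    }

    zero64:
        xor  eax, eax    ; eax, not rax
        ret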

In this case, even though rax is needed to hold the full 64-bit long result, by writing to eax, we get a nice effect: unlike other partial register writes, when writing to an e register like eax, the architecture zeros the top 32 bits for free. So xor eax, eax sets all 64 bits to zero.

Interestingly, when zeroing the “extended” numbered registers (like r8), GCC still uses the d (double width, i.e. 32-bit) variant:
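(Reconstructed example; under the System V AMD64 calling convention the fifth integer argument travels in r8, which is one simple way to make the compiler zero that register:)

    void callee(long a, long b, long c, long d, long e);

    void caller(void) {
        callee(1, 2, 3, 4, 0);    /* the final 0 lands in r8 */
    }

    ; GCC zeroes the fifth argument with the 32-bit variant:
    45 31 c0    xor  r8d, r8d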

Note how it’s xor r8d, r8d (the 32-bit variant) even though with the REX prefix (here 45) it would be the same number of bytes to xor r8, r8, the full width. It probably makes something easier in the compilers, as clang does this too.

xor eax, eax saves you code space and execution time! Thanks, compilers!

See the video that accompanies this post.

This post is day 1 of Advent of Compiler Optimisations 2025, a 25-day series exploring how compilers transform our code.

This post was written by a human (Matt Godbolt) and reviewed and proof-read by LLMs and humans.

Support Compiler Explorer on Patreon

or GitHub, or by buying CE products in the Compiler Explorer Shop.

Matt Godbolt is a C++ developer living in Chicago. He works for Hudson River Trading on super fun but secret things. Follow him on Mastodon

or Bluesky.

...

Read the original on xania.org »

3 407 shares, 15 trendiness

A Love Letter to FreeBSD

I’m still the new person here, learning your ways, stumbling over the occasional quirk, smiling when I find the small touches that make you different. You remind me of what computing felt like before the noise. Before hype cycles and performance theatre. Before every tool needed a plugin system and a logo. You are coherent. You are deliberate. You are the kind of system that doesn’t have to shout to belong.

You carry the quiet strength of the greats, like a mainframe humming in a locked room, not chasing attention, just doing its work, year after year. Your base system feels like it was built by people who cared about the whole picture, not just the pieces. Your boot environments are like an old IBM i’s “side A / side B” IPL, a built-in escape hatch that says, we’ve thought ahead for you. You could be, you should be, the open-source mainframe: aligned with hardware lifecycles of three to five years or more, built for long-term trust, a platform people bet their uptime on. Your core design reminds me of Solaris in its best days: a stable base that commercial and community software could rely on without fear of shifting foundations.

And make uptime a design goal: a thousand-day uptime shouldn’t be folklore, it should be normal. Not a party trick, not a screenshot to boast about, but simply the natural consequence of a system built to endure. Mainframes never apologised for uptime measured in years, and neither should you. Apply updates without fear, reboot only when the kernel truly demands it, and let administrators see longevity as a feature, not a gamble.

I know you are reaching further into the desktop now. I understand why, and I can see how it might widen your reach. But here I find myself wondering: how do you keep the heartbeat of a rock-solid server while also embracing the quicker pulse of a modern desktop? I don’t pretend to have all the answers, I’m too new to you for that, but my first instinct is to lean on what you already have: the natural separation between CURRENT and RELEASE. Let those worlds move at their own pace, without asking one to carry the other’s compromises.

And now, with pkgbase in play, the stability of packages matters as much as the base system itself. The base must remain untouchable in its reliability, but I dream of a world where the package ecosystem is available in clear stability channels: from a rock-solid “production tier” you can stake a business on, to faster-moving streams where new features can flow without fear of breaking mission-critical systems. Too many times in the past, packages vanished or broke unexpectedly. I understand the core is sacred, but I wouldn’t mind if some of the wider ecosystem inherited that same level of care.

Culture matters too. One reason I stepped away from Linux was the noise, the debates that drowned out the joy of building. Please keep FreeBSD the kind of place where thoughtful engineering is welcome without ego battles, where enterprise focus and technical curiosity can sit at the same table. That spirit, the calm, shared purpose that carried Unix from the PDP-11 labs to the backbone of the Internet, is worth protecting.

There’s also the practical side: keep the doors open with hardware vendors like Dell and HPE, so FreeBSD remains a first-class citizen. Give me the tools to flash firmware without having to borrow Linux or Windows. Make hardware lifecycle alignment part of your story, major releases paced with the real world, point releases treated as refinement rather than disruption.

My hope is simple: that you stay different. Not in the way that shouts for attention, but in the way that earns trust. If someone wants hype or the latest shiny thing every month, they have Linux. If they want a platform that feels like it could simply run, and keep running, the way the best of Unix always did, they should know they can find it here. And I still dream of a future where a purpose-built “open-source mainframe” exists: a modern, reliable hardware system running FreeBSD with the same quiet presence as Sun’s Enterprise 10k once did.

And maybe, one day, someone will walk past a rack of servers, hear the steady, unhurried rhythm of a FreeBSD system still running, and smile, knowing that in a world that burns through trends, there is still something built to last.

With gratitude,

and with the wish to stay for the long run,

A newcomer who finally feels at home.

...

Read the original on www.tara.sh »

4 340 shares, 21 trendiness

It’s been a very hard year

Unlike a lot of places in tech, my company, Set Studio/Piccalilli, has no outside funding. Bootstrapped is what the LinkedIn people say, I think.

It’s been a hard year this year. A very hard year. I think a naive person would blame it all on the seemingly industry-wide attitude of “AI can just do this for us”. While that certainly hasn’t helped — as I see it — it’s been a hard year because of a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis. It’s been a very similar year to 2020, in my opinion.

Why am I writing this? All of the above has had a really negative effect on us this year. Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that. Our reputation is everything, so being associated with that technology, as it increasingly shows us what it really is, would be a terrible move for the long term. I wouldn’t personally be able to sleep knowing I’ve contributed to all of that, too.

What we do really well is produce websites and design systems that actually work for and with people. We also share our knowledge and experience via tonnes of free content on Piccalilli, funded by premium courses to keep the lights on. We don’t pepper our content with annoying adverts for companies you have no interest in.

I’ve spoken about my dream for us to run Piccalilli full time and heck, that may still happen. For that to happen though, we really needed this Black Friday period to do as well as, if not better than, it did last year. So far, that’s not happening unfortunately, but there’s still time.

I get it, money is so tight this year and companies are seemingly not investing in staff with training budgets quite like they did. We actually tried to stem that a bit by trialing a community funding model earlier in the year that I outlined in I’m getting fed up of making the rich, richer and we even started publishing some stuff.

It went down incredibly well, but when push came to shove, we fell way short in terms of funding support. Like I say, we’re not swimming in investor money, so without the support on Open Collective, as much as it hurt, we had to pull the plug. It’s a real shame — that would have been incredible — but again, I get it, money is tight.

This isn’t a “woe is me” post; that’s not how I roll. This is a post to give some context for what I’m going to ask next and how I’m trying to navigate the tough times. I’m asking folks to help us so we can try to help everyone, whether that’s with web projects that actually work for people or continuing to produce extremely high quality education material. Here are some ways you can do it.

You’ll see messaging like “this is the most important time of year for us” and it’s extremely true. To break the fourth wall slightly, people buying courses at full price is a lot rarer than you might think. So often, discount events are what keeps the lights on.

We’ve launched two courses this year — JavaScript for Everyone and Mindful Design — that sit alongside my course, Complete CSS, which we launched last year. I know you’ve probably been burned by shit courses in the past, but these three courses are far from that. I promise.

I can’t stress enough how much Mat (JavaScript for Everyone) and Scott (Mindful Design) have put into these courses this year. These two are elite level individuals with incredible reputations and they’ve shared a seemingly impossible amount of extremely high quality knowledge in their courses. I would definitely recommend giving them your time and support because they really will transform you for the better. For bosses reading this, all three courses will pay themselves back ten-fold — especially when you take advantage of bulk discounts — trust me.

So many of you have purchased courses already and I’m forever thankful for that. I can’t stand the term “social proof” but it works. People might be on the fence about grabbing a course, and seeing one of their peers talk about how good it was can be the difference.

You might think it’s not worth posting about the courses on social media but people do see it, especially on platforms like Bluesky with their custom feeds. We see it too!

Testimonials are always welcome because we can pop those on the course marketing pages, just like on mine.

In terms of sharing the studio, if you think we’re cool, post about it! It’s all about eyes and nice words. We’ll do the rest.

We’re really good at what we do! I know every studio/agency says this, but we’re different. We’re actually different.

We’re not going to charge you through the nose for substandard work — only deploying a fraction of our team, like a lot of agencies do. I set this studio up to be the antithesis of the way these — and I’ll say it out loud — charlatans operate.

Our whole focus is becoming your partners so you can do the — y’know — running of your business/organisation and we take the load off your shoulders. We’re hyper efficient and we fully own projects because they’re way above your normal duties. We get that. In fact, the most efficient way to get the most out of a studio like ours is to do exactly that.

I know “numbers go up” is really important and yes, numbers definitely go up when we work with you. We do that without exploiting your users and customers too. There are no deceptive patterns coming from us. We instead put everything into branding, messaging, content architecture and making everything extremely fast and accessible. That’s what makes the numbers go up for you.

We’re incredibly fairly priced too. We’re not in the business of charging ridiculous fees for our work. We’re only a small team, so our overheads are nothing compared to a lot of agencies. We carry your budgets a long way for you and genuinely give you more bang for your buck with an equitable pricing model.

We’ve got availability starting from the new year because starting projects in December is never the ideal way to do things. Getting those projects planned and ready to go is a good idea in December though, so get in touch!

I’m also slowly getting back into CSS and front-end consulting. I’ve helped some of the largest and the smallest organisations — Harley-Davidson, the NHS and Google among them — write better code and work better together. Again, starting in the new year I’ll have availability for consulting and engineering support. It might just be a touch more palatable than hiring the whole studio. Again, get in touch.

I’m always transparent — maybe too transparent at times — but it’s really important for me to be honest. Man, we need more honesty.

It’s taken a lot of pride-swallowing to write this but I think it’s more important to be honest than to be unnecessarily proud. I know this will be read by someone else who’s finding the year hard, so if anything, I’m really glad they’ll feel seen at least.

Getting good leads is harder than ever, so I’d really appreciate people sharing this with their network. You’ll never regret recommending Piccalilli courses or Set Studio. In fact, you’ll look really good at what you do when we absolutely smash it out of the park.

Thanks for reading and if you’re also struggling, I’m sending as much strength your way as I can.

👋 Hello, I’m Andy and this is my little home on the web.

I’m the founder of Set Studio, a creative agency that specialises in building stunning websites that work for everyone, and Piccalilli, a publication that will level you up as a front-end developer.

I’ve also got a CSS course called Complete CSS to help you get to a level in development that you never thought would be possible.


...

Read the original on bell.bz »

5 339 shares, 16 trendiness

Linux & DevOps Troubleshooting Interviews

The Advent of Sysadmin is a 12-day Advent calendar of Linux and DevOps challenges of different difficulties that runs from December 1st to December 12th.

Each day there will be an Advent of Sysadmin scenario.

Sign up for a free account (needed to keep track of your progress) and start solving the scenarios!

If you want to check out a scenario without signing up, you can run this one that requires no registration:

...

Read the original on sadservers.com »

6 250 shares, 18 trendiness

deepseek-ai/DeepSeek-Math-V2 · Hugging Face

Large language models have made significant progress in mathematical reasoning, which serves as an important testbed for AI and could impact scientific research if further advanced. By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year. However, this approach faces fundamental limitations. Pursuing higher final answer accuracy doesn’t address a key issue: correct answers don’t guarantee correct reasoning. Moreover, many mathematical tasks like theorem proving require rigorous step-by-step derivation rather than numerical answers, making final answer rewards inapplicable. To push the limits of deep reasoning, we believe it is necessary to verify the comprehensiveness and rigor of mathematical reasoning. Self-verification is particularly important for scaling test-time compute, especially for open problems without known solutions. Towards self-verifiable mathematical reasoning, we investigate how to train an accurate and faithful LLM-based verifier for theorem proving. We then train a proof generator using the verifier as the reward model, and incentivize the generator to identify and resolve as many issues as possible in its own proofs before finalizing them. To maintain the generation-verification gap as the generator becomes stronger, we propose to scale verification compute to automatically label new hard-to-verify proofs, creating training data to further improve the verifier. Our resulting model, DeepSeekMath-V2, demonstrates strong theorem-proving capabilities, achieving gold-level scores on IMO 2025 and CMO 2024 and a near-perfect 118/120 on Putnam 2024 with scaled test-time compute. While much work remains, these results suggest that self-verifiable mathematical reasoning is a feasible research direction that may help develop more capable mathematical AI systems.

Below are evaluation results on IMO-ProofBench (developed by the DeepMind team behind DeepThink IMO-Gold) and recent mathematics competitions including IMO 2025, CMO 2024, and Putnam 2024.

DeepSeekMath-V2 is built on top of DeepSeek-V3.2-Exp-Base. For inference support, please refer to the DeepSeek-V3.2-Exp GitHub repository.

This repository and the model weights are licensed under the Apache License, Version 2.0 (Apache 2.0).

@misc{deepseek-math-v2,
  author = {Zhihong Shao and Yuxiang Luo and Chengda Lu and Z.Z. Ren and Jiewen Hu and Tian Ye and Zhibin Gou and Shirong Ma and Xiaokang Zhang},
  title = {DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning},
  year = {2025},
}

If you have any questions, please raise an issue or contact us at service@deepseek.com.

...

Read the original on huggingface.co »

7 230 shares, 6 trendiness

Erdős Problem #124

OPEN

This is open, and cannot be resolved with a finite computation.

For any $d\geq 1$ and $k\geq 0$ let $P(d,k)$ be the set of integers which are the sum of distinct powers $d^i$ with $i\geq k$. Let $3\leq d_1<d_2<\cdots <d_r$ be integers such that\[\sum_{1\leq i\leq r}\frac{1}{d_i-1}\geq 1.\]Can all sufficiently large integers be written as a sum of the shape $\sum_i c_ia_i$ where $c_i\in \{0,1\}$ and $a_i\in P(d_i,0)$?

If we further have $\mathrm{gcd}(d_1,\ldots,d_r)=1$ then, for any $k\geq 1$, can all sufficiently large integers be written as a sum of the shape $\sum_i c_ia_i$ where $c_i\in \{0,1\}$ and $a_i\in P(d_i,k)$?

Disclaimer: The open status of this problem reflects the current belief of the owner of this website. There may be literature on this problem that I am unaware of, which may partially or completely solve the stated problem. Please do your own literature search before expending significant effort on solving this problem. If you find any relevant literature not mentioned here, please add this in a comment.

The second question was conjectured by Burr, Erdős, Graham, and Li [BEGL96], who proved it for $\{3,4,7\}$.

The first question was asked separately by Erdős in [Er97] and [Er97e] (although there is some ambiguity over whether he intended $P(d,0)$ or $P(d,1)$ - certainly he mentions no gcd condition). A simple positive proof of the first question was provided (and formalised in Lean) by Aristotle thanks to Alexeev; see the comments for details.

In [BEGL96] they record that Pomerance observed that the condition $\sum 1/(d_i-1)\geq 1$ is necessary (for both questions), but give no details. Tao has sketched an explanation in the comments. It is trivial that $\mathrm{gcd}(d_1,\ldots,d_r)=1$ is a necessary condition in the second question.

Melfi [Me04] gives a construction, for any $\epsilon>0$, of an infinite set of $d_i$ for which every sufficiently large integer can be written as a finite sum of the shape $\sum_i c_ia_i$ where $c_i\in \{0,1\}$ and $a_i\in P(d_i,0)$ and yet $\sum_{i}\frac{1}{d_i-1}<\epsilon$.

See also [125].

This page was last edited 01 December 2025.


Formalised statement? Yes

Additional thanks to: Boris Alexeev, Alfaiz, Dustin Mixon, and Terence Tao

When referring to this problem, please use the original sources of Erdős. If you wish to acknowledge this website, the recommended citation format is:

T. F. Bloom, Erdős Problem #124, https://www.erdosproblems.com/124, accessed 2025-12-01

In [BEGL96], the problem is formulated in a way that only allows powers of $d_i$ greater than $d_i^0 = 1$ to be added. However, in [Er97] and [Er97e], it’s formulated so that $1$s are allowed. Incidentally, this means that all the proofs in [BEGL96] actually prove slightly stronger statements.

[Note: this comment was written before 2025/12/01, when the problem text was updated.]

Aristotle from Harmonic has solved this problem all by itself, working only from the formal statement! Type-check it online!

A formal statement of the conjecture was available in the Formal Conjectures project. Unfortunately, there is a typo in that statement, wherein the comment says $\geq 1$ in the display-style equation while the corresponding Lean says “= 1”. (That makes the statement weaker.) Accordingly, I have also corrected that issue and included a proof of the corrected statement. Finally, I removed a lot of what I believed were unnecessary aspects of the statement, and Aristotle proved that too. In the end, there are three different versions proven, of which this is my favorite:

theorem erdos_124 : ∀ k, ∀ d : Fin k → ℕ,
  (∀ i, 2 ≤ d i) → 1 ≤ ∑ i : Fin k, (1 : ℚ) / (d i - 1) →
  ∀ n, ∃ a : Fin k → ℕ,
    ∀ i, ((d i).digits (a i)).toFinset ⊆ {0, 1} ∧
      n = ∑ i, a i

I believe this is a faithful formalization of (a strengthening of) the conjecture stated on this page.

As mentioned by DesmondWeisenberg above, there’s an issue involving the power 1 (which corresponds to the units digit here) that means the conjecture in [BEGL96] differs from this. I believe the version in [Er97] matches the statement here, in part because it lacks a gcd condition that is obviously necessary in [BEGL96]. I do not yet have access to [Er97e] to check the statement there. The subtlety of this issue is unfortunate, given Aristotle’s achievement!

Timing-wise, Aristotle took 6 hours and Lean took 1 minute.

This is quite something, congratulations to Boris and Aristotle!

On one hand, as the nice sketch provided below by tsaf confirms, the final proof is quite simple and elementary - indeed, if one was given this problem in a maths competition (and therefore expected a short simple solution existed) I’d guess that something like the below would be produced. On the other hand, if something like this worked, then surely the combined talents of Burr, Erdős, Graham, and Li would have spotted it.

Normally, this would make me suspicious of this short proof, in that there is some overlooked subtlety. But (a) I can’t see any and (b) the proof has been formalised in Lean, so clearly it just works!

Perhaps this shows what the real issue in the [BEGL96] conjecture is - namely the removal of $1$ and the addition of the necessary gcd condition. (And perhaps at least some subset of the authors were aware of this argument for the easier version allowing $1$, but this was overlooked later by Erdős in [Er97] and [Er97e], although if they were aware then one would hope they’d have included this in the paper as a remark.)

At the moment I’m minded to keep this as open, and add the gcd condition in the main statement, and note in the remarks that the easier (?) version allowing $1$ and omitting the gcd condition, which was also asked independently by Erdős, has been solved.

My summary is that Aristotle solved “a” version of this problem (indeed, with an olympiad-style proof), but not “the” version.

I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference.

I agree with your description. I also wonder whether this “easy” version of the problem has actually appeared in some mathematical competition before now, which would of course pollute the training data if Aristotle had seen this solution already written up somewhere. (I only say this in the sense that knowing such a short olympiad-style proof exists makes it a nice competition problem.)

I assume you have also tried giving the harder version to Aristotle?

6 hours on what hardware? If it’s like a consumer laptop-type, probably it’s easy to run at 100x compute on all Erdős problems with some datacenter? Do we have a good understanding of how Aristotle’s abilities scale with compute?

Aristotle’s solution is as follows. It is surprisingly easy.

Let $(a_n)$ be the sequence of powers of $d_i$ (sorted, with multiplicity). For example, if $d_1=2$ and $d_2=3$, then the sequence is: $1,1,2,3,4,8,9,16,27,\ldots$.

We want to show that every positive integer is a subsequence sum. This is equivalent to $a_{n+1} -1 \leq (a_1+\dots +a_n)$. The RHS is $\sum_{i=1}^k (d_i^{e_{i,n}}-1)/(d_i-1)$, where $e_{i,n}$ is the first power of $d_i$ that has not occurred in the first $n$ terms. This is bounded below by $\min_i (d_i^{e_{i,n}}-1)$. However, $a_{n+1}=\min_i d_i^{e_{i,n}}$. Done.
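To make the inequality concrete, here is a small numeric check (my own illustration, not part of the comment above; it tests the prefix condition $a_{n+1}\leq a_1+\cdots+a_n+1$ on finitely many terms, so a True is merely suggestive):

    import heapq

    def merged_powers(ds, count):
        # First `count` powers d**e (e >= 0) over all d in ds,
        # sorted with multiplicity, e.g. ds=[2,3] -> 1,1,2,3,4,8,9,...
        heap = [(1, d, 0) for d in ds]
        heapq.heapify(heap)
        out = []
        while len(out) < count:
            value, d, e = heapq.heappop(heap)
            out.append(value)
            heapq.heappush(heap, (d ** (e + 1), d, e + 1))
        return out

    def prefix_criterion(seq):
        # Checks a_{n+1} <= a_1 + ... + a_n + 1 along the prefix.
        total = 0
        for a in seq:
            if a > total + 1:
                return False
            total += a
        return True

    print(prefix_criterion(merged_powers([2, 3], 50)))  # True:  1/1 + 1/2 >= 1
    print(prefix_criterion(merged_powers([3, 4], 50)))  # False: 1/2 + 1/3 < 1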

Note, there is some ambiguity in the definition of $e_{i,n}$. In the example $d_1=2, d_2=3$, we can decide arbitrarily that $a_1$ is a power of $2$ and $a_2$ is a power of $3$, so $e_{2,1}=0$ but $e_{2,2}=1$.

Thank you tsaf for deciphering the proof! Interestingly, Theorem 2.3 from this paper could be thought of as a continuous-parameter loose variant of this problem and the basic proof outline (appearing on pages 13-14 in that paper) is the same: aiming to prove that the representations fill in an interval, sorting the sequence, verifying the continuous-parameter variant of condition $a_{n+1}\leq a_1+\cdots+a_n+1$, doing so by considering the first appearing term associated with each $d_i$, etc.

I am not mentioning this to diminish Aristotle’s / BorisAlexeev’s proof, on the contrary, it is quite beautiful! My point is that basic ideas reappear at many places; humans often fail to realize that they apply in a different setting, while a machine doesn’t have this problem! I remember seeing this problem before and thinking about it briefly. I admit that I hadn’t noticed this connection, which is only now quite obvious to me!

For what it is worth, the Gemini and ChatGPT deep research tools did not turn up any significant new literature on this problem.

Gemini offered the simple observation that if 1 is omitted then the gcd condition becomes necessary, explained the significance of the $\sum_i \frac{1}{d_i-1} \geq 1$ condition (linking it to some parallel work on Cantor sets, particularly the “Newhouse gap lemma”), but turned up no new direct references for this problem.

ChatGPT used this very web page extensively as the main authoritative source, for instance citing the Aristotle proof, as well as the other papers cited on this page, as well as the page for the related problem [125]. As such, no new information was gleaned, but readers may find the AI-generated summary of the situation to be amusing.

As a further experiment, I gave this problem (in the weaker, solved formulation) to Gemini Deepthink with a hint to use Brown’s criterion. Interestingly, it declared that it was unlikely that Brown’s criterion was strong enough to solve this problem. Superficially this is of course a failure on the part of the AI, but an inspection of the reasoning showed that it was a fairly “honorable” mistake. It noted that if one took $d_1=3$ then infinitely often there should be no powers of any of the $d_i$ between $d_1^k$ and $d_1^{k+1}$, so that the ratio between consecutive elements could be as large as $3$. Typically, one needs the ratio of consecutive elements to be $2$ or less on average for Brown’s criterion to apply, so Gemini concluded that heuristically this approach was unlikely to work. This is not a bad analysis actually - it just so happens that the cumulative sum of all the other powers less than $d_1^k$ is (barely) enough to overcome this gap of $3$ and reach $d_1^{k+1}$ after all. I would classify this type of error as one which a human expert could plausibly also make on this problem. Also, I think this analysis also hints at why the stronger version of this problem is more difficult, and unlikely to be resolved by off-the-shelf tests such as Brown’s criterion.

G. Melfi in this paper has given the following related result:

A sequence $S = \{s_1, s_2,\ldots\}$ of positive integers is a complete sequence if $\Sigma(S) := \left\{\sum_{i=1}^\infty \epsilon_i s_i : \epsilon_i \in \{0,1\},\ \sum_{i=1}^\infty \epsilon_i < \infty\right\}$ contains all sufficiently large integers. Let $s \geq 1$ and let $A$ be a (finite or infinite) set of integers greater than $1$. Let $Pow(A; s)$ be the nondecreasing sequence of positive integers of the form $a^k$ with $a \in A$ and $k \geq s$. Then for any $s \geq 1$, $Pow(A; s)$ is complete if and $\textbf{only if}$ $\sum_{a \in A} 1/(a-1) \geq 1$.

This $\textbf{only if}$ part of their conjecture has been disproved by Melfi in the above-discussed paper.

P. S. [BEGL96] also asks the following:

What can we say about the lower and upper asymptotic density of $\Sigma(Pow(A; s))$ when $A$ is finite and $\sum_{a \in A} \frac{1}{\log a} > \frac{1}{\log 2}$?

(According to page 13 of this paper.)

Just to clarify, Pomerance’s observation that Diophantine approximation shows the necessity of $\sum_{a \in A} 1/(a-1) \geq 1$ only applies in the case of finite $A$, whereas Melfi’s example is for infinite $A$. (In particular, the description of Pomerance’s result in [p. 133, BEGL96] is not quite correct.)

Interestingly, (my reconstruction of) Pomerance’s argument is almost identical to Gemini’s failed heuristic argument: if $A$ is finite with $\sum_{a \in A} 1/(a-1) < 1$, then there will be infinitely many numbers $n$ that are larger than the sum of all the powers of $a$ preceding it (for this to hold, $n$ has to be slightly less than a power of $a$ for each $a$, which can be accomplished by the Kronecker approximation theorem). Hence $A$ cannot be complete.

This argument shows that the $\sum_{a \in A} 1/(a-1)=1$ case is quite delicate; at a bare minimum, it needs something like Baker’s theorem to prevent powers of different $a$ from clustering too close together, which can create potential counterexamples. (And indeed, [p. 137, BEGL96] discusses this issue for specific sets such as {3,4,7}.)


...

Read the original on www.erdosproblems.com »

8 222 shares, 19 trendiness

Self-hosting a Matrix server for 5 years

Experiences with the Matrix protocol, Matrix Synapse server, bridges, and Element mobile apps.

I have been hosting a Matrix server for about five years now, mostly for text chats between a few relatives and close friends, and a bridge to WhatsApp for a few more people. These are my experiences.

I don’t have many thoughts on the protocol itself.

The only thing that I don’t really understand is the decision on data replication. If a user on server A joins a room on server B, recent room data is copied from server B to server A and then kept in sync on both servers. I suppose this reduces the load on the original server at the expense of federation overhead and space on other servers. However, this also creates a situation where anything said across federation cannot be unsaid, which is an ironic situation for a protocol/system that often comes up when talking about privacy.

Synapse is the only choice that supports bridges, which was why I wanted to try Matrix in the first place. And back in 2019-2020 this was the only choice anyway.

As of right now, I run Synapse, PostgreSQL, and coturn directly, without containerization, on a small VPS.

Works fairly reliably, supports bridges, and is more efficient than it was in 2020.

The API is well documented, and allows authenticating and sending (unencrypted) messages via simple HTTP calls. At some point in time, I wanted to write a simple shell client to use with SXMO and such.
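For illustration, the two calls look roughly like this (a sketch using the v3 client-server endpoints; the homeserver URL, user, password, and room ID are placeholders):

    import json
    import urllib.request

    HOMESERVER = "https://matrix.example.com"  # placeholder

    def call(path, body, token=None, method="POST"):
        req = urllib.request.Request(
            HOMESERVER + path,
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
            method=method,
        )
        if token:
            req.add_header("Authorization", "Bearer " + token)
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Password login returns an access token.
    login = call("/_matrix/client/v3/login", {
        "type": "m.login.password",
        "identifier": {"type": "m.id.user", "user": "alice"},
        "password": "secret",
    })

    # Send an unencrypted text message (the transaction id must be unique).
    call("/_matrix/client/v3/rooms/!room:matrix.example.com/send/m.room.message/txn1",
         {"msgtype": "m.text", "body": "hello"},
         token=login["access_token"], method="PUT")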

Does not have an admin panel

There is no admin page or panel. There was a third-party admin site, but it’s an entire site just for making HTTP calls. So I ended up writing my own.

While technically Synapse can work with a SQLite database (which at first seems like an OK choice for a small personal server), PostgreSQL is effectively required in practice.

Initial setup presumes that the server is going to be federated, and there is no good way to turn it off. The best workaround involves a blank whitelist of federated servers.
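In homeserver.yaml terms, that workaround looks roughly like this (a sketch; check the current Synapse documentation for the option):

    # An empty whitelist means no server may federate with us.
    federation_domain_whitelist: []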

I don’t know the implications of disabling it.

Message retention policy can be set up server-wide, but also per-room. There are specific lines in the configuration that need to be set to actually enable a service that runs the cleanup.
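For example, a server-wide policy plus the purge job that actually runs the cleanup might look like this in homeserver.yaml (a sketch with arbitrary lifetimes):

    retention:
      enabled: true
      default_policy:
        min_lifetime: 1d
        max_lifetime: 90d
      # Without at least one purge job, nothing actually gets deleted.
      purge_jobs:
        - interval: 12h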

Synapse keeps the room even after all of the members leave it, including federated rooms. This results in many (sometimes large) rooms without local members orphaned on the server, taking up database space.

Deleting messages (events) with attachments does not delete the attachment (because another message might refer to it?), which means that the sent files continue existing on the server indefinitely. Another privacy implication. A simple “delete all files older than X” script works great until it deletes avatars. So yeah, seems like this is something that should be handled by the Synapse server instead of cobbled-together scripts.

Even after extensive cleanup, the PostgreSQL database might need to be vacuumed to reduce the disk space it takes up.
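For example (plain PostgreSQL, nothing Synapse-specific; note that VACUUM FULL rewrites tables and locks them while it runs):

    VACUUM FULL;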

Even for my small server with a handful of users, the database grew surprisingly large.

Synapse keeps track of room states in an append-only (!) table named state_groups_state. Deleting a room does not delete the state_groups_state records. So it is never automatically cleaned up, and grows in size infinitely. It is possible to delete many of those records from the database directly, and Element (the company) provides some tool to “compress” those records, but again, something that should be handled by the server.
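A standard PostgreSQL query shows how much space that one table occupies (illustrative; run it against the Synapse database):

    SELECT pg_size_pretty(pg_total_relation_size('state_groups_state'));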

Deleting user accounts is simply not an option in the API. A server admin can perform a “deactivate” (disable login) and an “erase” (remove related data, which claims to be GDPR-compliant) on user accounts, but the accounts themselves stay on the server forever.

How this is not considered a GDPR violation is a mystery to me. Even on my tiny server, I have users who use their first name as their ID and bridged WhatsApp users that use phone numbers as IDs.

While the Matrix-Element ecosystem has been catering towards government and corporate entities for some time, there have been multiple recent announcements about its future.

Specifically, Element (the company) is now providing an all-in-one Element Server Suite (ESS) to replace the current setup, including

It is intended for non-professional use, evaluations, and small to mid-sized deployments (1–100 users).

ESS Community includes 7 components/services, now requires a minimum of 2 CPUs, 2GB of RAM, and runs using… Kubernetes? IMO, this is overkill for a dozen users.

For comparison, Snikket, an all-in-one solution with similar functionality using XMPP, requires a single CPU and 128MB (!) RAM for 10 or so users.

Yes, I have seen the Ansible setup script recommended, but at this point, making setup easier does not address the issue of extra services being required in the first place.

Also, the ESS handles account creation and calls in an entirely different way, more on that later.

Pretty great. Easy to install and set up, works really well, and needs only occasional (semi-yearly or so) updates when WhatsApp changes their web API. Does not support calls.

Same on all platforms

Element exists and looks consistent on Android, iOS, and web, making it easier for regular users and for troubleshooting.

This is silly, but while (official?) bridges support image captions, the official Element app does not. The answer in the FAQ? Get a better app. Well, OK.

Image with a caption in SchildiChat Classic (the better app).

Sometimes it can take up to a few minutes to get a message, even between two Android clients using Google Cloud Messaging. Sometimes it is nearly instant. Still unsure of the cause.

One unreliable way to tell that the server is unreachable is the endless loading bar. But even then, it eventually goes away without indicating any errors.

Then, when sending a message, the user receives “Unable to send message”. Frustration ensues.

But I know the app is trying to call the /sync endpoint. Why doesn’t it show any errors when that fails?

IIRC the first thing the app does is ask the user to back up their signing keys and enter the key password, without a simple explanation. Not a great experience for regular users.

Some people reported issues with Element losing its keys or frequently requesting to be re-verified. Thankfully I have not encountered these.

Even if you connect to a self-hosted server, Element Classic could attempt to connect to the vector.im integration server and the matrix.org key backup server.

Element X is now recommended as the new and better client. It is not.

Somehow, it is slower. Clicking on a conversation takes 0.5-1.0 seconds to load it, compared to an almost instant load on Classic.

Perhaps it does work better for accounts with many large rooms, but that is not my case.

Conversations are sorted by… who knows. It is not recent nor alphabetical.

Element X does not support periodic background sync, so you need to set up ntfy or something similar to use Element X on a de-googled device. Seems like a simple enough fail-safe (even WhatsApp does this), but it was dropped for some reason.

This “sliding sync” option is available only for newer Synapse versions, and only if running with a PostgreSQL database (which should already be the case - see above). Probably not an issue unless the user tries to connect Element X to an outdated Synapse.

Calling with Element X requires Element Call (part of ESS). This supports group calls, but… only video calls at the moment.

You also might be asked to tell your contact to install the new app:

I don’t regularly use calls, but some people I would like to invite to my server would want to use them.

A few years ago, I ended up either temporarily enabling unrestricted registration (a terrible idea), or creating my users’ accounts manually, because the “invite” matrix.to link was broken, and registration tokens did not work correctly in mobile apps.

So let’s see how it works now. Keep in mind, I am still on standalone Synapse, not ESS.

I am a user, and I want to register an account on my friend’s server. I see that Element X is now the recommended app, so let’s try that.

Click “Create account” (which is in a different style that does not look like a button for some reason).

But I want an account on a different server. Click “Change account provider”.

Now I can search for the server my friend is hosting, and it should appear in the list below the search.

As server admin: I do not remember if the Synapse server has to enable/keep federation for this to work.

Yes! That is what I want, why is this so verbose?

WTF. So Element X cannot create even the simplest username+password account. That is all I want, I don’t want to sign in with Google, Apple, or any other form of third-party authentication.

I was unable to register an account using Element X, so Element Classic should work better.

What difference does this make? Skip.

The current official app is telling me to use Element X. Just tried that. Click EDIT where it says “matrix.org” (which does not say “server”, actually) and enter the server name.

Why not? No explanation. Sure, I’ll use a web client.

Well, fuck me, I guess. Why can’t I just create an account?

As a server admin: Synapse is set to allow registrations via registration tokens, because unrestricted registration is a bad idea. I did not find where the /static/client/register path is set.

IIRC it is possible to register an account by going to a web-hosted Element app, such as app.element.io, which will allow registering an account using a registration token. But then the user has to deal with the headache of cross-verifying their mobile device to the web app (which they might never use).

So now what?

Matrix-Element is growing, building new features, and acquiring large customers (mostly government entities AFAIK). However, the new corporatesque ESS Community is not worth it in my opinion. I don’t need fancy auth, third-party IDs, group video conferencing, or even federation for that matter. But it is clear that Synapse and Element X are severely crippled and are not designed to work without these services.

I will probably switch to Snikket, which is more efficient, has timely notifications, and very smooth onboarding.

...

Read the original on yaky.dev »

9 212 shares, 26 trendiness

For Decades, Cartographers Have Been Hiding Covert Illustrations Inside of Switzerland’s Official Maps

The first three dimensions—length, height, and depth—are included on all topographical maps. The “fourth dimension,” or time, is also available on the website of the Swiss Federal Office of Topography (Swisstopo). In the “Journey Through Time,” a timeline displays 175 years of the country’s cartographic history, advancing in increments of 5-10 years. Over the course of two minutes, Switzerland is drawn and redrawn with increasing precision: inky shapes take on hard edges, blues and browns appear after the turn of the century, and in 2016, the letters drop their serifs.

Watching a single place evolve over time reveals small histories and granular inconsistencies. Train stations and airports are built, a gunpowder factory disappears for the length of the Cold War. But on certain maps, in Switzerland’s more remote regions, there is also, curiously, a spider, a man’s face, a naked woman, a hiker, a fish, and a marmot. These barely-perceptible apparitions aren’t mistakes, but rather illustrations hidden by the official cartographers at Swisstopo in defiance of their mandate to “reconstitute reality.” Maps published by Swisstopo undergo a rigorous proofreading process, so to find an illicit drawing means that the cartographer has outsmarted his colleagues.

It also implies that the mapmaker has openly violated his commitment to accuracy, risking professional repercussions on account of an alpine rodent. No cartographer has been fired over these drawings, but then again, most were only discovered once their author had already left. (Many mapmakers timed the publication of their drawing to coincide with their retirement.) Over half of the known illustrations have been removed. The latest, the marmot drawing, was discovered by Swisstopo in 2016 and is likely to be eliminated from the next official map of Switzerland by next year. As the spokesperson for Swisstopo told me, “Creativity has no place on these maps.”

Errors—both accidental and deliberate—are not uncommon in maps (17th-century California as an island, the omission of Seattle in a 1960s AAA map). Military censors have long transformed nuclear bunkers into nondescript warehouses and routinely pixelate satellite images of sensitive sites. Many maps also contain intentional errors to trap would-be copyright violators. The work of recording reality is particularly vulnerable to plagiarism: if a cartographer is suspected of copying another’s work, he can simply claim to be duplicating the real world—ideally, the two should be the same. Mapmakers often rely on fictitious streets, typically no longer than a block, to differentiate their accounts of the truth (Oxygen Street in Edinburgh, for example).

Their entire professional life is spent at the magnification level of a postage stamp.

But there is another, less institutional reason to hide something in a map. According to Lorenz Hurni, professor of cartography at ETH Zurich, these illustrations are part inside joke, part coping mechanism. “Cartographers are quite meticulous, really high-precision people,” he says. Their entire professional life is spent at the magnification level of a postage stamp. To sustain this kind of concentration, Hurni suspects that they eventually look for something to “break out of their daily routine.” The satisfaction of these illustrations comes from their transgressive nature—the labor and secrecy required to conceal one of these visual puns.

And some of them enjoy remarkable longevity. The naked woman drawing, for example, remained hidden for almost sixty years in the municipality of Egg, in northern Switzerland. Her relatively understated shape was composed in 1958 from a swath of green countryside and the blue line of a river, her knees bending at the curve in the stream. She remained unnoticed, reclining peacefully, until 2012.

Several of the other drawings came about considerably later. In 1980, a Swisstopo cartographer traced the spider over an arachnid-shaped ice field on the Eiger mountain. It faded out over the course of the decade, retracting its spindly legs in the intermediary editions. Around the same time, another cartographer concealed a freshwater fish in a French nature preserve along the Swiss border. The fish lived in the blue circumference of a marshy lake until 1989 when, according to Swisstopo, it “disappeared from the surface of the lake, diving to the depths.”

It’s unclear how these drawings made it past the institute’s proofreaders in the first place. They may have been inserted only after the maps were approved, when cartographers are asked to apply the proofreaders’ final edits. When the maps were once printed as composite layers of different colors, cartographers could have built the drawings from the interplay of different topographical elements (the naked woman, for example, is composed of a blue line over a green-shaded area). Hurni also speculates that cartographers could have partitioned their illustrations over the corners of four separate map sheets, although no such example has (yet) been found.

Some of these clandestine drawings allude to actual topographical features: near the town of Interlaken, where an outcropping of stones approximates two eyes and a nose, the 1980 edition of the map features an angular cartoon face between the trees. (According to local legend, it’s a monk who was turned to stone as punishment for chasing a young girl off the cliff.) In the late 1990s, the same cartographer drew a hiker in the map’s margins. With boots each about the size of a house, the hiker serves a pragmatic purpose. Like a kind of topographic patch, he covers an area in the Italian Alps where the Swiss apparently lacked the necessary “information and data from the Italian geographical services.”

The marmot, the latest illustration, hides in plain sight in the Swiss Alps. His plump outline was concealed in the delicate relief shading above a glacier, which shielded him from detection for nearly five years. The mountain’s hachures—short, parallel lines that indicate the angle and orientation of a slope—double as his fur. He is mostly indistinguishable from the surrounding rock, except for his face, tail, and paws. He even fits ecologically: as an animal of the ice age, alpine marmots are comfortable at high altitudes, burrowing into frozen rock for their nine months of hibernation. In 2016, Hurni revealed his location to the public on behalf of an unnamed source.

There is a degree of winking tolerance for these drawings, which constitute something of an unofficial national tradition: the spokeswoman for Swisstopo referred me to a 1901 fish hidden in a well-known painting of Lake Lucerne at the National Council palace (probably in honor of the palace’s April 1st inauguration, which some European countries celebrate by attaching “April Fish” to the backs of shirts). Nevertheless, the marmot—along with the face and hiker—will likely be “eliminated” from Switzerland’s next official map (per a decision from the chief of cartography).

Swiss cartographers have a longstanding reputation for topographical rigor. A so-called “Seven Years War of Cartography” was even waged in the 1920s over the scale of the national maps, with the Swiss Alpine Club advocating greater topographical detail for its mountaineering members. Swisstopo is now an industry benchmark for the mountains, from its use of aerial photogrammetry (images taken first by balloons and then small planes) to aerial perspective (that natural haziness that renders distant peaks with less contrast). In 1988, they were commissioned to draw Mount Everest.

Still, the original drawings were never authorized in the first place. Perhaps a meticulous reading of next year’s Swiss maps may reveal some other nationally-celebrated animals in unfrequented bodies of water or alpine meadows. As Juerg Gilgen, a current cartographer at Swisstopo, told me as a matter of fact, “the proof-reader is also just a human being prone to failure. And cartographers are also just human beings trying to fool around.”

...

Read the original on eyeondesign.aiga.org »

10 204 shares, 57 trendiness

High-income job losses are cooling housing demand


Most metros are adding jobs more slowly than normal. Charlotte leads in job growth among major metros, while Austin and Denver fall far short of their historically strong pace.

High-income sectors are contracting, while Education and Healthcare are expanding faster than normal across most metros.

Employment composition matters as much as total growth for local housing market strength. Metros reliant on lower-wage job growth are likely to face softer for-sale demand.

The national labor market is softening, with implications for local housing markets. Most major metros are adding jobs more slowly than normal. We analyzed employment performance by metro and industry, comparing today’s growth to long-term trends since 2010. Red represents job losses, yellow shows slower-than-normal growth, and green represents faster-than-normal growth.

The job market drives housing demand, but the type of jobs created or lost impacts the type of housing. High-income sectors—Information, Professional Services, and Financial Activities—are shrinking across most major metros. Workers in these industries drive for-sale housing demand more than rental demand. Nationally, high-income sector employment remained flat YOY in August, well below its long-term compound annual growth of +1.6%. The Education and Healthcare sectors account for the bulk of new jobs added in most metros and are growing faster than normal in almost every market. Many of these jobs pay lower wages on average and often generate rental demand more than homebuying activity. Nationally, education and healthcare employment rose +3.3% YOY in August, well above its long-term compound annual growth of +2.1%.

Philadelphia (+1.8% YOY) and New York (+1.7% YOY) show stronger job growth than their historical trends (+1.1% and +1.6%, respectively). However, this improvement reflects recovery from weak post-Great Financial Crisis baselines rather than genuine outperformance. Charlotte (+2.6% YOY) is a standout performer, maintaining robust job growth supported by Professional Services expansion (+4.5% YOY)—a rare bright spot for for-sale demand.

Austin (+0.8% YOY) and Denver (+0.0% YOY) are growing much more slowly than their historically strong employment trends (+3.8% and +2.3%, respectively). Tech and Professional Services jobs are declining in both markets, and even healthcare—which is expanding faster than normal in most metros—shows weak growth here. This reduction in high-paying jobs is weakening demand for both home purchases and rentals.

The Bay Area continues to lose jobs across high-income sectors (-0.4% YOY), driving modest overall employment declines. These job losses have slowed compared to a year ago but remain negative YOY. Despite generating substantial spending and wealth, the AI-driven tech boom hasn’t added meaningful employment to the region.

What this means for your business

Whether you build, invest, or advise in housing markets, these employment shifts will impact your growth opportunities in 2026 and beyond:

Rental operators: Prepare for sustained demand from renters employed in healthcare and education.

Our Metro and Regional Housing research package includes analysis of the latest demand, supply, and affordability fundamentals for each metro and region as well as results from our proprietary surveys. Our consulting team continually evaluates market feasibility, absorption/pricing/product recommendations, and overall investment/expansion strategy in markets nationwide. Combining these two areas of expertise yields qualitative and quantitative insight for more intelligent decision-making.

This package provides a complete picture of housing supply, demand, and affordability through local insight, proprietary surveys, and extensive data analysis. We currently provide an overview of major housing and economic trends across 100 MSAs nationwide.

Our research services enable our clients to gauge housing market conditions and better align their business and strategic investments in the housing industry. We provide a thoughtful and unique holistic approach of both quantitative and qualitative analysis to help clients make informed housing investment decisions.

Our experienced team of consultants helps clients make sound housing investment decisions. We thrive on their success and work with many clients over multiple years and numerous projects.

Connect with me on LinkedIn

John leads JBREC’s Southern California market coverage for the Metro Analysis and Forecast reports, produces the Regional Analysis and Forecast and Homebuilder Analysis and Forecast reports, and assists with coverage of the public homebuilder space.

If you have any questions about our services or if you would like to speak to one of our experts about how we can help your business, please contact Client Relations at clientservices@jbrec.com.

Want to interview one of our experts?

Media professionals seeking expert analysis and authoritative commentary on US housing market trends, policy impacts, and industry developments can email our team for interviews, quotes, and data-driven insights.

Every week, we deliver analysis to over 40,000 subscribers with our Building Market Intelligence™ newsletter. Subscribe to our weekly BMI newsletters to stay current on pressing topics in the housing industry.

What’s ahead for housing—Insights from our 2026 Housing Market Outlook conference

...

Read the original on jbrec.com »

