10 interesting stories served every morning and every evening.
A browser extension for avoiding AI slop.
Download it for Chrome or Firefox.
This is a search tool that will only return content created before ChatGPT’s first public release on November 30, 2022.
Since the public release of ChatGPT and other large language models, the internet is being increasingly polluted by AI-generated text, images and video. This browser extension uses the Google search API to only return content published before Nov 30th, 2022, so you can be sure that it was written or produced by a human hand.
...
Read the original on tegabrain.com »
Written by me, proof-read by an LLM.
Details at end.
In one of my talks on assembly, I show a list of the 20 most executed instructions on an average x86 Linux desktop. All the usual culprits are there, mov, add, lea, sub, jmp, call and so on, but the surprise interloper is xor - “eXclusive OR”. In my 6502 hacking days, the presence of an exclusive OR was a sure-fire indicator you’d either found the encryption part of the code, or some kind of sprite routine. It’s surprising then, that a Linux machine just minding its own business, would be executing so many.
That is, until you remember that compilers love to emit a xor when setting a register to zero:
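Any small function that has to produce a zero will show it; here is a minimal sketch (not necessarily the exact example from the post):

    int zero(void) {
        // The compiler must put the value 0 into EAX, the integer return register.
        return 0;
    }

Compiled with GCC at -O2 for x86-64, the whole function is just xor eax, eax followed by ret.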
We know that exclusive-OR-ing anything with itself generates zero, but why does the compiler emit this sequence? Is it just showing off?
In the example above, I’ve compiled with -O2 and enabled Compiler Explorer’s “Compile to binary object” so you can view the machine code that the CPU sees, specifically:
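For the function above, that is likely just three bytes in total:

    31 c0    xor  eax, eax
    c3       ret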
If you change GCC’s optimisation level down to -O1 you’ll see:
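Something along these lines, with the zero spelled out as a four-byte immediate:

    b8 00 00 00 00    mov  eax, 0
    c3                ret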
The much clearer, more intention-revealing mov eax, 0 to set the EAX register to zero takes up five bytes, compared to the two of the exclusive OR. By using a slightly more obscure instruction, we save three bytes every time we need to set a register to zero, which is a pretty common operation. Saving bytes makes the program smaller, and makes more efficient use of the instruction cache.
It gets better though! Since this is a very common operation, x86 CPUs spot this “zeroing idiom” early in the pipeline and can specifically optimise around it: the out-of-order tracking system knows that the value of eax (or whichever register is being zeroed) does not depend on the previous value of eax, so it can allocate a fresh, dependency-free zeroed slot in the register renamer. And, having done that, it removes the operation from the execution queue - that is, the xor takes zero execution cycles! It’s essentially optimised out by the CPU!
You may wonder why you see xor eax, eax but never xor rax, rax (the 64-bit version), even when returning a long:
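For example, a function returning a 64-bit value (again a sketch, not necessarily the post’s exact code):

    long zero64(void) {
        // A 64-bit zero still comes out of a 32-bit xor.
        return 0;
    }

GCC at -O2 emits the same xor eax, eax followed by ret - no xor rax, rax in sight.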
In this case, even though rax is needed to hold the full 64-bit long result, by writing to eax, we get a nice effect: Unlike other partial register writes, when writing to an e register like eax, the architecture zeros the top 32 bits for free. So xor eax, eax sets all 64 bits to zero.
Interestingly, when zeroing the “extended” numbered registers (like r8), GCC still uses the d (double width, ie 32-bit) variant:
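One way to provoke this (an illustrative sketch, not the post’s example) is to pass a literal zero as the fifth integer argument, which the System V x86-64 calling convention places in r8:

    void callee(int a, int b, int c, int d, int e);

    void caller(int a, int b, int c, int d) {
        // The first four arguments are already in the right registers,
        // so the compiler only has to zero the fifth and tail-call.
        callee(a, b, c, d, 0);
    }

With GCC at -O2 this comes out as xor r8d, r8d (encoded 45 31 c0) followed by jmp callee.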
Note how it’s xor r8d, r8d (the 32-bit variant) even though, with the REX prefix (here 45), xor r8, r8 at the full width would take the same number of bytes. This probably makes something easier in the compilers, as clang does it too.
xor eax, eax saves you code space and execution time! Thanks compilers!
See the video that accompanies this post.
This post is day 1 of Advent of Compiler Optimisations 2025, a 25-day series exploring how compilers transform our code.
This post was written by a human (Matt Godbolt) and reviewed and proof-read by LLMs and humans.
Support Compiler Explorer on Patreon
or GitHub, or by buying CE products in the Compiler Explorer Shop.
Matt Godbolt is a C++ developer living in Chicago. He works for Hudson River Trading on super fun but secret things. Follow him on Mastodon
or Bluesky.
...
Read the original on xania.org »
I’m still the new person here, learning your ways, stumbling over the occasional quirk, smiling when I find the small touches that make you different. You remind me of what computing felt like before the noise. Before hype cycles and performance theatre. Before every tool needed a plugin system and a logo. You are coherent. You are deliberate. You are the kind of system that doesn’t have to shout to belong.
You carry the quiet strength of the greats, like a mainframe humming in a locked room, not chasing attention, just doing its work, year after year. Your base system feels like it was built by people who cared about the whole picture, not just the pieces. Your boot environments are like an old IBM i’s “side A / side B” IPL, a built-in escape hatch that says, we’ve thought ahead for you. You could be, you should be, the open-source mainframe: aligned with hardware lifecycles of three to five years or more, built for long-term trust, a platform people bet their uptime on. Your core design reminds me of Solaris in its best days: a stable base that commercial and community software could rely on without fear of shifting foundations.
And make uptime a design goal: a thousand-day uptime shouldn’t be folklore, it should be normal. Not a party trick, not a screenshot to boast about, but simply the natural consequence of a system built to endure. Mainframes never apologised for uptime measured in years, and neither should you. Apply updates without fear, reboot only when the kernel truly demands it, and let administrators see longevity as a feature, not a gamble.
I know you are reaching further into the desktop now. I understand why, and I can see how it might widen your reach. But here I find myself wondering: how do you keep the heartbeat of a rock-solid server while also embracing the quicker pulse of a modern desktop? I don’t pretend to have all the answers, I’m too new to you for that, but my first instinct is to lean on what you already have: the natural separation between CURRENT and RELEASE. Let those worlds move at their own pace, without asking one to carry the other’s compromises.
And now, with pkgbase in play, the stability of packages matters as much as the base system itself. The base must remain untouchable in its reliability, but I dream of a world where the package ecosystem is available in clear stability channels: from a rock-solid “production tier” you can stake a business on, to faster-moving streams where new features can flow without fear of breaking mission-critical systems. Too many times in the past, packages vanished or broke unexpectedly. I understand the core is sacred, but I wouldn’t mind if some of the wider ecosystem inherited that same level of care.
Culture matters too. One reason I stepped away from Linux was the noise, the debates that drowned out the joy of building. Please keep FreeBSD the kind of place where thoughtful engineering is welcome without ego battles, where enterprise focus and technical curiosity can sit at the same table. That spirit, the calm, shared purpose that carried Unix from the PDP-11 labs to the backbone of the Internet, is worth protecting.
There’s also the practical side: keep the doors open with hardware vendors like Dell and HPE, so FreeBSD remains a first-class citizen. Give me the tools to flash firmware without having to borrow Linux or Windows. Make hardware lifecycle alignment part of your story, major releases paced with the real world, point releases treated as refinement rather than disruption.
My hope is simple: that you stay different. Not in the way that shouts for attention, but in the way that earns trust. If someone wants hype or the latest shiny thing every month, they have Linux. If they want a platform that feels like it could simply run, and keep running, the way the best of Unix always did, they should know they can find it here. And I still dream of a future where a purpose-built “open-source mainframe” exists: a modern, reliable hardware system running FreeBSD with the same quiet presence as Sun’s Enterprise 10k once did.
And maybe, one day, someone will walk past a rack of servers, hear the steady, unhurried rhythm of a FreeBSD system still running, and smile, knowing that in a world that burns through trends, there is still something built to last.
With gratitude,
and with the wish to stay for the long run,
A newcomer who finally feels at home.
...
Read the original on www.tara.sh »
Unlike a lot of places in tech, my company, Set Studio/Piccalilli, has no outside funding. “Bootstrapped” is what the LinkedIn people say, I think.
It’s been a hard year this year. A very hard year. I think a naive person would blame it all on the seemingly industry-wide attitude of “AI can just do this for us”. While that certainly hasn’t helped — as I see it — it’s been a hard year because of a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis. It’s been a very similar year to 2020, in my opinion.
Why am I writing this? All of the above has had a really negative effect on us this year. Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that. Our reputation is everything, so being associated with that technology, as it increasingly shows us what it really is, would be a terrible move for the long term. I wouldn’t personally be able to sleep knowing I’d contributed to all of that, too.
What we do really well is produce websites and design systems that actually work for and with people. We also share our knowledge and experience via tonnes of free content on Piccalilli, funded by premium courses to keep the lights on. We don’t pepper our content with annoying adverts for companies you have no interest in.
I’ve spoken about my dream for us to run Piccalilli full time and heck, that may still happen. For that to happen though, we really needed this Black Friday period to do as well as, if not better than, it did last year. So far, that’s not happening unfortunately, but there’s still time.
I get it, money is so tight this year and companies are seemingly not investing in staff with training budgets quite like they did. We actually tried to stem that a bit by trialing a community funding model earlier in the year that I outlined in I’m getting fed up of making the rich, richer and we even started publishing some stuff.
It went down incredibly well, but when push came to shove, we fell way short in terms of funding support. Like I say, we’re not swimming in investor money, so without the support on Open Collective, as much as it hurt, we had to pull the plug. It’s a real shame — that would have been incredible — but again, I get it, money is tight.
This isn’t a “woe is me” post; that’s not how I roll. This is a post to give some context for what I’m going to ask next and how I’m trying to navigate the tough times. I’m asking folks to help us so we can try to help everyone, whether that’s with web projects that actually work for people or continuing to produce extremely high quality education material. Here are some ways you can do it.
You’ll see messaging like “this is the most important time of year for us” and it’s extremely true. To break the fourth wall slightly, people buying courses at full price is a lot rarer than you might think. So often, discount events are what keeps the lights on.
We’ve launched two courses this year — JavaScript for Everyone and Mindful Design — that sit alongside my course, Complete CSS, which we launched last year. I know you’ve probably been burned by shit courses in the past, but these three courses are far from that. I promise.
I can’t stress enough how much Mat (JavaScript for Everyone) and Scott (Mindful Design) have put into these courses this year. These two are elite-level individuals with incredible reputations and they’ve shared a seemingly impossible amount of extremely high quality knowledge in their courses. I would definitely recommend giving them your time and support because they really will transform you for the better. For bosses reading this, all three courses will pay themselves back ten-fold — especially when you take advantage of bulk discounts — trust me.
So many of you have purchased courses already and I’m forever thankful for that. I can’t stand the term “social proof” but it works. People might be on the fence about grabbing a course, and seeing one of their peers talk about how good it was can be the difference.
You might think it’s not worth posting about the courses on social media but people do see it, especially on platforms like Bluesky with their custom feeds. We see it too!
Testimonials are always welcome because we can pop those on the course marketing pages, just like on mine.
In terms of sharing the studio, if you think we’re cool, post about it! It’s all about eyes and nice words. We’ll do the rest.
We’re really good at what we do! I know every studio/agency says this, but we’re different. We’re actually different.
We’re not going to charge you through the nose for substandard work — only deploying a fraction of our team, like a lot of agencies do. I set this studio up to be the antithesis of the way these — and I’ll say it out loud — charlatans operate.
Our whole focus is on becoming your partners so you can do the — y’know — running of your business/organisation while we take the load off your shoulders. We’re hyper-efficient and we fully own projects, because running them is way above your normal duties. We get that. In fact, the most efficient way to get the most out of a studio like ours is to do exactly that.
I know “numbers goes up” is really important and yes, numbers definitely go up when we work with you. We do that without exploiting your users and customers too. There’s no deceptive patterns coming from us. We instead put everything into branding, messaging, content architecture and making everything extremely fast and accessible. That’s what makes the numbers go up for you.
We’re incredibly fairly priced too. We’re not in the business of charging ridiculous fees for our work. We’re only a small team, so our overheads are nothing compared to a lot of agencies. We carry your budgets a long way for you and genuinely give you more bang for your buck with an equitable pricing model.
We’ve got availability starting from the new year because starting projects in December is never the ideal way to do things. Getting those projects planned and ready to go is a good idea in December though, so get in touch!
I’m also slowly getting back into CSS and front-end consulting. I’ve helped some of the largest and smallest organisations, such as Harley-Davidson, the NHS and Google, write better code and work better together. Again, starting in the new year I’ll have availability for consulting and engineering support. It might just be a touch more palatable than hiring the whole studio. Again, get in touch.
I’m always transparent — maybe too transparent at times — but it’s really important for me to be honest. Man, we need more honesty.
It’s taken a lot of pride-swallowing to write this but I think it’s more important to be honest than to be unnecessarily proud. I know this will be read by someone else who’s finding the year hard, so if anything, I’m really glad they’ll feel seen at least.
Getting good leads is harder than ever, so I’d really appreciate people sharing this with their network. You’ll never regret recommending Piccalilli courses or Set Studio. In fact, you’ll look really good at what you do when we absolutely smash it out of the park.
Thanks for reading and if you’re also struggling, I’m sending as much strength your way as I can.
👋 Hello, I’m Andy and this is my little home on the web.
I’m the founder of Set Studio, a creative agency that specialises in building stunning websites that work for everyone and Piccalilli, a publication that will level you up as a front-end developer.
I’ve also got a CSS course called Complete CSS to help you get to a level in development that you never thought would be possible.
...
Read the original on bell.bz »
The Advent of Sysadmin is a 12-day Advent calendar of Linux and DevOps challenges of different difficulties that runs from December 1st to December 12th.
Each day there will be an Advent of Sysadmin scenario.
Sign up for a free account (needed to keep track of your progress) and start solving the scenarios!
If you want to check out a scenario without signing up, you can run this one that requires no registration:
...
Read the original on sadservers.com »
Large language models have made significant progress in mathematical reasoning, which serves as an important testbed for AI and could impact scientific research if further advanced. By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year. However, this approach faces fundamental limitations. Pursuing higher final answer accuracy doesn’t address a key issue: correct answers don’t guarantee correct reasoning. Moreover, many mathematical tasks like theorem proving require rigorous step-by-step derivation rather than numerical answers, making final answer rewards inapplicable. To push the limits of deep reasoning, we believe it is necessary to verify the comprehensiveness and rigor of mathematical reasoning. Self-verification is particularly important for scaling test-time compute, especially for open problems without known solutions. Towards self-verifiable mathematical reasoning, we investigate how to train an accurate and faithful LLM-based verifier for theorem proving. We then train a proof generator using the verifier as the reward model, and incentivize the generator to identify and resolve as many issues as possible in their own proofs before finalizing them. To maintain the generation-verification gap as the generator becomes stronger, we propose to scale verification compute to automatically label new hard-to-verify proofs, creating training data to further improve the verifier. Our resulting model, DeepSeekMath-V2, demonstrates strong theorem-proving capabilities, achieving gold-level scores on IMO 2025 and CMO 2024 and a near-perfect 118/120 on Putnam 2024 with scaled test-time compute. While much work remains, these results suggest that self-verifiable mathematical reasoning is a feasible research direction that may help develop more capable mathematical AI systems.
Below are evaluation results on IMO-ProofBench (developed by the DeepMind team behind DeepThink IMO-Gold) and recent mathematics competitions including IMO 2025, CMO 2024, and Putnam 2024.
DeepSeekMath-V2 is built on top of DeepSeek-V3.2-Exp-Base. For inference support, please refer to the DeepSeek-V3.2-Exp github repository.
This repository and the model weights are licensed under the Apache License, Version 2.0 (Apache 2.0).
@misc{deepseek-math-v2,
    author = {Zhihong Shao and Yuxiang Luo and Chengda Lu and Z.Z. Ren and Jiewen Hu and Tian Ye and Zhibin Gou and Shirong Ma and Xiaokang Zhang},
    title = {DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning},
    year = {2025}
}
If you have any questions, please raise an issue or contact us at service@deepseek.com.
...
Read the original on huggingface.co »
OPEN
This is open, and cannot be resolved with a finite computation.
For any $d\geq 1$ and $k\geq 0$ let $P(d,k)$ be the set of integers which are the sum of distinct powers $d^i$ with $i\geq k$. Let $3\leq d_1<d_2<\cdots <d_r$ be integers such that\[\sum_{1\leq i\leq r}\frac{1}{d_i-1}\geq 1.\]Can all sufficiently large integers be written as a sum of the shape $\sum_i c_ia_i$ where $c_i\in \{0,1\}$ and $a_i\in P(d_i,0)$?
If we further have $\mathrm{gcd}(d_1,\ldots,d_r)=1$ then, for any $k\geq 1$, can all sufficiently large integers be written as a sum of the shape $\sum_i c_ia_i$ where $c_i\in \{0,1\}$ and $a_i\in P(d_i,k)$?
Disclaimer: The open status of this problem reflects the current belief of the owner of this website. There may be literature on this problem that I am unaware of, which may partially or completely solve the stated problem. Please do your own literature search before expending significant effort on solving this problem. If you find any relevant literature not mentioned here, please add this in a comment.
The second question was conjectured by Burr, Erdős, Graham, and Li [BEGL96], who proved it for $\{3,4,7\}$.
The first question was asked separately by Erdős in [Er97] and [Er97e] (although there is some ambiguity over whether he intended $P(d,0)$ or $P(d,1)$ - certainly he mentions no gcd condition). A simple positive proof of the first question was provided (and formalised in Lean) by Aristotle thanks to Alexeev; see the comments for details.
In [BEGL96] they record that Pomerance observed that the condition $\sum 1/(d_i-1)\geq 1$ is necessary (for both questions), but give no details. Tao has sketched an explanation in the comments. It is trivial that $\mathrm{gcd}(d_1,\ldots,d_r)=1$ is a necessary condition in the second question.
Melfi [Me04] gives a construction, for any $\epsilon>0$, of an infinite set of $d_i$ for which every sufficiently large integer can be written as a finite sum of the shape $\sum_i c_ia_i$ where $c_i\in \{0,1\}$ and $a_i\in P(d_i,0)$ and yet $\sum_{i}\frac{1}{d_i-1}<\epsilon$.
See also [125].
This page was last edited 01 December 2025.
Formalised statement? Yes
Additional thanks to: Boris Alexeev, Alfaiz, Dustin Mixon, and Terence Tao
When referring to this problem, please use the original sources of Erdős. If you wish to acknowledge this website, the recommended citation format is:
T. F. Bloom, Erdős Problem #124, https://www.erdosproblems.com/124, accessed 2025-12-01
In [BEGL96], the problem is formulated in a way that only allows powers of $d_i$ greater than $d_i^0 = 1$ to be added. However, in [Er97] and [Er97e], it’s formulated so that $1$s are allowed. Incidentally, this means that all the proofs in [BEGL96] actually prove slightly stronger statements.
[Note: this comment was written before 2025/12/01, when the problem text was updated.]
Aristotle from Harmonic has solved this problem all by itself, working only from the formal statement! Type-check it online!
A formal statement of the conjecture was available in the Formal Conjectures project. Unfortunately, there is a typo in that statement, wherein the comment says $\geq 1$ in the display-style equation while the corresponding Lean says “= 1”. (That makes the statement weaker.) Accordingly, I have also corrected that issue and included a proof of the corrected statement. Finally, I removed a lot of what I believed were unnecessary aspects of the statement, and Aristotle proved that too. In the end, there are three different versions proven, of which this is my favorite:
theorem erdos_124 : ∀ k, ∀ d : Fin k → ℕ,
(∀ i, 2 ≤ d i) → 1 ≤ ∑ i : Fin k, (1 : ℚ) / (d i - 1) →
∀ n, ∃ a : Fin k → ℕ,
∀ i, ((d i).digits (a i)).toFinset ⊆ {0, 1} ∧
n = ∑ i, a i
I believe this is a faithful formalization of (a strengthening of) the conjecture stated on this page.
As mentioned by DesmondWeisenberg above, there’s an issue involving the power 1 (which corresponds to the units digit here) that means the conjecture in [BEGL96] differs from this. I believe the version in [Er97] matches the statement here, in part because it lacks a gcd condition that is obviously necessary in [BEGL96]. I do not yet have access to [Er97e] to check the statement there. The subtlety of this issue is unfortunate, given Aristotle’s achievement!
Timing-wise, Aristotle took 6 hours and Lean took 1 minute.
This is quite something, congratulations to Boris and Aristotle!
On one hand, as the nice sketch provided below by tsaf confirms, the final proof is quite simple and elementary - indeed, if one were given this problem in a maths competition (and therefore expected that a short, simple solution existed) I’d guess that something like the below would be produced. On the other hand, if something like this worked, then surely the combined talents of Burr, Erdős, Graham, and Li would have spotted it.
Normally, this would make me suspicious of such a short proof, suspecting some overlooked subtlety. But (a) I can’t see any and (b) the proof has been formalised in Lean, so clearly it just works!
Perhaps this shows what the real issue in the [BEGL96] conjecture is - namely the removal of $1$ and the addition of the necessary gcd condition. (And perhaps at least some subset of the authors were aware of this argument for the easier version allowing $1$, but this was overlooked later by Erdős in [Er97] and [Er97e], although if they were aware then one would hope they’d have included this in the paper as a remark.)
At the moment I’m minded to keep this as open, and add the gcd condition in the main statement, and note in the remarks that the easier (?) version allowing $1$ and omitting the gcd condition, which was also asked independently by Erdős, has been solved.
My summary is that Aristotle solved “a” version of this problem (indeed, with an olympiad-style proof), but not “the” version.
I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference.
I agree with your description. I also wonder whether this ‘easy’ version of the problem has actually appeared in some mathematical competition before now, which would of course pollute the training data if Aristotle had seen this solution already written up somewhere. (I only say this in the sense that knowing such a short olympiad-style proof exists makes it a nice competition problem.)
I assume you have also tried giving the harder version to Aristotle?
6 hours on what hardware? If it’s something like a consumer laptop, it’s probably easy to run 100x the compute on all Erdős problems with some datacenter? Do we have a good understanding of how Aristotle’s abilities scale with compute?
Aristotle’s solution is as follows. It is surprisingly easy.
Let $(a_n)$ be the sequence of powers of the $d_i$ (sorted, with multiplicity). For example, if $d_1=2$ and $d_2=3$, then the sequence is: $1,1,2,3,4,8,9,16,27,\ldots$.
We want to show that every positive integer is a subsequence sum. This is equivalent to $a_{n+1} -1 \leq (a_1+\dots +a_n)$. The RHS is $\sum_{i=1}^k (d_i^{e_{i,n}}-1)/(d_i-1)$, where $e_{i,n}$ is the exponent of the first power of $d_i$ that has not occurred in the first $n$ terms. This is bounded below by $\min_i (d_i^{e_{i,n}}-1)$. However, $a_{n+1}=\min_i d_i^{e_{i,n}}$. Done.
Note, there is some ambiguity in the definition of $e_{i,n}$. In the example $d_1=2, d_2=3$, we can decide arbitrarily that $a_1$ is a power of $2$ and $a_2$ is a power of $3$, so $e_{2,1}=0$ but $e_{2,2}=1$.
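For concreteness, at $n=5$ in this example the terms used so far are $1,1,2,3,4$ and the next term is $a_6=2^3=8$, so the bound reads\[\frac{2^3-1}{2-1}+\frac{3^2-1}{3-1}=7+4=11\geq \min(2^3-1,3^2-1)=7=a_6-1,\]and indeed $a_1+\cdots+a_5=11$; the same chain of inequalities holds for every $n$.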
Thank you tsaf for deciphering the proof! Interestingly, Theorem 2.3 from this paper could be thought of as a continuous-parameter loose variant of this problem and the basic proof outline (appearing on pages 13-14 in that paper) is the same: aiming to prove that the representations fill in an interval, sorting the sequence, verifying the continuous-parameter variant of condition $a_{n+1}\leq a_1+\cdots+a_n+1$, doing so by considering the first appearing term associated with each $d_i$, etc.
I am not mentioning this to diminish Aristotle’s / BorisAlexeev’s proof, on the contrary, it is quite beautiful! My point is that basic ideas reappear at many places; humans often fail to realize that they apply in a different setting, while a machine doesn’t have this problem! I remember seeing this problem before and thinking about it briefly. I admit that I haven’t noticed this connection, which is only now quite obvious to me!
For what it is worth, the Gemini and ChatGPT deep research tools did not turn up any significant new literature on this problem.
Gemini offered the simple observation that if 1 is omitted then the gcd condition becomes necessary, explained the significance of the $\sum_i \frac{1}{d_i-1} \geq 1$ condition (linking it to some parallel work on Cantor sets, particularly the “Newhouse gap lemma”), but turned up no new direct references for this problem.
ChatGPT used this very web page extensively as the main authoritative source, for instance citing the Aristotle proof, as well as the other papers cited on this page, as well as the page for the related problem [125]. As such, no new information was gleaned, but readers may find the AI-generated summary of the situation to be amusing.
As a further experiment, I gave this problem (in the weaker, solved formulation) to Gemini Deepthink with a hint to use Brown’s criterion. Interestingly, it declared that it was unlikely that Brown’s criterion was strong enough to solve this problem. Superficially this is of course a failure on the part of the AI, but an inspection of the reasoning showed that it was a fairly “honorable” mistake. It noted that if one took $d_1=3$ then infinitely often there should be no powers of any of the $d_i$ between $d_1^k$ and $d_1^{k+1}$, so that the ratio between consecutive elements could be as large as $3$. Typically, one needs the ratio of consecutive elements to be $2$ or less on the average for Brown’s criterion to apply, so Gemini concluded that heuristically this approach was unlikely to work. This is not a bad analysis actually - it just so happens that the cumulative sum of all the other powers less than $d_1^k$ is (barely) enough to overcome this gap of $3$ and reach $d_1^{k+1}$ after all. I would classify this type of error as one which a human expert could plausibly also make on this problem. I think this analysis also hints at why the stronger version of this problem is more difficult, and unlikely to be resolved by off-the-shelf tests such as Brown’s criterion.
Further update: given the same prompt, ChatGPT Pro located Aristotle’s proof (and tsaf’s summary) from this very web page and wrote it up nicely in a human-readable form. Possibly there was an option to shut off web search and test the tool’s ability to solve the problem independently without contamination, but I did not explore this.
G. Melfi in this paper has given the following related result:
A sequence $S = \{s_1, s_2,\ldots\}$ of positive integers is a complete sequence if $\Sigma(S) := \{\sum_{i=1}^\infty \epsilon_i s_i : \epsilon_i \in \{0,1\},\ \sum_{i=1}^\infty \epsilon_i < \infty\}$ contains all sufficiently large integers. Let $s \geq 1$ and let $A$ be a (finite or infinite) set of integers greater than $1$. Let $Pow(A; s)$ be the nondecreasing sequence of positive integers of the form $a^k$ with $a \in A$ and $k \geq s$. For any $s \geq 1$, $Pow(A; s)$ is complete if and $\textbf{only if}$ $\sum_{a \in A} 1/(a-1) \geq 1$.
The $\textbf{only if}$ part of this conjecture has been disproved by Melfi in the paper discussed above.
P. S. [BEGL96] also asks the following:
What can we say about the lower and upper asymptotic density of $\Sigma(Pow(A; s))$ when $A$ is finite and $\sum_{a \in A} \frac{1}{\log a} > \frac{1}{\log 2}$?
(According to page 13 of this paper).
Just to clarify, Pomerance’s observation that Diophantine approximation shows the necessity of $\sum_{a \in A} 1/(a-1) \geq 1$ only applies in the case of finite $A$, whereas Melfi’s example is for infinite $A$. (In particular, the description of Pomerance’s result in [p. 133, BEGL96] is not quite correct.)
Interestingly, (my reconstruction of) Pomerance’s argument is almost identical to Gemini’s failed heuristic argument: if $A$ is finite with $\sum_{a \in A} 1/(a-1) < 1$, then there will be infinitely many numbers $n$ that are larger than the sum of all the powers of $a$ preceding it (for this to hold, $n$ has to be slightly less than a power of $a$ for each $a$, which can be accomplished by the Kronecker approximation theorem). Hence $A$ cannot be complete.
This argument shows that the $\sum_{a \in A} 1/(a-1)=1$ case is quite delicate; at a bare minimum, it needs something like Baker’s theorem to prevent powers of different $a$ from clustering too close together, which can create potential counterexamples. (And indeed, [p. 137, BEGL96] discusses this issue for specific sets such as {3,4,7}.)
All comments are the responsibility of the user. Comments appearing on this page are not verified for correctness. Please keep posts mathematical and on topic.
...
Read the original on www.erdosproblems.com »
Experiences with the Matrix protocol, Matrix Synapse server, bridges, and Element mobile apps.
I have been hosting a Matrix server for about five years now, mostly for text chats between a few relatives and close friends, and a bridge to WhatsApp for a few more people. These are my experiences.
I don’t have many thoughts on the protocol itself.
The only thing that I don’t really understand is the decision on data replication. If a user on server A joins a room on server B, recent room data is copied from server B to server A and then kept in sync on both servers. I suppose this reduces the load on the original server at the expense of federation overhead and space on other servers. However, it also means that anything said across federation cannot be unsaid, which is ironic for a protocol/system that often comes up when talking about privacy.
Synapse is the only choice that supports bridges, which was why I wanted to try Matrix in the first place. And back in 2019-2020 this was the only choice anyway.
As of right now, I run Synapse, PostgreSQL, and coturn directly, without containerization, on a small VPS.
Works fairly reliably, supports bridges, and is more efficient than it was in 2020.
API is well documented, and allows authenticating and sending (unencrypted) messages via simple HTTP calls. At some point in time, I wanted to write a simple shell client to use with SXMO and such.
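For instance, sending a plain-text message is a single authenticated HTTP PUT against the client-server API. A rough sketch in C with libcurl (the homeserver, room ID, transaction ID and access token below are all placeholders) could look like this:

    #include <curl/curl.h>
    #include <stdio.h>

    int main(void) {
        /* Placeholders: substitute your own homeserver, room ID and token. */
        const char *url =
            "https://matrix.example.org/_matrix/client/v3/rooms/"
            "!roomid:example.org/send/m.room.message/txn1";
        const char *body = "{\"msgtype\":\"m.text\",\"body\":\"hello from a script\"}";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        struct curl_slist *headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        headers = curl_slist_append(headers, "Authorization: Bearer <access_token>");

        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT"); /* message sends are PUTs */
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res == CURLE_OK ? 0 : 1;
    }

The access token comes from a prior login call (or an existing session), and encrypted rooms need far more than this, which is exactly why only unencrypted messages are this easy.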
Does not have an admin panel
There is no admin page or panel. There was a third-party admin site, but it’s an entire site just for making HTTP calls. So I ended up writing my own.
While technically Synapse can work with an SQLite database (which at first seems like an OK choice for a small server), PostgreSQL is what you really want in practice.
Initial setup presumes that the server is going to be federated, and there is no good way to turn it off. The best workaround involves a blank whitelist of federated servers.
I don’t know the implications of disabling it.
Message retention policy can be set up server-wide, but also per-room. There are specific lines in the configuration that need to be set to actually enable a service that runs the cleanup.
Synapse keeps the room even after all of the members leave it, including federated rooms. This results in many (sometimes large) rooms without local members orphaned on the server, taking up database space.
Deleting messages (events) with attachments does not delete the attachment (because another message might refer to it?), which means that the sent files continue existing on the server indefinitely. Another privacy implication. A simple “delete all files older than X” script works great until it deletes avatars. So yeah, seems like this is something that should be handled by the Synapse server instead of cobbled-together scripts.
Even after extensive cleanup, PostgreSQL database might need to be vacuumed to reduce the disk space it takes up.
Even for my small server with only a handful of users, the database takes up a surprising amount of space.
Synapse keeps track of room states in an append-only (!) table named state_groups_state. Deleting a room does not delete the state_groups_state records. So it is never automatically cleaned up, and grows in size infinitely. It is possible to delete many of those records from the database directly, and Element (the company) provides some tool to “compress” those records, but again, this is something that should be handled by the server.
This is simply not an option in the API. Server admin can perform a “deactivate” (disable login) and “erase” (remove related data, which claims to be GDPR-compliant) on user accounts, but the accounts themselves stay on the server forever.
How this is not considered a GDPR violation is a mystery to me. Even on my tiny server, I have users who use their first name as their ID and bridged WhatsApp users whose IDs are phone numbers.
While Matrix-Element ecosystem has been catering towards government and corporate entities for some time, there have been multiple recent announcements about its future.
Specifically, Element (the company) is now providing an all-in-one Element Server Suite (ESS) to replace the current setup, including
It is intended for non-professional use, evaluations, and small to mid-sized deployments (1–100 users).
ESS Community includes 7 components/services, now requires a minimum of 2 CPUs and 2GB of RAM, and runs using… Kubernetes? IMO, this is overkill for a dozen users.
For comparison, Snikket, an all-in-one solution with similar functionality using XMPP, requires a single CPU and 128MB (!) RAM for 10 or so users.
Yes, I have seen the Ansible setup script recommended, but at this point, making setup easier does not address the issue of the extra services being required in the first place.
Also, the ESS handles account creation and calls in an entirely different way, more on that later.
Pretty great. Easy to install and set up, works really well, and needs only occasional (semi-yearly or so) updates when WhatsApp changes their web API. Does not support calls.
Same on all platforms
Element exists and looks consistent on Android, iOS, and web, making it easier for regular users and for troubleshooting.
This is silly, but while (official?) bridges support image captions, the official Element app does not. The answer in the FAQ? Get a better app. Well, OK.
Image with a caption in SchildiChat Classic (the better app).
Sometimes it can take up to a few minutes to get a message, even between two Android clients using Google Cloud Messaging. Sometimes it is nearly instant. Still unsure of the cause.
One unreliable way to tell that the server is unreachable is the endless loading bar. But even then, it eventually goes away without indicating any errors.
Then, when sending a message, the user receives “Unable to send message”. Frustration ensues.
But I know the app is trying to call the /sync endpoint. Why doesn’t it show any errors when that fails?
IIRC the first thing the app does is ask the user to back up their signing keys and enter the key password, without a simple explanation. Not a great experience for regular users.
Some people reported issues with Element losing its keys or frequently requesting to be re-verified. Thankfully I have not encountered these.
Even if you connect to a self-hosted server, Element Classic could attempt to connect to vector.im integration server and matrix.org key backup server.
Element X is now recommended as the new and better client. It is not.
Somehow, it is slower. Clicking on a conversation takes 0.5-1.0 seconds to load it, compared to an almost instant load on Classic.
Perhaps it does work better for accounts with many large rooms, but that is not my case.
Conversations are sorted by… who knows. It is neither most-recent nor alphabetical.
Element X does not support periodic background sync, so you need to set up ntfy or something similar to use Element X on a de-googled device. Seems like a simple enough fail-safe (even WhatsApp does this), but it was dropped for some reason.
This “sliding sync” option is available only for newer Synapse versions, and only if running with PostgreSQL database (which should already be the case - see above). Probably not an issue unless the user tries to connect Element X to an outdated Synapse.
Calling with Element X requires Element Call (part of ESS). This supports group calls, but… only video calls at the moment.
You also might be asked to tell your contact to install the new app.
I don’t regularly use calls, but some people I would like to invite to my server would want to use them.
A few years ago, I ended up either temporarily enabling unrestricted registration (a terrible idea), or creating my users’ accounts manually, because the “invite” matrix.to link was broken, and registration tokens did not work correctly in mobile apps.
So let’s see how it works now. Keep in mind, I am still on standalone Synapse, not ESS.
I am a user, and I want to register an account on my friend’s server. I see that Element X is now the recommended app, so let’s try that.
Click “Create account” (which, for some reason, is styled differently and does not look like a button).
But I want an account on a different server. Click “Change account provider”.
Now I can search for the server my friend is hosting, and it should appear in the list below the search.
As server admin: I do not remember if the Synapse server has to enable/keep federation for this to work.
Yes! That is what I want, why is this so verbose?
WTF. So Element X cannot create even the simplest username+password account. That is all I want, I don’t want to sign in with Google, Apple, or any other form of third-party authentication.
I was unable to register an account using Element X, so Element Classic should work better.
What difference does this make? Skip.
The current official app is telling me to use Element X. Just tried that. Click “EDIT” where it says “matrix.org” (which does not say “server”, actually) and enter the server name.
Why not? No explanation. Sure, I’ll use a web client.
Well, fuck me, I guess. Why can’t I just create an account?
As a server admin: Synapse is set to allow registrations via registration tokens, because unrestricted registration is a bad idea. I did not find where the /static/client/register path is set.
IIRC it is possible to register an account by going to a web-hosted Element app, such as app.element.io, which will allow registering an account using a registration token. But then the user has to deal with the headache of cross-verifying their mobile device to the web app (which they might never use).
So now what?
Matrix-Element is growing, building new features, and acquiring large customers (mostly government entities AFAIK). However, the new corporatesque ESS Community is not worth it in my opinion. I don’t need fancy auth, third-party IDs, group video conferencing, or even federation for that matter. But it is clear that Synapse and Element X are severely crippled and are not designed to work without these services.
I will probably switch to Snikket, which is more efficient, has timely notifications, and very smooth onboarding.
...
Read the original on yaky.dev »
The first three dimensions—length, height, and depth—are included on all topographical maps. The “fourth dimension,” or time, is also available on the website of the Swiss Federal Office of Topography (Swisstopo). In the “Journey Through Time,” a timeline displays 175 years of the country’s cartographic history, advancing in increments of 5-10 years. Over the course of two minutes, Switzerland is drawn and redrawn with increasing precision: inky shapes take on hard edges, blues and browns appear after the turn of the century, and in 2016, the letters drop their serifs.
Watching a single place evolve over time reveals small histories and granular inconsistencies. Train stations and airports are built, a gunpowder factory disappears for the length of the Cold War. But on certain maps, in Switzerland’s more remote regions, there is also, curiously, a spider, a man’s face, a naked woman, a hiker, a fish, and a marmot. These barely-perceptible apparitions aren’t mistakes, but rather illustrations hidden by the official cartographers at Swisstopo in defiance of their mandate “to reconstitute reality.” Maps published by Swisstopo undergo a rigorous proofreading process, so to find an illicit drawing means that the cartographer has outsmarted his colleagues.
It also implies that the mapmaker has openly violated his commitment to accuracy, risking professional repercussions on account of an alpine rodent. No cartographer has been fired over these drawings, but then again, most were only discovered once their author had already left. (Many mapmakers timed the publication of their drawing to coincide with their retirement.) Over half of the known illustrations have been removed. The latest, the marmot drawing, was discovered by Swisstopo in 2016 and is likely to be eliminated from the next official map of Switzerland by next year. As the spokesperson for Swisstopo told me, “Creativity has no place on these maps.”
Errors—both accidental and deliberate—are not uncommon in maps (17th-century California as an island, the omission of Seattle in a 1960s AAA map). Military censors have long transformed nuclear bunkers into nondescript warehouses and routinely pixelate satellite images of sensitive sites. Many maps also contain intentional errors to trap would-be copyright violators. The work of recording reality is particularly vulnerable to plagiarism: if a cartographer is suspected of copying another’s work, he can simply claim to be duplicating the real world— ideally, the two should be the same. Mapmakers often rely on fictitious streets, typically no longer than a block, to differentiate their accounts of the truth (Oxygen Street in Edinburgh, for example).
But there is another, less institutional reason to hide something in a map. According to Lorenz Hurni, professor of cartography at ETH Zurich, these illustrations are part inside joke, part coping mechanism. Cartographers are “quite meticulous, really high-precision people,” he says. Their entire professional life is spent at the magnification level of a postage stamp. To sustain this kind of concentration, Hurni suspects that they eventually “look for something to break out of their daily routine.” The satisfaction of these illustrations comes from their transgressive nature— the labor and secrecy required to conceal one of these visual puns.
And some of them enjoy remarkable longevity. The naked woman drawing, for example, remained hidden for almost sixty years in the municipality of Egg, in northern Switzerland. Her relatively understated shape was composed in 1958 from a swath of green countryside and the blue line of a river, her knees bending at the curve in the stream. She remained unnoticed, reclining peacefully, until 2012.
Several of the other drawings came about considerably later. In 1980, a Swisstopo cartographer traced the spider over an arachnid-shaped ice field on the Eiger mountain. It faded out over the course of the decade, retracting its spindly legs in the intermediary editions. Around the same time, another cartographer concealed a freshwater fish in a French nature preserve along the Swiss border. The fish lived in the blue circumference of a marshy lake until 1989 when, according to Swisstopo, “it disappeared from the surface of the lake, diving to the depths.”
It’s unclear how these drawings made it past the institute’s proofreaders in the first place. They may have been inserted only after the maps were approved, when cartographers are asked to apply the proofreaders’ final edits. When the maps were once printed as composite layers of different colors, cartographers could have built the drawings from the interplay of different topographical elements (the naked woman, for example, is composed of a blue line over a green-shaded area). Hurni also speculates that cartographers could have partitioned their illustrations over the corners of four separate map sheets, although no such example has (yet) been found.
Some of these clandestine drawings allude to actual topographical features: near the town of Interlaken, where an outcropping of stones approximates two eyes and a nose, the 1980 edition of the map features an angular cartoon face between the trees. (According to local legend, it’s a monk who was turned to stone as punishment for chasing a young girl off the cliff.) In the late 1990s, the same cartographer drew a hiker in the map’s margins. With boots each about the size of a house, the hiker serves a pragmatic purpose. Like a kind of topographic patch, he covers an area in the Italian Alps where the Swiss apparently lacked the necessary “information and data from the Italian geographical services.”
The marmot, the latest illustration, hides in plain sight in the Swiss Alps. His plump outline was concealed in the delicate relief shading above a glacier, which shielded him from detection for nearly five years. The mountain’s hachures— short, parallel lines that indicate the angle and orientation of a slope— double as his fur. He is mostly indistinguishable from the surrounding rock, except for his face, tail, and paws. He even fits ecologically: as an animal of the ice age, alpine marmots are comfortable at high altitudes, burrowing into frozen rock for their nine months of hibernation. In 2016, Hurni revealed his location to the public on behalf of an unnamed source.
There is a degree of winking tolerance for these drawings, which constitute something of an unofficial national tradition: the spokeswoman for Swisstopo referred me to a 1901 fish hidden in a well-known painting of Lake Lucerne at the National Council palace (probably in honor of the palace’s April 1st inauguration, which some European countries celebrate by attaching “April Fish” to the backs of shirts). Nevertheless, the marmot—along with the face and hiker—will likely be “eliminated” from Switzerland’s next official map (per a decision from the chief of cartography).
Swiss cartographers have a longstanding reputation for topographical rigor. A so-called “Seven Years War of Cartography” was even waged in the 1920s over the scale of the national maps, with the Swiss Alpine Club advocating greater topographical detail for its mountaineering members. Swisstopo is now an industry benchmark for the mountains, from its use of aerial photogrammetry (images taken first by balloons and then small planes) to aerial perspective (that natural haziness that renders distant peaks with less contrast). In 1988, they were commissioned to draw Mount Everest.
Still, the original drawings were never authorized in the first place. Perhaps a meticulous reading of next year’s Swiss maps may reveal some other nationally-celebrated animals in unfrequented bodies of water or alpine meadows. As Juerg Gilgen, a current cartographer at Swisstopo, told me “as a matter of fact, the proof-reader is also just a human being prone to failure. And cartographers are also just human beings trying to fool around.”
...
Read the original on eyeondesign.aiga.org »
Most metros are adding jobs more slowly than normal. Charlotte leads in job growth among major metros, while Austin and Denver fall far short of their historically strong pace.
High-income sectors are contracting, while Education and Healthcare are expanding faster than normal across most metros.
Employment composition matters as much as total growth for local housing market strength. Metros reliant on lower-wage job growth are likely to face softer for-sale demand.
The national labor market is softening, with implications for local housing markets. Most major metros are adding jobs more slowly than normal. We analyzed employment performance by metro and industry, comparing today’s growth to long-term trends since 2010. Red represents job losses, yellow shows slower-than-normal growth, and green represents faster-than-normal growth.
The job market drives housing demand, but the type of jobs created or lost impacts the type of housing. High-income sectors—Information, Professional Services, and Financial Activities—are shrinking across most major metros. Workers in these industries drive for-sale housing demand more than rental demand. Nationally, high-income sector employment remained flat YOY in August, well below its long-term compound annual growth of +1.6%.

The Education and Healthcare sectors account for the bulk of new jobs added in most metros and are growing faster than normal in almost every market. Many of these jobs pay lower wages on average and often generate rental demand more than homebuying activity. Nationally, education and healthcare employment rose +3.3% YOY in August, well above its long-term compound annual growth of +2.1%.
Philadelphia (+1.8% YOY) and New York (+1.7% YOY) show stronger job growth than their historical trends (+1.1% and +1.6%, respectively). However, this improvement reflects recovery from weak post-Great Financial Crisis baselines rather than genuine outperformance. Charlotte (+2.6% YOY) is a standout performer, maintaining robust job growth supported by Professional Services expansion (+4.5% YOY)—a rare bright spot for for-sale demand.

Austin (+0.8% YOY) and Denver (+0.0% YOY) are growing much more slowly than their historically strong employment trends (+3.8% and +2.3%, respectively). Tech and Professional Services jobs are declining in both markets, and even healthcare—which is expanding faster than normal in most metros—shows weak growth here. This reduction in high-paying jobs is weakening demand for both home purchases and rentals.

The Bay Area continues to lose jobs across high-income sectors (-0.4% YOY), driving modest overall employment declines. These job losses have slowed compared to a year ago but remain negative YOY. Despite generating substantial spending and wealth, the AI-driven tech boom hasn’t added meaningful employment to the region.
What this means for your business
Whether you build, invest, or advise in housing markets, these employment shifts will impact your growth opportunities in 2026 and beyond:

Rental operators: Prepare for sustained demand from renters employed in healthcare and education.
Our Metro and Regional Housing research package includes analysis of the latest demand, supply, and affordability fundamentals for each metro and region as well as results from our proprietary surveys. Our consulting team continually evaluates market feasibility, absorption/pricing/product recommendations, and overall investment/expansion strategy in markets nationwide. Combining these two areas of expertise yields qualitative and quantitative insight for more intelligent decision-making.
This package provides a complete picture of housing supply, demand, and affordability through local insight, proprietary surveys, and extensive data analysis. We currently provide an overview of major housing and economic trends across 100 MSAs nationwide.
Our research services enable our clients to gauge housing market conditions and better align their business and strategic investments in the housing industry. We provide a thoughtful and unique holistic approach of both quantitative and qualitative analysis to help clients make informed housing investment decisions.
Our experienced team of consultants helps clients make sound housing investment decisions. We thrive on their success and work with many clients over multiple years and numerous projects.
John leads JBREC’s Southern California market coverage for the Metro Analysis and Forecast reports, produces the Regional Analysis and Forecast and Homebuilder Analysis and Forecast reports, and assists with coverage of the public homebuilder space.
If you have any questions about our services or if you would like to speak to one of our experts about how we can help your business, please contact Client Relations at clientservices@jbrec.com.
...
Read the original on jbrec.com »