10 interesting stories served every morning and every evening.
Christmas is often regarded as a time for goodwill, but one young UK couple’s act of kindness 50 years ago changed their lives forever.
On 23 December 1975, Rob Parsons and his wife Dianne were preparing for Christmas at their Cardiff home when they heard a knock at the door.
On their doorstep stood a man with a bin bag containing his possessions in his right hand and a frozen chicken in his left.
Rob studied the man’s face and vaguely remembered him as Ronnie Lockwood, someone he would occasionally see at Sunday school as a boy and whom he had been told to be kind to because Ronnie was a “bit different”.
“I said ‘Ronnie, what’s with the chicken?’ He said ‘somebody gave it to me for Christmas’. And then I said two words that changed all of our lives.
“And I’m not exactly sure why I said them. I said ‘Come in’.”
...
Read the original on www.bbc.co.uk »
For more than a century, people have considered Alzheimer’s disease (AD) an irreversible illness. Consequently, research has focused on preventing or slowing it, rather than recovery. Despite billions of dollars spent on decades of research, there has never been a clinical trial of any drug to reverse and recover from AD.
A research team from Case Western Reserve University, University Hospitals (UH) and the Louis Stokes Cleveland VA Medical Center has now challenged this long-held dogma in the field, testing whether brains already badly afflicted with advanced AD could recover.
The study, led by Kalyani Chaubey, from the Pieper Laboratory, was published online Dec. 22 in Cell Reports Medicine. Using diverse preclinical mouse models and analysis of human AD brains, the team showed that the brain’s failure to maintain normal levels of a central cellular energy molecule, NAD+, is a major driver of AD, and that maintaining proper NAD+ balance can prevent and even reverse the disease.
NAD+ levels decline naturally across the body, including the brain, as people age. Without proper NAD+ balance, cells eventually become unable to execute many of the critical processes required for proper functioning and survival. In this study, the team showed that the decline in NAD+ is even more severe in the brains of people with AD, and that this same phenomenon also occurs in mouse models of the disease.
While AD is a uniquely human condition, it can be studied in the laboratory with mice that have been genetically engineered to express genetic mutations known to cause AD in people.
The researchers used two of these mouse models: One carried multiple human mutations in amyloid processing; the other carried a human mutation in the tau protein.
Amyloid and tau pathology are two of the major early events in AD. Both lines of mice develop brain pathology resembling AD, including blood-brain barrier deterioration, axonal degeneration, neuroinflammation, impaired hippocampal neurogenesis, reduced synaptic transmission and widespread accumulation of oxidative damage. These mice also develop the characteristics of severe cognitive impairments seen in people with AD.
After finding that NAD+ levels in the brain declined precipitously in both human and mouse AD, the research team tested whether preventing loss of brain NAD+ balance before disease onset or restoring brain NAD+ balance after significant disease progression could prevent or reverse AD, respectively.
The study was based on their previous work, published in Proceedings of the National Academy of Sciences USA, showing that restoring the brain’s NAD+ balance achieved pathological and functional recovery after severe, long-lasting traumatic brain injury. They restored NAD+ balance by administering a now well-characterized pharmacologic agent known as P7C3-A20, developed in the Pieper lab.
Remarkably, not only did preserving NAD+ balance protect mice from developing AD, but delayed treatment in mice with advanced disease also enabled the brain to fix the major pathological events driven by the disease-causing genetic mutations.
Moreover, both lines of mice fully recovered cognitive function. This was accompanied by normalized blood levels of phosphorylated tau 217, a recently approved clinical biomarker of AD in people, providing confirmation of disease reversal and highlighting an objective biomarker that could be used in future clinical trials for AD recovery.
“We were very excited and encouraged by our results,” said Andrew A. Pieper, the study’s senior author, a professor at the Case Western Reserve School of Medicine and director of the Brain Health Medicines Center, Harrington Discovery Institute at UH. “Restoring the brain’s energy balance achieved pathological and functional recovery in both lines of mice with advanced Alzheimer’s. Seeing this effect in two very different animal models, each driven by different genetic causes, strengthens the new idea that recovery from advanced disease might be possible in people with AD when the brain’s NAD+ balance is restored.”
Pieper also holds the Morley-Mather Chair in Neuropsychiatry at UH and the CWRU Rebecca E. Barchas, MD, DLFAPA, University Professorship in Translational Psychiatry. He serves as psychiatrist and investigator in the Louis Stokes VA Geriatric Research Education and Clinical Center.
The results prompt a paradigm shift in how researchers, clinicians and patients can think about treating AD in the future.
“The key takeaway is a message of hope—the effects of Alzheimer’s disease may not be inevitably permanent,” Pieper said. “The damaged brain can, under some conditions, repair itself and regain function.”
“Through our study, we demonstrated one drug-based way to accomplish this in animal models, and also identified candidate proteins in the human AD brain that may relate to the ability to reverse AD,” Chaubey said.
Pieper emphasized that current over-the-counter NAD+-precursors have been shown in animal models to raise cellular NAD+ to dangerously high levels that promote cancer. The pharmacological approach in this study, however, uses a pharmacologic agent (P7C3-A20) that enables cells to maintain their proper balance of NAD+ under conditions of otherwise overwhelming stress, without elevating NAD+ to supraphysiologic levels.
“This is an important factor when considering patient care, and clinicians should consider the possibility that therapeutic strategies aimed at restoring brain energy balance might offer a path to disease recovery,” Pieper said.
This work also encourages new research into complementary approaches and eventual testing in patients, and the technology is being commercialized by Cleveland-based company Glengary Brain Health, which Pieper co-founded.
“This new therapeutic approach to recovery needs to be moved into carefully designed human clinical trials to determine whether the efficacy seen in animal models translates to human patients,” Pieper said. “Additional next steps for the laboratory research include pinpointing which aspects of brain energy balance are most important for recovery, identifying and evaluating complementary approaches to Alzheimer’s reversal, and investigating whether this recovery approach is also effective in other forms of chronic, age-related neurodegenerative disease.”
...
Read the original on case.edu »
I’ve been reading Lord of the Rings for two months and I’m just at the end of the first part. It’s not because I’m not enjoying it. It’s one of the most enjoyable reading experiences I can remember.
From the beginning, I’ve read the whole thing aloud. I’ve found reading aloud helpful for staying engaged — limiting myself to mouth-speed rather than eye-speed means I won’t rush, miss important details, and then lose interest, which has always been a problem for me.
At first I was anxious about reading a 1,500-page book this way, because it would take so long. But, as someone pointed out to me, if I’m enjoying it, why would I want to be done with it sooner?
So I tried slowing down even more, and discovered something. I slowed to a pace that felt almost absurd, treating each sentence as though it might be a particularly important one. I gave each one maybe triple the usual time and attention, ignoring the fact that there are hundreds of pages to go.
This leisurely pace made Middle-Earth blossom before my eyes. When I paused after each comma, and let each sentence ring for a small moment after the period, the events of the story reached me with more weight and strength. That extra time gave space for Tolkien’s images and moods to propagate in my mind, which they did automatically.
Some part of me still wanted to rush and get on with it, to make good time, to gloss over the songs and lore to get to Moria and Mount Doom and the other marquee moments of the story. But the more I ignored that impulse, the better the experience got.
By offering the book about triple the usual amount of attentiveness, I was getting about triple the storyness (i.e. meaning, engagement, literary pleasure). Whatever the thing is that I’m seeking when I pick up a novel in the first place, there’s much more of it available at this pace.
This effect reminded me of a paradox around eating I recognized long ago. When you slow down your eating speed, say to half or a third your default speed, you get much more enjoyment out of a smaller amount of food. The extra attention given to each bite allows more of the “good stuff,” whatever that is exactly, to reach you.
What’s paradoxical is that it’s precisely the seeking of that “good stuff” that normally drives me to eat so quickly, and miss most of what I’m seeking. When you try to barrel ahead to access the good stuff quicker, you get less of it in the end. Slow down and much more of it is released.
And it’s released automatically, in both reading and eating. You don’t have to search it out. The good stuff (the meaning in the text, the pleasure in the eating) just rises up to meet you in that extra time you give it. Slowing down, and offering more time to the act of consumption, immediately increases reading comprehension (and eating comprehension).
Both are analogous to slowing down while you vacuum a carpet. If you pass the vacuum head too quickly, you miss half the dirt. Slow down, and you can hear how much more grit is sent skittering up the tube. The suction and bristles are working, but they need more time to do their work fully, to draw up the deeper-lying stuff.
It seems that my default consumption speeds for reading and eating (and maybe everything else) reduce the rewards of those things significantly, undermining the point of doing either.
Part of it is my own impatience. But I also suspect that modern living, with its infinite supply of consumables, tends to push our rate-of-intake dials too high. I’m not going to run out of books, or snacks, or opportunities to learn something. There’s always more, so not every crust of bread or printed page needs to be appreciated fully.
Internally though, the mind is juggling like Lucy and Ethel on the conveyor belt at the chocolate factory. Our receptors for meaning and appreciation, like the vacuum head, need more time to do their full work, to make all the connections they’re designed to make.
It might sound like I’m just offering clichés — less is more, stop and smell the roses, take your time — and I guess I am. But clichés suffer the same issue: they are often profound insights, consumed and passed on too rapidly for their real meaning to register anymore. You really should stop and smell the roses, as you know if you’re in the habit of doing that.
At least see what happens when you reduce your consumption speed — of anything, but especially books, information, and food — by a half, or two thirds. Notice (1) that something in you really wants to plow through at the highest viable setting, and (2) how much more of the reward is released when you slow down anyway.
As far as I can tell, almost everything becomes more satisfying when you give it more time and intention, even things like checking the mailbox or writing a shopping list.
Slowing down your rate of consumption will inevitably change what you want to consume. Reading throwaway news articles or AI slop with great care and attention is only going to show you how empty of value it is. Reading dense writing in inky old books, crafted for your mind by great masters, becomes easier without the rushed pace, and the meaning just blooms out of it.
Same with food. Try to savor a cheap, waxy “chocolate” bar, or a bag of store-brand cheese puffs, and you discover a harsh taste that you don’t want to look at too closely. Enjoy a homemade pastry with great attention, and discover there’s even more in it than you realized.
Mass production is good in so many ways, but the faster we tend to consume its fruits, the more we end up seeking things for their glossy, candied surfaces. The more we go for these surface-level rewards, the more the culture focuses on offering only that part — such as TikTok videos, processed food, CGI-forward movies, and public discourse in the form of unexamined talking points.
Who knows how far we’ve drifted from the best modes of consuming the things we value. Once something becomes a norm, it seems like an appropriate standard, no matter how much has been lost. Apparently, reading silently and alone was unusual until as late as the 18th century. Certainly sit-down meals and cooking at home were.
I don’t mean to sound like a scold. Let’s say none of this is morally good or bad. It’s just that in so much of what we do, we could be getting much more of the part of it that we really seek — but it’s only available at slower speeds.
If you’re curious, try consuming things more slowly, so slowly it seems silly to others — say a third your habitual speed — and see what rises up to meet you.
Recently I opened a discussion forum for Raptitude readers who want to give something up for the month of December (alcohol, social media, snacks, etc).
It’s been a real success, and many people want to do something similar in January. If you want to quit something, or just give it up for a month, you’re invited to join.
Follow the link at the end of this post to get an invite.
...
Read the original on www.raptitude.com »
In the introduction to “The New Yorker Index 1992,” a twenty-page catalogue of everything the magazine published that year, the staff writer John McPhee acknowledged a ritual familiar to many New Yorker readers: tackling a stack of unread issues. Instead of catching up at home, he’d schlep his copies up to New Hampshire and read in the middle of a lake, while lying in a canoe. With those issues dispatched, he’d call the New Yorker office and ask the librarian for help locating other stories he wanted to read: “Hello, Helen, in what issue did [the staff writer Thomas] Whiteside tee up the American latex tomato? Whose was the thing about the grass at Wimbledon?” (The thing was McPhee’s, of course.)
Exploring past New Yorker pieces is now a lot easier (and more portable). As of this week, our full archive is available to read at newyorker.com. On top of what was previously accessible, we’ve added more than a hundred thousand articles from more than four thousand issues, a stack hefty enough to sink your canoe. Not only is everything from the 1992 index accounted for—Susan Orlean on the inner workings of a supermarket, Talk of the Town stories about “urinals (art)” and “urinals (not art)”—but also John Updike’s 1961 short story “A & P” and Calvin Tomkins’s Profile of Marcel Duchamp. There’s work by Jorge Luis Borges and Susan Sontag, Ralph Ellison and Louise Glück. There are articles about Frank Sinatra and Michael Jordan, royals and rock stars, cowboys and clowns. All in all, there are more than thirty-one thousand Talk of the Town stories; twenty-four hundred Reporter at Large pieces; more than thirteen thousand works of fiction and fourteen thousand poems; three thousand Letters from everywhere, from Abu Dhabi to Zimbabwe; and fifteen hundred “Annals of” everything, from “haberdashery” to “veterinary medicine.”
While the complete digital archive may not have the same charm as magazines piled on the nightstand, there is now a single home for every issue—a place to peruse covers, scan tables of contents, and choose what to read next. Better still, if you don’t happen to have the phone number of our librarian, upgraded search capabilities allow you to hunt down “Whiteside” or “Wimbledon,” “vaping” or “vampires,” and sort results by date of publication. We’ve also made use of A.I. to add short summaries where they didn’t previously appear, making it easier to discern what an article is about. (This is, after all, a magazine in which the headline “Measure for Measure” might lead to an essay not on Shakespeare’s comedy but on the rise of the metric system.)
The magazine’s centenary celebrations, which kicked off in February, provide a wonderful occasion to get reacquainted with our rich history. Whether you are looking for something specific, going down a rabbit hole, or simply catching up, the newly expanded archive is designed to make a hundred years of writing more accessible than ever. Subscribers enjoy unlimited access; if you aren’t a subscriber yet, become one today.
We’ll continue to highlight some of our past favorites in the Classics newsletter, on our home page, and elsewhere, but consider this an open invitation to dive into the archive on your own. If you do choose to read on the water, please be careful—an iPad dropped overboard won’t hold up quite as well as a copy of the print magazine. ♦
...
Read the original on www.newyorker.com »
Python 3.15’s interpreter for Windows x86-64 should hopefully be 15% faster
Some time ago I posted an apology piece
for Python’s tail calling results. I apologized for communicating performance results without noticing that a compiler bug had occurred.
I can proudly say today that I am partially retracting that apology, but only for two platforms—macOS AArch64 (XCode Clang) and Windows x86-64 (MSVC).
In our own experiments, the tail calling interpreter for CPython was found to beat the computed goto interpreter by 5% on pyperformance on AArch64 macOS using XCode Clang, and by roughly 15% on pyperformance on Windows on an experimental internal version of MSVC. The Windows build is against a switch-case interpreter, but in theory this shouldn’t matter too much; more on that in the next section.
This is, of course, a hopefully accurate result. I tried to be more diligent here, but I am of course not infallible. However, I have found that sharing early and making a fool of myself often works well, as it has led to people catching bugs in my code, so I shall continue doing so :).
Also this assumes the change doesn’t get reverted later in Python 3.15’s development cycle.
Just a recap. There are two popular current ways of writing C-based interpreters.
The first is the switch-case interpreter, where we just switch on the opcode to reach the correct instruction handler.
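A minimal sketch of that shape, on a toy accumulator machine with made-up opcodes (not CPython’s actual loop):

typedef enum { OP_INC, OP_DEC, OP_HALT } opcode;

int run_switch(const opcode *code) {
    int acc = 0;
    for (;;) {
        switch (*code++) {          // jump #1: the switch dispatch...
        case OP_INC: acc++; break;  // jump #2: ...back to the loop top
        case OP_DEC: acc--; break;
        case OP_HALT: return acc;
        }
    }
}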
And the other popular way is a GCC/Clang extension called labels-as-values/computed gotos.
Which is basically the same idea, but we instead jump to the address of the next label. Traditionally, the key optimization here is that it needs only one jump to go to the next instruction, while in the switch-case interpreter a naive compiler would need two jumps.
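The same toy machine with computed gotos might look like this (again just a sketch; && takes the address of a label, which is the GCC/Clang extension):

int run_goto(const opcode *code) {
    // One dispatch table entry per opcode, holding label addresses.
    static void *dispatch[] = { &&op_inc, &&op_dec, &&op_halt };
    int acc = 0;
    goto *dispatch[*code++];  // a single indirect jump per instruction
op_inc:
    acc++;
    goto *dispatch[*code++];
op_dec:
    acc--;
    goto *dispatch[*code++];
op_halt:
    return acc;
}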
With modern compilers, however, the benefit of computed gotos is a lot smaller, mainly because modern compilers and modern hardware have both gotten better. In Nelson Elhage’s excellent investigation of the next kind of interpreter, the speedup of computed gotos over switch-case on modern Clang was only in the low single digits on pyperformance.
A third way, suggested decades ago but until recently not entirely feasible, is the call/tail-call threaded interpreter. In this scheme, each bytecode handler is its own function, and we tail-call from one handler to the next in the instruction stream:
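Sketched on the same toy machine (CPython’s real handlers thread much more interpreter state through as arguments):

typedef int handler_fn(const opcode *code, int acc);
static handler_fn op_inc, op_dec, op_halt;              // forward declarations
static handler_fn *const table[] = { op_inc, op_dec, op_halt };

static int op_inc(const opcode *code, int acc) {
    return table[*code](code + 1, acc + 1);  // dispatch is a call in tail position
}
static int op_dec(const opcode *code, int acc) {
    return table[*code](code + 1, acc - 1);
}
static int op_halt(const opcode *code, int acc) {
    (void)code;
    return acc;  // execution starts with table[code[0]](code + 1, 0)
}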
This wasn’t too feasible in C for one main reason—tail call optimization was merely an optimization. It’s something the C compiler might do, or might not do. This means if you’re unlucky and the C compiler chooses not to perform the tail call, your interpreter might stack overflow!
Some time ago, Clang introduced __attribute__((musttail)), which allowed for mandating that a call must be tail-called. Otherwise, the compilation will fail. To my knowledge, the first time this was popularized for use in a mainstream interpreter was in
Josh Haberman’s Protobuf blog post.
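Applied to the toy dispatch above, that looks something like:

static int op_inc(const opcode *code, int acc) {
    // The compiler must emit this as a real tail call, or refuse to compile.
    __attribute__((musttail)) return table[*code](code + 1, acc + 1);
}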
Later on, Haoran Xu noticed that the GHC calling convention combined with tail calls produced efficient code. They used this for their baseline JIT in a paper and termed the technique
Copy-and-Patch.
So where are we now?
After using a fixed XCode Clang, our performance numbers on CPython 3.14/3.15 suggest that the tail calling interpreter does provide a modest speedup over computed gotos. Around the 5% geomean range on pyperformance.
To my understanding, uv already ships Python 3.14 on macOS with tail calling, which might be responsible for some of the speedups you see on there. We’re planning to ship the official 3.15 macOS binaries on python.org with tail calling as well.
However, you’re not here for that. The title of this blog post is clearly about MSVC Windows x86-64. So what about that?
[!CAUTION] The features for MSVC discussed below are to my knowledge, experimental. They are not guaranteed to always be around unless the MSVC team decide to keep them. Use at your own risk!
These are the preliminary pyperformance results for CPython on MSVC with tail-calling vs switch-case. Any number above 1.00x is a speedup (e.g. 1.01x == 1% speedup); anything below 1.00x is a slowdown. The speedup is a geometric mean of around 15-16%, with a range from a ~60% slowdown (one or two outliers) to a 78% speedup. However, the key thing is that the vast majority of benchmarks sped up!
[!WARNING] These results are on an experimental internal MSVC compiler, public results below.
To verify this and make sure I wasn’t wrong yet again, I checked the results on my machine with Visual Studio 2026. These are the results from
this issue.
So yeah, the speedups are real! For a large-ish library like xDSL, we see a 14% speedup, while for smaller microbenchmarks like nbody and spectralnorm, the speedups are greater.
Thanks to Chris Eibl and Brandt Bucher, we managed to get the
PR for this
on MSVC over the finish line. I also want to sincerely thank the MSVC team. I can’t say this enough: they have been a joy to work with and I’m very impressed by what they’ve done, and I want to congratulate them on releasing Visual Studio 2026.
This is now listed in the What’s New for 3.15 notes:
Builds using Visual Studio 2026 (MSVC 18) may now use the new tail-calling interpreter. Results on an early experimental MSVC compiler reported roughly 15% speedup on the geometric mean of pyperformance on Windows x86-64 over the switch-case interpreter. We have observed speedups ranging from 15% for large pure-Python libraries to 40% for long-running small pure-Python scripts on Windows. (Contributed by Chris Eibl, Ken Jin, and Brandt Bucher in gh-143068. Special thanks to the MSVC team including Hulon Jenkins.)
This is the documentation for [[msvc::musttail]].
Where exactly do the speedups come from?
I used to believe that tail calling interpreters get their speedup from better register use. While I still believe that now, I suspect it is not the main reason for the speedups in CPython. My main guess now is that tail calling resets compiler heuristics to sane levels, so that compilers can do their jobs.
Let me show an example. At the time of writing, CPython 3.15’s interpreter loop is around 12k lines of C code. That’s 12k lines in a single function for the switch-case and computed goto interpreters.
This has caused many issues for compilers in the past, too many to list in fact. I have a EuroPython 2025 talk about this. In short, this overly large function breaks a lot of compiler heuristics.
One of the most beneficial optimisations is inlining. In the past, we’ve found that compilers sometimes straight up refuse to inline even the simplest of functions in that 12k loc eval loop. I want to stress that this is not the fault of the compiler. It’s actually doing the correct thing—you usually don’t want to increase the code size of something already super large. Unfortunately, this doesn’t bode well for our interpreter.
You might say just write the interpreter in assembly! However, the whole point of this exercise is to not do that.
Ok, enough talk; let’s take a look at the code now. Taking a real example, we examine BINARY_OP_ADD_INT, which adds two Python integers. Cleaning up the code so it’s readable, things look like this:
TARGET(BINARY_OP_ADD_INT) {
    // Increment the instruction pointer.
    _Py_CODEUNIT* const this_instr = next_instr;
    frame->instr_ptr = next_instr;
    next_instr += 6;
    // Read the operands off the stack.
    _PyStackRef left = stack_pointer[-2];
    _PyStackRef right = stack_pointer[-1];
    // Check that LHS is an int.
    PyObject *value_o = PyStackRef_AsPyObjectBorrow(left);
    if (!_PyLong_CheckExactAndCompact(value_o)) {
        JUMP_TO_PREDICTED(BINARY_OP);
    }
    // Check that RHS is an int.
    // … (same code as above for LHS)
    // Add them together.
    PyObject *left_o = PyStackRef_AsPyObjectBorrow(left);
    PyObject *right_o = PyStackRef_AsPyObjectBorrow(right);
    _PyStackRef res = _PyCompactLong_Add((PyLongObject *)left_o, (PyLongObject *)right_o);
    // If the addition fails, fall back to the generic instruction.
    if (PyStackRef_IsNull(res)) {
        JUMP_TO_PREDICTED(BINARY_OP);
    }
    // Close the references.
    PyStackRef_CLOSE_SPECIALIZED(left, _PyLong_ExactDealloc);
    PyStackRef_CLOSE_SPECIALIZED(right, _PyLong_ExactDealloc);
    // Write to the stack, and dispatch.
    stack_pointer[-2] = res;
    stack_pointer += -1;
    DISPATCH();
}
Seems simple enough. Let’s take a look at the assembly for switch-case on VS 2026. Note again that this is a non-PGO build for easy source information; PGO generally makes some of these problems go away, but not all of them:
Huh… none of our functions were inlined. Surely that must mean they were too big or something, right? Let’s look at PyStackRef_CLOSE_SPECIALIZED:
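Roughly paraphrased (an illustration of the helper’s shape, not CPython’s exact source), it is just a tiny guarded decref that calls the known deallocator directly, exactly the kind of function you would expect to be inlined:

// Rough paraphrase for illustration only, not CPython's exact definition:
// close a reference whose type is statically known, so deallocation can
// skip the generic type dispatch.
static inline void
PyStackRef_CLOSE_SPECIALIZED(_PyStackRef ref, destructor destruct)
{
    PyObject *obj = PyStackRef_AsPyObjectBorrow(ref);
    if (!_Py_IsImmortal(obj) && --obj->ob_refcnt == 0) {
        destruct(obj);
    }
}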
Here’s how BINARY_OP_ADD_INT looks with tail calling on VS 2026 (again, no PGO):
if (!_PyLong_CheckExactAndCompact(left_o)) {
00007FFC67164785 cmp qword ptr [rax+8],rdx
00007FFC67164789 jne _TAIL_CALL_BINARY_OP_ADD_INT@@_A+149h (07FFC67164879h)
00007FFC6716478F mov r9,qword ptr [rax+10h]
00007FFC67164793 cmp r9,10h
00007FFC67164797 jae _TAIL_CALL_BINARY_OP_ADD_INT@@_A+149h (07FFC67164879h)
res = _PyCompactLong_Add((PyLongObject *)left_o, (PyLongObject *)right_o);
00007FFC6716479D mov eax,dword ptr [rax+18h]
00007FFC671647A0 and r9d,3
00007FFC671647A4 and r8d,3
00007FFC671647A8 mov edx,1
00007FFC671647AD sub rdx,r9
00007FFC671647B0 mov ecx,1
00007FFC671647B5 imul rdx,rax
00007FFC671647B9 mov eax,dword ptr [rbx+18h]
00007FFC671647BC sub rcx,r8
00007FFC671647BF imul rcx,rax
00007FFC671647C3 add rcx,rdx
00007FFC671647C6 call medium_from_stwodigits (07FFC6706E9E0h)
00007FFC671647CB mov rbx,rax
PyStackRef_CLOSE_SPECIALIZED(value, _PyLong_ExactDealloc);
00007FFC671647EB test bpl,1
00007FFC671647EF jne _TAIL_CALL_BINARY_OP_ADD_INT@@_A+0ECh (07FFC6716481Ch)
00007FFC671647F1 add dword ptr [rbp],0FFFFFFFFh
00007FFC671647F5 jne _TAIL_CALL_BINARY_OP_ADD_INT@@_A+0ECh (07FFC6716481Ch)
...
Read the original on fidget-spinner.github.io »
We have a problem since the new upgrade.
“10,000-message limit reached. Messages sent before 26 September 2025 are hidden - Restore Access” appeared,
so the messages before that date cannot be accessed anymore.
When was this hard restriction implemented? In v11?
We are a school, and since COVID we have been running this Mattermost instance with over 2,000 active users and 470,000 posts.
Is September 26 a date calculated from the 10,000-message limit, or is it just access to the last months’ messages?
...
Read the original on github.com »
The worst toy is one with many pieces that my kids dump on the ground and then play with for only 2 minutes. That makes a cleanup-to-playtime ratio of 10 minutes of cleanup for 2 minutes of play,
sucking away my life as a parent.
A beautiful toy is one that the kids play with a lot, over a long time, and that isn’t hard to clean up.
Comparing the toys I scored high with the toys I scored low, here are the principles that I think give a toy a high score.
The high-scoring toys can become many different kinds of objects. At our house, they are robots, they are rocket ships. They are a fishing hook that then we go fishing with.
The giant magnet tiles and small magnet tiles also become containers for a narrative to play out. The magnet tiles are often present boxes to deliver birthday presents to each other, or houses that other toys live inside of. The giant tiles are houses, rocket ships, nap pods, or shops. I would rather have a pile of giant magnet tiles than one “play store”.
By contrast, the pieces in the Minecraft toy are each a specific thing: a tree, water, or lava. There are fewer world-building possibilities, with everything fitting into the strong frame that the toy offers. It makes sense that the kids grow bored of playing with it when it has fewer “games” to offer.
Each piece of the high scoring toys has a fun relationship with the others. If the pieces are different, they are different enough that it doesn’t take a lot of thought to choose between them.
The toy below looks less fun to me because it looks more fiddly. If I were playing, I would need to figure out whether I want a slightly bent angle or a straight one, and that doesn’t seem like a fun choice.
Apparently every toy that I find easiest to clean up has magnets on it. Maybe I feel the satisfaction of clicking them together as I clean them up. Cleaning becomes a little like playing.
With the high scoring toys, the magnets are strong, and the connection between parts feels satisfying when you make it.
On the other side, the Minecraft toy magnets are less strong and feel less satisfying to put together. It doesn’t have a satisfying sense of being complete when you stick them together; it’s more like a temporary paste.
The toy Clixo seems cool: flexible play, elegant shapes, and magnetic. I predict it would be a top scoring toy.
...
Read the original on joannabregan.substack.com »
I bought a MacBook Air M2. As of writing, it’s very affordable, with the 16 GB RAM, 256 GB SSD, 13.6” model available for $750. Also as of writing, Asahi Linux doesn’t support anything newer than the M2.
* 2011-2015: MacBook Air 13.3” with Intel Core i5 1.8 GHz, 8 GB of RAM, and 256 GB SSD (aftermarket upgrade from OWC). I installed Arch Linux on it with the i3 window manager.
* 2014-2018: Dell XPS 13 Developer Edition. I used the Ubuntu 14.04 that came with it with the i3 window manager.
* 2018-2024: Lenovo Thinkpad X1 Carbon Gen 6 with Intel Core i7 8640U, 16 GB of RAM, and 1 TB SSD. I installed Arch Linux on it with Sway.
For installing Asahi Linux there’s a one-liner which you can paste into the Terminal. It worked very well. My only complaint is that it seemed to take hours to copy root.img and boot.img over at 150 KB/s.
Since I intended to run it with the Sway Window Manager, and storage space is precious, I installed Fedora minimal.
nmcli device wifi list
nmcli device wifi connect 'my_ssid' password 'mypassword'
and then I installed a bunch of packages I use, such as:
sudo dnf install @sway-desktop-environment fish alacritty rofi ruff rclone pavucontrol-qt i3status mako pass syncthing maim xdg-user-dirs firefox rustup openssl-devel ncdu fd-find neovim
Then, I cloned my personal dotfile git repo and ran setup.sh. Of course, my configs weren’t meant for the MacBook, so I had to make some changes (which I’ve pushed to the dotfiles).
By default, the whole row containing the notch is disabled, leading to a large-bezels look which I personally don’t like. There has got to be a way to use that screen real estate nicely!
I re-enabled that part of the screen with
grubby --args=apple_dcp.show_notch=1 --update-kernel=ALL
Then, I put the Sway bar on the top to make a seamless appearance where the left and right side are used for useful information but the middle part is all black. By experimentation I found that the notch is 56px tall.
bar {
    position top
    status_command i3status
    modifier $mod
    tray_output primary
    # the height of the m2 macbook air’s notch???
    height 56
    colors {
        background #000000
        statusline #cfcfd9
        separator #000000
        # border background text
        focused_workspace #0c0c0c #413459 #cfcfd9
        active_workspace #0c0c0c #413459 #cfcfd9
        inactive_workspace #0c0c0c #0c0c0c #cfcfd9
        urgent_workspace #2f343a #ff3300 #ffffff
    }
}
The full i3status shows a lot of information which might get occluded by the notch, and it doesn’t work with the MacBook battery levels by default, so I had to update the config:
general {
    colors = true
    interval = 5
}

order += "wireless _first_"
order += "ethernet _first_"
order += "battery 0"
order += "tztime local"

wireless _first_ {
    format_up = "W: (%quality at %essid) %ip"
    format_down = "W: down"
}

ethernet _first_ {
    format_up = "E: %ip (%speed)"
    format_down = "E: down"
}

battery 0 {
    format = "%status %percentage"
    hide_seconds = true
    path = "/sys/class/power_supply/macsmc-battery/uevent"
}

tztime local {
    format = "%Y-%m-%d %H:%M:%S"
}
I usually don’t like having the bar on the top (as with macOS), since you won’t be able to move your mouse cursor to the top edge to, say, click on tabs. Despite being mostly keyboard-driven, clicking on browser tabs with the mouse is something I still do often.
To fix that, I prevented the mouse cursor from entering the bar on the top, with
# use swaymsg -t get_inputs for the touchpad’s identifier
input 1452:849:Apple_MTP_multi-touch map_to_region 0 56 2560 1608
In around September 2025, I switched from the native Swaybar to Waybar. Somehow, I was running into some issues with swaymsg's handling of battery levels, and my computer ricing was due for a slight visual update anyway. It’s nice to save a tiny bit of screen real estate with icons instead of pure text, but of course, it is somewhat slower than Swaybar as it needs to render graphical stuff. The Waybar is still situated behind the notch.
The new waybar config and css are at waybarconfig and waybarstyle.
I am very impressed with how smooth and problem-free Asahi Linux is. It is incredibly responsive and feels even smoother than my Arch Linux desktop with a 16 core AMD Ryzen 7945HX and 64GB of RAM.
The touchpad in particular is stunningly good and just as good as native macOS. The mouse cursor movement and two finger scroll with inertia just feel incredibly natural, much better than my old Thinkpad X1 Carbon.
One of the main reasons for getting the laptop was to use it for line scan photography. I was able to install the Alkeria SDK for ARM64 without any issues, even though it came as a deb file instead of an rpm. I didn’t manage to get alien to work properly (something about the architecture arm64 not matching Fedora’s convention of calling it aarch64?) so I just used bsdtar to extract the contents into the filesystem root, yolo!!! The M2 compiles my code super fast!
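For the curious: a .deb is just an ar archive whose data.tar.* member holds the files, so two bsdtar invocations do the job. A sketch, with the SDK’s filename invented for illustration (newer debs may use data.tar.zst instead of data.tar.xz):

# Stream the payload member out of the .deb, then unpack it into /
# with permissions preserved. The filename here is hypothetical.
bsdtar -xOf alkeria-sdk-arm64.deb data.tar.xz | sudo bsdtar -xpf - -C /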
With high screen brightness and compiling lots of code, my battery went down from 100% to 60% after about 4.5 hours of use — not as good as the 15 hours of battery life on macOS but still pretty respectable.
That said, it isn’t perfect. Common issues are:
* higher battery drainage during sleep, so I usually just shut it down entirely when not using it
...
Read the original on daniel.lawrence.lu »
On a Google support page, the company says it is rolling out a new option to let users change their email address even if it is an “@gmail.com” address.
* Related: Google One discounting annual 2 TB and AI Pro plans by 50%
For quite some time now, Google has allowed users to change their account email address if they are using a third-party email address, but users with an “@gmail.com” address are left unable to change it, as Google says:
If your account’s email address ends in @gmail.com, you usually can’t change it.
It appears this is changing.
On the same support page that currently says that you usually can’t change your email, Google is detailing a new process that is “gradually rolling out.” The altered page weirdly only shows in Hindi at the moment, meaning you can’t see the changes in English. Everything quoted below is translated. The page was first spotted in the “Google Pixel Hub” group on Telegram.
The email address associated with your Google Account is the address you use to sign in to Google services. This email address helps you and others identify your account. If you’d like, you can change your Google Account email address that ends in gmail.com to a new email address that ends in gmail.com.
This is new functionality that Google hasn’t detailed elsewhere yet, but says is “gradually rolling out to all users.”
With this change, Google will allow users to change their “@gmail.com” email address to a new “@gmail.com” address with an altered username. After changing, Google details that your original email address will still receive emails at the same inbox as your new one and work for sign-in, and that none of your account access will change. Users will be unable to change or delete their email within 12 months of the change.
When you change your Google Account email address from an email address ending in gmail.com to a new email address ending in gmail.com:
The old email address in your Google Account that ends with gmail.com will be set as an alias. Learn more about alias email addresses.
You will receive emails at both your old and new email addresses.
Data saved in your account, including photos, messages, and emails sent to your old email address, will not be affected.
You can reuse your old Google Account email address at any time. However, you can’t create a new Google Account email address that ends with gmail.com for the next 12 months. You can’t delete your new email address either.
You can sign in to Google services like Gmail, Maps, YouTube, Google Play, or Drive with your old or new email address.
Each account can only change its “@gmail.com” address up to 3 times for a total of 4 addresses.
Google further details that your old Gmail address will still appear in some cases, and “won’t be immediately reflected in older instances” such as events on Calendar created before the change. You’ll also still be able to send emails from the old address. The old address remains yours and cannot be used by another user.
The page is very detailed on the process, but the changes just aren’t live yet. Presumably, this support page detailing the process in Hindi went up a little earlier than intended, but it certainly seems that we’ll be hearing more about this change in the coming weeks.
When the functionality goes live, users will be able to change their Gmail address via “My Account.”
We’ll update this article if further information comes out.
...
Read the original on 9to5google.com »