10 interesting stories served every morning and every evening.
This is the twenty-second post in the Fastmail Advent 2024 series. The previous post was Dec 21: Fastmail In A Box. The next post is Dec 23: Ten years of JMAP.
There has recently been talk of cloud repatriation, where companies move from the cloud back to on-premises infrastructure, with some particularly noisy examples.
Fastmail has a long history of using our own hardware. We have over two decades of experience running and optimising our systems to use our own bare metal servers efficiently.
We get way better cost optimisation compared to moving everything to the cloud because:
* We understand our short, medium and long term usage patterns, requirements and growth very well. This means we can plan our hardware purchases ahead of time and don’t need the fast dynamic scaling that cloud provides.
* We have in-house operations experience installing, configuring and running our own hardware and networking. These are skills we’ve had to maintain and grow in house since we’ve been doing this for 25 years.
* We are able to use our hardware for long periods. We find our hardware can provide useful life for anywhere from 5-10 years, depending on what it is and when in the global technology cycle it was bought, meaning we can amortise and depreciate the cost of any hardware over many years.
Yes, that means we have to do more ourselves, including planning, choosing, buying, installing, etc, but the tradeoff has been, and we believe continues to be, significantly worth it for us.
Of course, over the 25 years we’ve been running Fastmail we’ve been through a number of hardware changes. For many years, our IMAP server storage platform was a combination of spinning rust drives and ARECA RAID controllers. We tended to use faster 15k RPM SAS drives in RAID1 for our hot metadata, and 7.2k RPM SATA drives in RAID6 for our main email blob data.
In fact it was slightly more complex than this. Email blobs were written to the fast RAID1 SAS volumes on delivery, but then a separate archiving process would move them to the SATA volumes at low server activity times. Support for all of this had been added into cyrus and our tooling over the years in the form of separate “meta”, “data” and “archive” partitions.
A few years ago however we made our biggest hardware upgrade ever. We moved all our email servers to a new 2U AMD platform with pure NVMe SSDs. The density increase (24 x 2.5″ NVMe drives vs 12 x 3.5″ SATA drives per 2U) and performance increase was enormous. We found that these new servers performed even better than our initial expectations.
At the time we upgraded, however, NVMe RAID controllers weren’t widely available, so we had to decide how to handle redundancy. We considered a RAID-less setup using raw SSDs on each machine with synchronous application-level replication to other machines, but the software changes required were going to be more complex than expected.
We were looking at using classic Linux mdadm RAID, but the write hole was a concern and the write cache didn’t seem well tested at the time.
We decided to have a look at ZFS and at least test it out.
Despite some of the cyrus on-disk database structures being fairly hostile to ZFS copy-on-write semantics, the new servers were still incredibly fast at all the IO we threw at them. And there were some other wins as well.
When we rolled out ZFS for our email servers we also enabled transparent Zstandard compression. This has worked very well for us, saving about 40% space on all our email data.
We’ve also recently done some additional calculations to see if we could tune some of the parameters better. We sampled 1 million emails at random and calculated how many blocks would be required to store those emails uncompressed, and then with ZFS record sizes of 32k, 128k or 512k and zstd-3 or zstd-9 compression options. Although ZFS RAIDz2 seems conceptually similar to classic RAID6, the way it actually stores blocks of data is quite different and so you have to take into account volblocksize, how files are split into logical recordsize blocks, and number of drives when doing calculations.
Emails: 1,026,000
Raw blocks: 34,140,142
32k & zstd-3, blocks: 23,004,447 = 32.6% saving
32k & zstd-9, blocks: 22,721,178 = 33.4% saving
128k & zstd-3, blocks: 20,512,759 = 39.9% saving
128k & zstd-9, blocks: 20,261,445 = 40.7% saving
512k & zstd-3, blocks: 19,917,418 = 41.7% saving
512k & zstd-9, blocks: 19,666,970 = 42.4% saving
This showed that the defaults of a 128k record size and zstd-3 were already pretty good. Moving to a record size of 512k improved compression over 128k by a bit over 4%. Given all metadata is cached separately, this seems a worthwhile improvement with no significant downside. Moving to zstd-9 improved compression over zstd-3 by about 2%. Given the CPU cost of compression at zstd-9 is about 4x that of zstd-3, we’ve decided not to implement this change, even though emails are immutable and tend to be kept for a long time.
We always enable encryption at rest on all of our drives. Previously this was done with LUKS, but with ZFS it’s built in, which again reduces overall system complexity.
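As a rough sketch of what this kind of dataset configuration looks like in OpenZFS (pool, dataset and device names here are illustrative, not our actual setup):

# create a RAIDz2 pool across NVMe drives
zpool create tank raidz2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
# per-dataset native encryption, zstd compression and the default 128k recordsize
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
    -o compression=zstd-3 -o recordsize=128k tank/mail
# check how much space compression is actually saving
zfs get compressratio tank/mail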
So after the success of our initial testing, we decided to go all in on ZFS for all our large data storage needs. We’ve now been using ZFS for all our email servers for over 3 years and have been very happy with it. We’ve also moved over all our database, log and backup servers to using ZFS on NVMe SSDs as well with equally good results.
The flash memory in SSDs has a finite life and a finite number of times it can be written to. SSDs employ increasingly complex wear levelling algorithms to spread out writes and increase drive lifetime. You’ll often see the quoted endurance of an enterprise SSD as either an absolute figure of “Lifetime Writes”/“Total Bytes Written” like 65 PBW (petabytes written), or a relative per-day figure of “Drive Writes Per Day” (DWPD) like 0.3, which you can convert to a lifetime figure by multiplying by the drive size and the drive’s expected lifetime, which is often assumed to be 5 years.
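As a purely illustrative example (a hypothetical drive, not one of ours): a 15.36 TB drive rated at 0.3 DWPD works out to roughly 0.3 x 15.36 TB x 365 days x 5 years ≈ 8,400 TB written, i.e. about 8.4 PBW over its rated life.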
Although we could calculate IO rates for our existing HDD systems, we were making a significant number of changes moving to the new systems. Switching to a COW filesystem like ZFS, removing the special-cased meta/data/archive partitions, and the massive latency reduction and performance improvements meant that operations which previously took extra time and so ended up batching IO together are now so fast that they generate additional, separate IO actions.
So one big unknown question we had was: how fast would the SSDs wear in our actual production environment? After several years, we now have some clear data. Here is one server picked at random, though this is fairly consistent across the fleet of our oldest servers:
# smartctl -a /dev/nvme14
Percentage Used: 4%
At this rate, we’ll replace these drives due to increased drive sizes, or entirely new physical drive formats (such as E3.S, which appears to finally be gaining traction), long before they get close to their rated write capacity.
We’ve also anecdotally found SSDs to be much more reliable than HDDs for us. Although we’ve only ever used datacenter-class SSDs and HDDs, drive failures and replacements every few weeks were a regular occurrence on the old fleet of servers. Over the last 3+ years, we’ve only seen a couple of SSD failures in total across the entire upgraded fleet of servers. This is easily less than one tenth the failure rate we used to have with HDDs.
After converting all our email storage to NVMe SSDs, we were recently looking at our data backup solution. At the time it consisted of a number of older 2U servers with 12 x 3.5″ SATA drive bays, and we decided to do some cost calculations on the options: cloud object storage, upgrading the existing servers with larger HDDs, or building new SSD-based servers.
For the cloud option, we looked at various providers’ per TB per month price, and then the yearly price for 1000TB/1PB (prices as at Dec 2024).
Some of these (e.g. Amazon) have potentially significant bandwidth fees as well.
It’s interesting seeing the spread of prices here. Some also have a bunch of weird edge cases as well. e.g. “The S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes require an additional 32 KB of data per object”. Given the large retrieval time and extra overhead per-object, you’d probably want to store small incremental backups in regular S3, then when you’ve gathered enough, build a biggish object to push down to Glacier. This adds implementation complexity.
* Pros: No limit to amount we store. Assuming we use S3 compatible API, can choose between multiple providers.
* Cons: The implementation cost of converting our existing backup system, which assumes local POSIX files, to an S3-style object API is uncertain and possibly significant. The lowest cost options require extra careful consideration of implementation details and special limitations. Ongoing monthly cost that will only increase as the amount of data we store grows. Uncertain whether prices will go down, stay flat, or even go up. Possible significant bandwidth costs depending on provider.
Seagate Exos 24 drives are 3.5″ 24T HDDs. These would allow us to triple the storage on our existing servers. Each HDD is about $500, so upgrading one 2U machine would be about $6,000 and give storage of 220T or so.
* Pros: Reuses existing hardware we already have. Upgrades can be done a machine at a time. Fairly low price
* Cons: Will existing units handle 24T drives? What’s the rebuild time on drive failure look like? It’s almost a day for 8T drives already, so possibly nearly a week for a failed 24T drive? Is there enough IO performance to handle daily backups at capacity?
As we know, SSDs are denser (2.5″ -> 24 per 2U vs 3.5″ -> 12 per 2U), more reliable, and now higher capacity - up to 61T per 2.5″ drive. A single 2U server with 24 x 61T drives arranged as 2 x 12-drive RAIDz2 vdevs gives about 1220T usable (each vdev loses two drives to parity, so 2 x 10 x 61T). Each drive is about $7k right now, though prices fluctuate, so all up 24 x $7k = $168k + ~$20k server =~ $190k one-time cost for > 1000T of storage.
* Pros: Much higher sequential and random IO than HDDs will ever have. Price < 1 year of standard S3 storage. Internal to our WAN, no bandwidth costs and very low latency. No new development required, existing backup system will just work. Consolidate on single 2U platform for all storage (cyrus, db, backups) and SSD for all storage. Significant space and power savings over existing HDD based servers
* Cons: Greater up front cost. Still need to predict and buy more servers as backups grow.
One thing you don’t see in this calculation is datacenter space, power, cooling, etc. The reason is that compared to the amortised yearly cost of a storage server like this, these are actually reasonably minimal these days, on the order of $3000/2U/year. Calculating person time is harder. We have a lot of home built automation systems that mean installing and running one more server has minimal marginal cost.
We ended up going with the new 2U servers option:
* The 2U AMD NVMe platform with ZFS is a platform we have experience with already
* SSDs are much more reliable and much higher IO compared to HDDs
* No uncertainty around super large HDDs, RAID controllers, rebuild times, shuffling data around, etc.
* No new development required, can use existing backup system and code
So far this has worked out very well. The machines have bonded 25Gbps networks and when filling them from scratch we were able to saturate the network links streaming around 5Gbytes/second of data from our IMAP servers, compressing and writing it all down to a RAIDz2 zstd-3 compressed ZFS dataset.
Running your own hardware might not be for everyone and has distinct tradeoffs. But when you have the experience and the knowledge of how you expect to scale, the cost improvements can be significant.
...
Read the original on www.fastmail.com »
I’m one of the founders at Corelatus. Our niche is telecom hardware for the E1/T1 and SDH/SONET interfaces. I work with layer 2 signalling, i.e. SS7 MTP-2, ATM and frame relay, and the Erlang control system for our embedded hardware.
You can reach me at matthias@corelatus.com
Decoding the telephony signals in Pink Floyd’s ‘The Wall’
I like puzzles. Recently, someone asked me to identify the telephone network signalling in The Wall, a 1982 film featuring Pink Floyd. The signalling is audible when the main character, Pink, calls London from a payphone in Los Angeles, in this scene (Youtube).
Here’s a five second audio clip from when Pink calls:
The clip starts with some speech overlapping a dial-tone which in turn overlaps some rapid tone combinations, a ring tone and some pops, clicks and music. It ends with an answer tone.
The most characteristic part is the telephone number encoded in the rapid tone combinations. Around 1980, when the film was made, different parts of the world used similar, but incompatible, tone-based signalling schemes. They were all based on the same idea: there are six or eight possible tones, and each digit is represented by a combination of two tones.
SoX, an audio editing tool for PCs, can make charts that show the spectral components of the audio over time. The horizontal axis represents time, the vertical axis frequency, and darker sections show more audio power, and lighter sections less.
Signalling tones appear as dark horizontal lines in the spectrogram, with the digit signalling visible from 0.7 to 1.8 seconds. That part of the signalling has tones at roughly 700, 900, 1100, 1300, 1500 and 1700 Hz.
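For reference, SoX can generate such a chart with its spectrogram effect; a command along these lines (the input filename is illustrative) writes it out as a PNG:

sox clip.wav -n spectrogram -o clip-spectrogram.png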
Everyone’s heard DTMF (Dual Tone Multi Frequency). It’s the sound your phone makes when you interact with one of those “Press 1 if you are a new customer. Press 2 if you have a billing enquiry. Press 3…” systems. DTMF is still used by many fixed-line telephones to set up a call.
In DTMF, each digit is encoded by playing a “high” tone and a “low” tone. The low ones can be 697, 770, 852 or 941 Hz. The high ones 1209, 1336, 1477 and 1633 Hz.
None of the pairs in the audio match this, so it’s not DTMF. Here’s an audio clip of what it would sound like if we used DTMF signalling for the same number, with about the same speed of tones:
CAS R2 uses a two-out-of-six tone scheme with the frequencies 1380, 1500, 1620, 1740, 1860 and 1980 Hz for one call direction and 1140, 1020, 900, 780, 660 and 540 Hz for the other. None of these are a good match for the tones we heard. Besides, Pink is in the USA, and the USA did not use CAS R2, so it’s not CAS.
This is what the digit signalling would have sounded like if CAS were used:
SS5 also uses a two-out-of-six scheme with the frequencies 700, 900, 1100, 1300, 1500 and 1700 Hz. This matches most of what we can hear, and SS5 is the signalling system most likely used for a call from the USA to the UK in the early 1980s.
This is what the digit signalling sounds like in SS5, when re-generated to get rid of all the other sounds:
It can’t be SS7. Signalling system No. 7 (SS7) doesn’t use tones at all; it’s all digital. SS7 is carried separately from the audio channel, so it can’t be heard by callers. SS7 wasn’t in common use until later in the 1980s.
I made a spectrogram which combines all three signalling types on the same chart. The difference between DTMF and SS5 is subtle, but recognisable. CAS is obviously different.
I injected the audio file into a timeslot of an E1 line, connected it to Corelatus’ hardware and started an ss5_registersig_monitor.
The input audio has a lot of noise in addition to the signalling, but these protocols are robust enough for the digital filters in the hardware to be able to decode and timestamp the dialled digits anyway. Now, we know that the number signalling we hear was 044 1831. The next step is to analyse the frequencies present at the start time for each tone. I re-analysed the audio file with SoX, which did an FFT on snippets of the audio to find the actual tone frequencies at the times there were tones, like this:
sox input.wav -n trim 0.700 0.060 stat -freq
At this point, I’m certain the signalling is SS5. It uses the correct frequencies to transmit digits. It uses the correct digit timing. It obeys the SS5 rules for having KP1 before the digits and ST after the digits. It uses a tone close to 2400 Hz to indicate that the call was answered.
I’ve also listed the dial tone at the beginning, and the 2400 Hz seizing tone at the end. SS5 also uses a 2600 Hz tone, which is infamous for its use in blue box phreaking (telephone fraud) in the 1980s.
My best guess is that, at the time the film was made, callers could hear the inter-exchange signalling during operator-assisted calls in the US. That would have allowed the sound engineer to record a real telephone in the US and accurately capture the feeling of a long-distance call. The number itself was probably made-up: it’s too short and the area code doesn’t seem valid.
The audio was then cut and mixed to make the dial tone overlap the signalling. It sounds better that way and fits the scene’s timing.
It turns out that an extended version of the same phone call appears near the end of ‘Young Lust’, a track on the album ‘The Wall’. Other engineers with actual experience of 1970s telephone networks have also analysed the signalling in an interesting article with a host of details and background I didn’t know about, including the likely names of the people in the call.
It’s nice to know that I got the digit decoding right, we both concluded it was 044 1831. One surprise is that the number called is probably a shortened real number in London, rather than a completely fabricated one as I suspected earlier. Most likely, several digits between the ‘1’ and the ‘8’ are cut out. Keith Monahan’s analysis noted a very ugly splice point there, whereas I only briefly wondered why the digit start times are fairly regular for all digits except that the ‘8’ starts early and the final ‘1’ starts late.
...
Read the original on corelatus.com »
This is the fascinating story of how researchers used Bloom filters cleverly to make SQLite 10x faster for analytical queries. These are my five-minute notes on the paper SQLite: Past, Present, and Future (2022). I’ll also explain some database internals and how databases implement joins.
SQLite stores data on disk in B-trees, using row-based storage. It internally uses a VM called VDBE to execute queries. It is cross-platform, single-threaded, and runs almost everywhere.
SQLite is a general-purpose database, but it excels at OLTP workloads. However, researchers at the University at Buffalo found in 2015 that real-world SQLite workloads include not only simple key-value lookups but also complicated OLAP queries. So, researchers at the University of Wisconsin-Madison set out to make it faster for analytical queries, since the trend is changing. To set a baseline, they compared against DuckDB, using the industry-standard OLAP benchmark, the Star Schema Benchmark (SSB).
SSB contains a bunch of analytical queries called Star Queries, where you have one large fact table and multiple smaller dimension tables. e.g., fact table = customer orders, dimension tables = customer, merchant info, delivery partners, etc.
Here is a sample query with the query plan. Like a typical analytical query, this one contains joins between four tables.
As expected, they found DuckDB to be 30-50x faster than SQLite. Note that they ran DuckDB in single-threaded mode.
Let’s figure out why SQLite was slow; then we can determine how to make it faster. SQLite has a compile-time option, VDBE_PROFILE, which measures the number of CPU cycles each VDBE instruction takes. When they reran the benchmarks, they found that two opcodes dominated the CPU cycles:
What do these opcodes do?
* SeekRowID - given a rowId, probe the row in B-tree
Since the analytical queries mainly contain join operations, let’s understand how they are implemented.
Following are some of the ways databases implement joins:
* Nested Loop join
* Hash join
* Sort-merge join
SQLite does Nested Loop join, which is the simplest of all three. Here is an animation (source) showing how Nested Loop join works:
Assume you have two tables: orders and customers. The code here depicts joining both tables using the order_id column. The outer loop iterates over every item in the customers table, and for each item, it iterates over every item in the orders table.
For every id that matches, it appends the row to the result list. Each of these inner lookups is equivalent to a B-tree probe, which is very expensive.
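As a rough illustration of the idea, here is a minimal Python sketch of a nested loop join (made-up data and column names, not SQLite’s actual code); each inner lookup stands in for a B-tree probe:

customers = [(1, "alice"), (2, "bob")]            # (customer_id, name)
orders = [(101, 1), (102, 2), (103, 1)]           # (order_id, customer_id)

result = []
for cust_id, name in customers:                   # outer loop
    for order_id, order_cust_id in orders:        # inner loop: one probe per row
        if order_cust_id == cust_id:
            result.append((name, order_id))
print(result)   # [('alice', 101), ('alice', 103), ('bob', 102)]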
The goal is to reduce the number of B-tree probes. Now, stop for a minute and consider why this approach is not good. Can you come up with some ideas to make it better?
Next, the order of tables in the join operation matters. Here is a simple illustration:
Consider there are three tables: Orders, Customers, Date. Our query matches 20 items in Customers and 10 items in Date. We probe the B-tree whenever a row matches.
Just by flipping the order, it reduced to 1M operations! But it is incredibly difficult to come up with an optimized query plan; join ordering is an NP-hard problem.
How do we optimize the join? The other two join algorithms are better than Nested Loop join. However, the authors argue that Hash Join takes significant memory, and SQLite mostly runs in memory-constrained environments. Second, adding one more join algorithm would complicate the query planner.
Here is one way: before you run both loops, you first build a cache of the customer keys that match. Then, in the inner loop, you check this cache first. Only if there is a match do you do the expensive lookup.
That’s what the researchers did! They used a Bloom filter, which is very space efficient and fits in a CPU cache line. It was also easy to implement.
They added two opcodes: Filter and FilterAdd. At the start of the join operation, we go over all the rows of the dimension tables and, for each row that matches the query predicate, set the corresponding bits in the Bloom filter. This is the FilterAdd opcode.
During the join operation, at each stage we first check whether the row might exist using the Bloom filter. Only if it does do we do the B-tree probe. This is the Filter opcode.
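To make the two opcodes concrete, here is a minimal Python sketch of the same idea (an illustrative Bloom filter and made-up data, not SQLite’s actual implementation): add() plays the role of FilterAdd, might_contain() plays the role of Filter, and a dictionary lookup stands in for the B-tree probe.

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 16, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0                              # big integer used as a bit array

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):                            # ~ FilterAdd
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):                  # ~ Filter (false positives possible)
        return all((self.bits >> pos) & 1 for pos in self._positions(key))

# Dimension-table rows matching the query predicate (made-up data).
matching_customers = {1: "alice", 4: "dana"}
bf = BloomFilter()
for cust_id in matching_customers:
    bf.add(cust_id)

# Fact-table scan: consult the Bloom filter before the expensive probe.
orders = [(101, 1), (102, 2), (103, 4)]            # (order_id, customer_id)
result = []
for order_id, cust_id in orders:
    if bf.might_contain(cust_id):                  # cheap check, fits in CPU cache
        row = matching_customers.get(cust_id)      # stands in for the B-tree probe
        if row is not None:                        # false positives filtered here
            result.append((order_id, row))
print(result)   # [(101, 'alice'), (103, 'dana')]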
This is the optimized query plan.
This is the CPU cycle analysis post-optimization. You can see that the large blue bars are almost gone!
The results of this research have been applied to SQLite already and were released in v3.38.0.
tl;dr: Bloom filters were a great fit because they have minimal memory overhead, they go well with SQLite’s simple implementation, and they worked within the existing query engine.
1. I presented this paper at a local Papers We Love group. This blog post is a summary of the talk
2. Yes, there is a typo in the ‘Matters Order’ slide
...
Someone purporting to be United Healthcare is filing DMCA requests to scrub the internet of artists’ depictions of the surveillance video of Luigi smiling, parody merchandise of “Deny, Defend, Depose,” and other merchandise showing the alleged shooter.
If it is really the health insurer filing these notices, it wildly oversteps any legal rights the company might have, but if there’s any company willing to preemptively breach the law and dare the other side to spend their limited resources trying to protect their rights it would be a health insurer.
404 Media reports that a number of DMCA takedown requests from “United Healthcare” have hit artists depicting the newsworthy event. The publication reached out to the company to confirm that it’s behind the filings but didn’t hear back.
For those keeping score at home, the DMCA provides that a copyright holder must inform an internet entity that their intellectual property is being abused, and the service provider then has an opportunity to expeditiously remove the material and avoid possible liability. That said, the DMCA only provides a shield against legal action, and if the copyright claim is — to use the precise legal term — “utter bullshit,” they don’t have to comply. Unfortunately, hosts routinely err on the side of caution and remove content when there’s zero cognizable claim.
If this is someone acting on behalf of United Healthcare, what an irony to lodge a falsified claim without it getting denied.
An entity claiming to be United Healthcare is sending bogus copyright claims to internet platforms to get Luigi Mangione fan art taken off the internet, according to the print-on-demand merch retailer TeePublic. An independent journalist was hit with a copyright takedown demand over an image of Luigi Mangione and his family she posted on Bluesky, and other DMCA takedown requests posted to an open database and viewed by 404 Media show copyright claims trying to get “Deny, Defend, Depose” and Luigi Mangione-related merch taken off the internet, though it is unclear who is filing them.
It’s a bizarre and grimly ironic turn if United Healthcare wanted to claim the words “Deny, Defend, Depose.” The takedown request for that piece objects to the artist styling the “D” with elements of the United Healthcare logo… which would be the very definition of protected parody. The 404 story has images if you want to see what these all look like.
Beyond tying itself to the gunman’s catchphrase, the idea of UHC trying to own any and all fixations of the alleged shooter’s likeness would be a wild leap. An artist’s depiction of Mangione could belong to the artist (Mangione might be able to assert some rights to his likeness — a dubious claim under the circumstances and in light of the First Amendment — but in no case would UHC have such a claim).
“What is the circumstance under which United Healthcare might come to own the copyright to a watercolor painting of the guy who assassinated their CEO?” tech rights expert and science fiction author Cory Doctorow told 404 Media in a phone call. “It’s just like, it’s hard to imagine” a lawyer thinking that, he added, saying that it’s an example of “copyfraud.”
It is illegal to file DMCA notices if you don’t own the copyright (or at least have a good faith belief that you do). The idea that UHC now owns every depiction of the guy accused of killing their employee is laughably frivolous, and one would hope its legal department understands this and that these requests are coming from a third-party troll impersonating the carrier.
An independent journalist posting a photo of Mangione with his family also received a DMCA request — from a lawyer claiming to represent a family member holding the copyright — even though the image was “originally posted on the campaign website of Maryland assemblymember Nino Mangione.” That site apparently deleted the image and turned around to threaten anyone using it now, which is… not how fair use works. But at least this request can claim a “good faith” basis, though the system probably shouldn’t reward people for trying to retroactively claim rights after they try to memory-hole their internet history.
But the disturbing thread running through all these requests is how easy it’s become for copyright trolls to leverage the DMCA to intimidate providers into accepting facially invalid requests. The statute has given way to a sort of asymmetrical warfare over IP where bad faith actors can pepper sites with ownership claims and trust that their targets will back down rather than deal with the litigation risk. As 404’s coverage notes, this doesn’t bode well in a country about to inaugurate an administration openly encouraging retribution against journalists in an effort to silence criticism.
This sure looks like a company (and a local politician) trying to use copyright law as a “square peg-round hole” solution to erasing any humanizing or sympathetic depiction of Mangione. And unfortunately, everyone seems to be obeying in advance.
Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet [404 Media]
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
...
Read the original on abovethelaw.com »
Whose woods these are I think I know.
His house is in the village though;
He will not see me stopping here
To watch his woods fill up with snow.
My little horse must think it queer
To stop without a farmhouse near
Between the woods and frozen lake
The darkest evening of the year.
He gives his harness bells a shake
To ask if there is some mistake.
The only other sound’s the sweep
Of easy wind and downy flake.
The woods are lovely, dark and deep.
But I have promises to keep,
And miles to go before I sleep,
And miles to go before I sleep.
...
Read the original on poets.org »
Welcome to twtxt’s documentation. This documentation is divided into multiple parts. We recommend you get started with the Installation and then head over to the Quickstart section. If you don’t know what this is about, read the Introduction first. There is a more detailed Usage section about how to use twtxt via the CLI. The internals of twtxt are documented in the API chapter.
Feel free to contribute to this project. The source code is maintained on GitHub.
...
Read the original on twtxt.readthedocs.io »
Five years ago I was in Berlin for a few months, and my friend Michael invited me to a gathering of his beer aficionado friends at a neighborhood bar. It seemed like a typical bar hang — until one of the guys planted a little flag on the table: an insignia for their group.
That seemed a bit odd: Why do these pals have a flag? Turns out, this was no random meetup for a pint. It was a Stammtisch.
As I learned, Stammtisch (“shtom-tish”) means “regulars’ table”. It’s an age-old German tradition where a group of like-minded people — traditionally men — meet up at a bar or a restaurant at a regular time to have drinks and chat.
Michael and his Stammtisch companions clearly enjoyed each other’s company. And as I learned, it was really more than just sharing drinks — these monthly meetings deepened their friendships.
And it had an elegant simplicity: A standing date on the calendar to see a group of people you enjoy, out at a bar or restaurant so no one needs to tidy their home.
These days, I’ve got a toddler at home and socializing sans kid takes more planning. The easy comfort of meeting friends at a bar sometimes feels like an activity from my carefree youth, and I’m already nostalgic for it.
Back home in D. C., I found myself thinking recently of Stammtisch and what a nice way it is to see people you like. I mentioned this to a German friend, Amrai, as we walked around the zoo with our kids, admiring the goats and alpacas.
Amrai laughed and said, “You know there’s a Stammtisch that meets in D. C.? It’s mostly Germans.” Regretting my scant Deutsch, I decided to get myself invited.
So on a recent cold, dark December evening, I parked my bike outside a brewpub called Right Proper, and stepped inside. I was met by warm conviviality and conversation — and many people who know the Stammtisch tradition well.
Over the din of chatter and pop music, the gathering’s organizer, Andrea Shalal, described the classic Stammtisch: “In every German village there is the corner bar, and in the corner is a table. It’s reserved for the sort of elders or other regulars. And they sit in the corner and they drink their beer and smoke their cigarettes and pontificate on the town and all of its craziness.”
Traditionally, many taverns in Germany had a big, round table reserved for regulars — Stammtisch refers to both the table and those who meet there. There are many tales of foreign travelers unwittingly taking a seat at this table and being shooed away by the barkeep.
This Stammtisch in D. C. was a modern take: There were women, not just men. Instead of that big table, people milled with their drinks beside a bar. But everyone seemed to agree that the basic format of Stammtisch is a good one.
“Growing up in the ’80s, ’90s, early 2000s, the idea of Stammtisch in Germany’s youth was sort of rejected as ‘This is something that our parents’ generation, our grandparents’ generation, would do,’” said Robert Christoffel, 45, who moved to the U.S. more than a decade ago.
But he’s noticed that’s starting to change.
“A lot of my friends, nowadays, in Germany — maybe because they have gotten a little older as well — they are now meeting up and sort of establishing Stammtisch,” he told me.
In modern life, as fewer people go to church — which in Germany might be followed by drinking at the tavern — these gatherings can take on even more significance.
Werner Hein, who’s 80 years old, described the Stammtisch this way: “That’s a place where you meet in person. It’s hard to see a Stammtisch on Zoom.”
Men in particular can find it hard to be vulnerable with one another, added Hein. But amid the ambiance and repeated meetings of a Stammtisch, something starts to happen.
“You open up on things, on your very personal things,” Hein said. “Nowadays, I have to almost say it’s more important than ever to have a kind of Stammtisch.”
Does a Stammtisch have to involve alcohol? I asked.
“If you consider beer alcohol, yes, I think so,” replied Christoffel, pint glass in hand.
“That’s called a Kaffeekränzchen,” Hein explained. “That’s typically more like the women when they meet in the afternoon for a coffee and cake. Also another very traditional German thing.”
To me, that sounds equally good as a stein of beer — maybe better.
Turns out I’m actually familiar with Kaffeekränzchen, though I’d never heard the term. For the last couple years, my friend Jordan has organized a regular gathering she dubbed Coffee Club. A group of us meet at a neighborhood café at the same time each week and hang out for an hour over caffeine and baked goods.
At Coffee Club we talk about our jobs, our weekends — and our lives. On one occasion, several members brought dresses for me to borrow for a wedding and an impromptu catwalk transpired.
I can’t make it every week, but that’s OK. The group is large enough that there’s no pressure.
Amid the stress of work and the full-court press of parenthood, this regular hang with the ladies has been a reliable source of good vibes and commiseration. It’s something to look forward to, not an obligation — just like the Stammtisch.
I asked Werner Hein if he had any advice for someone who might want to start a Stammtisch.
It takes a bit of commitment, he said — “A commitment to have fun.”
And in return? A warm circle of connection, no matter how cold it is outside.
...
Read the original on www.npr.org »
Aging brings two opposing trends in cancer risk: first, the risk climbs in our 60s and 70s, as decades of genetic mutations build up in our bodies. But then, past the age of around 80, the risk drops again — and a new study may explain a key reason why.
The international team of scientists behind the study analyzed lung cancer in mice, tracking the behavior of alveolar type 2 (AT2) stem cells. These cells are crucial for lung regeneration, and are also where many lung cancers get started.
What emerged was higher levels of a protein called NUPR1 in the older mice. This caused cells to act as if they were deficient in iron, which in turn limited their regeneration rates — putting restrictions on both healthy growth and cancerous tumors.
“The aging cells actually have more iron, but for reasons we don’t yet fully understand, they function like they don’t have enough,” says cancer biologist Xueqian Zhuang, from the Memorial Sloan Kettering Cancer Center (MSK) in New York.
“Aging cells lose their capacity for renewal and therefore for the runaway growth that happens in cancer.”
The same processes were found to be happening in human cells too: more NUPR1 leads to a drop in the amount of iron available to cells. When NUPR1 was artificially lowered or iron was artificially increased, cell growth capabilities were boosted again.
That potentially gives researchers a way of exploring treatments that target iron metabolism — especially in older people. It could perhaps restore lung capacity in those experiencing long-term effects from COVID-19, for example.
These findings also have implications for cancer treatments based on a type of cell death called ferroptosis, which is triggered by iron. This cell death is less common in older cells, the researchers found, because of their functional iron deficiency.
This perhaps also makes them more resistant to cancer treatments based on ferroptosis that are in development, so the earlier a ferroptosis treatment can be tried, the better it’s likely to work.
“What our data suggests in terms of cancer prevention is that the events that occur when we’re young are probably much more dangerous than the events that occur later,” says cancer biologist Tuomas Tammela, from MSK.
“So, preventing young people from smoking, or from tanning, or from other obvious carcinogenic exposures are probably even more important than we thought.”
There’s lots more to explore here about the effects of NUPR1 and how it relates to stem cell function — both healthy regeneration and cancerous growth — but these are important findings for fighting cancer at any stage of life.
As always with cancer treatments, multiple factors need to be taken into account: the type and stage of cancer, other medical conditions that might be involved, and (as this new study shows) the age of the individual. The more personalized we can make these treatments, the more effective they can be.
“There’s still a lot that’s unknown about how aging actually changes the biology of cancer,” says Zhuang.
The research has been published in Nature.
...
Read the original on www.sciencealert.com »
In the 1990s, Apple struggled to bring the original Mac OS—originally written in 1984 for the resource-constrained Macintosh 128K machine—up to modern operating system standards. The story of how OS X came to be is thrilling in its own right, but suffice it to say that Apple ended up buying Steve Jobs’ second computer company, NeXT, and using its NeXTSTEP operating system as the basis of a new generation of Macs.
Apple announced the acquisition of NeXT on December 20, 1996, noting it wanted NeXT’s object-oriented software development technology and its operating system know-how. As part of the deal, Jobs came back to Apple, eventually taking over as CEO and making the company into the consumer electronics giant it is today. Sixteen years later, several technologies developed or championed by NeXT still survive in OS X and in its mobile cousin, iOS. This week, we remember some of those technologies which continue to power key features of Apple’s devices.
We consulted several developers with knowledge of both NeXTSTEP and OS X; they universally agreed that one crucial feature of NeXTSTEP that made OS X and iOS what they are today is its underlying UNIX roots. Beneath all the slickly styled windows, icons, scrolling lists, and buttons sits a rock-solid, certified UNIX operating system.
UNIX was originally conceived in 1969 by Bell Labs’ Ken Thompson for the PDP-7 minicomputer. Its development throughout the early ’70s led to the development of the C programming language by Thompson’s colleague Dennis Ritchie. UNIX at its core was meant to be a powerful operating system for the massive computers of the day, but it could also be easily programmed and controlled by developers using a simple video terminal.
That same basic system powers both the Mac sitting on your desk and the iPhone resting in your pocket. By basing OS X—and later, iOS—on UNIX, Macs were finally able to enjoy important operating system features like protected memory, pre-emptive multitasking, and daemon-based services.
...
Read the original on arstechnica.com »