10 interesting stories served every morning and every evening.
Discord announced on Monday that it’s rolling out age verification on its platform globally starting next month, when it will automatically set all users’ accounts to a “teen-appropriate” experience unless they demonstrate that they’re adults.
“For most adults, age verification won’t be required, as Discord’s age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities. Discord does not use private messages or any message content in this process,” Savannah Badalich, Discord’s global head of product policy, tells The Verge.
Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.
Direct messages and servers that are not age-restricted will continue to function normally, but users won’t be able to send messages or view content in an age-restricted server until they complete the age check process, even if it’s a server they were part of before age verification rolled out. Badalich says those servers will be “obfuscated” with a black screen until the user verifies they’re an adult. Users also won’t be able to join any new age-restricted servers without verifying their age.
Discord’s global age verification launch is part of a wave of similar moves at other online platforms, driven by an international legal push for age checks and stronger child safety measures. This is not the first time Discord has implemented some form of age verification, either. It initially rolled out age checks for users in the UK and Australia last year, which some users figured out how to circumvent using Death Stranding’s photo mode. Badalich says Discord “immediately fixed it after a week,” but expects users will continue finding creative ways to try getting around the age checks, adding that Discord will “try to bug bash as much as we possibly can.”
It’s not just teens trying to cheat the system who might attempt to dodge age checks. Adult users could avoid verifying, as well, due to concerns around data privacy, particularly if they don’t want to use an ID to verify their age. In October, one of Discord’s former third-party vendors suffered a data breach that exposed users’ age verification data, including images of government IDs.
If Discord’s age inference model can’t determine a user’s age, a government ID might still be required for age verification in its global rollout. According to Discord, to remove the new “teen-by-default” changes and limitations, “users can choose to use facial age estimation or submit a form of identification to [Discord’s] vendor partners, with more options coming in the future.”
The first option uses AI to analyze a user’s video selfie, which Discord says never leaves the user’s device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third-party vendor, but Discord says the images of those documents “are deleted quickly — in most cases, immediately after age confirmation.”
Badalich also says after the October data breach, Discord “immediately stopped doing any sort of age verification flows with that vendor” and is now using a different third-party vendor. She adds, “We’re not doing biometric scanning [or] facial recognition. We’re doing facial estimation. The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information.”
Badalich goes on to explain that the addition of age assurance will mainly impact adult content: “A majority of people on Discord are not necessarily looking at explicit or graphic content. When we say that, we’re really talking about things that are truly adult content [and] age inappropriate for a teen. So, the way that it will work is a majority of people are not going to see a change in their experience.”
Even so, there’s still a risk that some users will leave Discord as a result of the age verification rollout. “We do expect that there will be some sort of hit there, and we are incorporating that into what our planning looks like,” Badalich says. “We’ll find other ways to bring users back.”
...
Read the original on www.theverge.com »
Let’s start by asking ourselves: what color SHOULD the sky be?
Or, one step further back, what color should anything be?
And the answer is: the color of anything is due to the wavelength of photons coming from that thing and hitting your eye.
These sidenotes are optional to read, but I’ll use them for giving the fuller technical details when I’ve abbreviated things in the main body of the text.
In this case, the color you see is determined by the wavelengths of light entering your eye: (1) you may be seeing a single pure frequency, but (2) in almost all cases you’re seeing many frequencies at once, which your brain interprets as a single color.
For instance, the sensation of turquoise at a specific point can be caused by (a) photons of wavelength 500nm emanating from that point, (b) a specific combo of photons of wavelengths 470nm and 540nm, or (c) (most realistically) photons of a huge number of wavelengths, probably peaking somewhere around 500nm.
In the text, I am a bit fast and loose with the difference.
When sunlight hits Earth’s atmosphere, most colors of photons pass through unencumbered. But blue photons have a tendency to ricochet around a lot.
This causes them to disperse all throughout the atmosphere. They disperse so far and wide, and are so numerous, that you can look at any part of the sky on a clear afternoon and, at that moment, blue photons will be shooting from that point straight to your eyes.
Therefore the sky is blue.
Most colors of light pass through the atmosphere relatively unencumbered. You only see them when you look at the sun, where they contribute to the whiteness of the sun’s light. Blue, however, bounces around a lot, getting spread all over the sky. Because blue photons hit our eyeballs from every angle of the sky, the whole sky appears blue.
This is true and all, but it kicks the can down the road. Why blue? Why not red?
In short, it’s because blue and violet have the closest frequencies to a “resonant frequency” of nitrogen and oxygen molecules’ electron clouds.
There’s a lot there, so we’ll unpack it below. But first, here’s an (interactive) demo.
This demo is a simplification. In reality, 99.999% of photons pass through (neither scattering nor absorbing), even at the resonant frequency. Pretty boring to watch!
When a photon passes through/near a small molecule (like N₂ or O₂, which make up 99% of our atmosphere), it causes the electron cloud around the molecule to “jiggle”. This jiggling is at the same frequency as the photon itself — meaning violet photons cause faster jiggling than red photons.
In any case, for reasons due to the internal structure of the molecule, there are certain resonant frequencies of each molecule’s electron cloud. As the electron clouds vibrate closer and closer to these resonant frequencies, the vibrations get larger and larger.
The stronger the electron cloud’s oscillations, the more likely a passing photon is to (a) be deflected in a new direction rather than (b) pass straight through.
For both N₂ and O₂, the lowest resonant frequency is in the ultraviolet range. So as the visible colors increase in frequency towards ultraviolet, we see more and more deflection, or “scattering”.
“Scattering” is the scientific term of art for molecules deflecting photons. Linguistically, it’s used somewhat inconsistently. You’ll hear both “blue light scatters more” (the subject is the light) and “atmospheric molecules scatter blue light more” (the subject is the molecule). In any case, they mean the same thing 🤷♂️
In fact, violet is 10x more likely to scatter than red.
Math talk: scattering increases proportional to the FOURTH power of the frequency. So higher frequency light means WAY more scattering.
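To put numbers on that, here is a quick back-of-envelope sketch in Python. The wavelengths are rough representative picks for each color, not precise band centers, and the fourth-power scaling is the only physics in it:

wavelengths_nm = {"red": 700, "green": 530, "blue": 470, "violet": 400}
red = wavelengths_nm["red"]
for color, wl in wavelengths_nm.items():
    # Rayleigh-style scaling: scattering strength ~ 1/wavelength^4 (i.e. frequency^4)
    relative = (red / wl) ** 4
    print(f"{color:>6}: {relative:.1f}x as much scattering as red")

Violet comes out at roughly (700/400)^4 ≈ 9.4, which is where the “10x” figure above comes from.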
So why isn’t the sky violet? Great question – we’ll cover that in a sec.
I just want to point out two other things that (a) you can see in the demo above, and (b) are useful for later in this article.
First, when light gets really close to — and eventually exactly at — the resonant frequency of the molecule’s electron cloud, it gets absorbed far more than scattered! The photon simply disappears into the electron cloud (and the electron cloud bumps up one energy level). This isn’t important for understanding the color of Earth’s sky… but there are other skies out there 😉
Second, did you notice that even red scatters some? Like, yes, blue scatters 10x more. But the sky is actually every color, just mostly blue/violet. This is why the sky is light blue. If white light is all visible colors of light mixed together equally, light blue is all visible colors mixed together — but biased towards blue.
What would the sky look like if it was only blue? Check it out.
I’ll just end by saying, this dynamic (where scattering increases sharply with the frequency of light) applies to far more than just N₂ and O₂. In fact, any small gaseous molecule — carbon dioxide, hydrogen, helium, etc. — would preferentially scatter blue, yielding a blue daytime sky.
As you saw above, violet scatters more than blue. So why isn’t the sky purple? The dumb but true answer is: our eyes are just worse at seeing violet. It’s the very highest frequency of light we can see; it’s riiight on the edge of our perception.
But! — if we could see violet as well as blue, the sky would appear violet.
We might as well tackle the elephant in the room: if we could see ultraviolet (which is the next higher frequency after violet), would the sky actually be ultraviolet?
And the answer is not really. If we could see UV, the sky would be a UV-tinted violet, but it wouldn’t be overwhelmingly ultraviolet. First, because the sun emits less UV light than visible light. And second, some of that UV light is absorbed by the ozone layer, so it never ever reaches Earth’s surface.
You can see both of those effects in the solar radiation spectrum chart:
The sun emits the most visible light, with UV frequencies falling off very steeply. Augmenting this effect is that the ozone layer in particular absorbs a lot of UV before it can reach Earth’s surface.
Why is the sunset red?
So the obvious next question is why is the sky red at dusk and dawn?
It’s because the sunlight has to travel through way more atmosphere when you’re viewing it at a low angle, and this extended jaunt through the atmosphere gives ample opportunity for allll the blue to scatter away — and even a good deal of the green too!
Simply put, the blue photons (and to a lesser degree, the green) have either (a) gone off into space or (b) hit the earth somewhere else before they reach your eyes.
When the sun is on the horizon (e.g. sunrise or sunset), the photons it emits travel through 40x as much atmosphere to reach your eyes as they would at midday. So blue’s 10x propensity to scatter means it’s simply gone by the time it would’ve reached your eyes. Even green is significantly dampened. Red light, which hardly scatters at all, just cruises on through.
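Here is a rough sketch of that attenuation, assuming simple exponential (Beer-Lambert style) extinction and illustrative overhead optical depths that are only order-of-magnitude correct for clean air; the 40x factor is the path-length ratio from the paragraph above:

import math

# Illustrative overhead (zenith) scattering optical depths; round numbers, not data.
optical_depth = {"violet (400nm)": 0.36, "green (530nm)": 0.12, "red (700nm)": 0.04}

for path_multiple, label in [(1, "midday, sun overhead"), (40, "sunset, sun on the horizon")]:
    print(label)
    for color, tau in optical_depth.items():
        surviving = math.exp(-tau * path_multiple)  # fraction never scattered away
        print(f"  {color}: {surviving:.2%} of the light gets through unscattered")

With those illustrative numbers, red still arrives at around 20% strength at sunset while violet is effectively gone, which is the whole story of a red sunset in one loop.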
Again, you can play with this and see for yourself 😎
The answer to this question is the second of three “domains” you should understand in order to have a working model of atmosphere color. The physics are different from the small-molecule scattering above.
Clouds are made up of a huge number of tiny water droplets. These droplets are so small (around 0.02 millimeters in diameter) that they remain floating in the air. But compared to small gas molecules like N₂ and O₂, these droplets are enormous. A single water droplet may contain 100 trillion H₂O molecules!
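As a quick sanity check of that figure (plain Python, standard constants, nothing specific to this article):

import math

droplet_diameter_m = 20e-6                     # 0.02 mm, as above
volume_m3 = (4 / 3) * math.pi * (droplet_diameter_m / 2) ** 3
density_kg_m3 = 1000.0                         # liquid water
molar_mass_kg_per_mol = 0.018                  # H2O
avogadro = 6.022e23

molecules = volume_m3 * density_kg_m3 / molar_mass_kg_per_mol * avogadro
print(f"{molecules:.1e}")                      # ~1.4e14, i.e. on the order of 100 trillion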
So, it’s not as simple as “the photons cause the hundreds of trillions of electrons to jiggle”. Instead, it’s more like the light has entered a very tiny prism or glass bead.
In a prism, white light can reflect around, bounce off exterior or interior surfaces, and even reflect differently depending on frequency — creating a rainbow effect.
The droplet is just as complex. Some of the photons hitting the droplet bounce off the surface. Some enter it, bounce around inside once, twice, etc. — and leave again. Perhaps a few are absorbed. As with a prism, different wavelengths of light will reflect at different angles. The specifics aren’t important — you should just get the general gist.
So whatever white (or slightly yellowish) light came from the direction of the sun leaves the droplet in many random directions. Think of every color, shooting off in different directions! And then multiply that by a quadrillion droplets! In sum, you just see every frequency of photon coming from every part of the cloud.
And that means the cloud is white!
This behavior isn’t limited to the tiny droplets that comprise clouds; it scales up. Anything larger that light can enter — drizzle, raindrops, hail — will also tend towards white.
But that raises the question — what about things in between tiny molecules (N₂, O₂) and the relatively enormous prism-like droplets? How do those things act?
Well, the dust in the sky of Mars is a great example 😉
Why is the sky on Mars red?
The answer to this question is the third of three “domains” you should understand in order to have a working model of atmosphere color. The physics are different from both the small-molecule scattering and large-droplet prism-dynamics above.
The Martian sky is red because it’s full of tiny, iron-rich dust particles that absorb blue — leaving only red to scatter.
Yeah, yeah, I hear you. This answer is can-kicking! “Dust, schmust. Why does it absorb blue?”, you demand.
OK, so the answer is actually fairly straightforward. And it generalizes. Here’s the rule: whenever you have solid particles in the atmosphere (very small ones, approximately the size of the wavelength of visible light), they generally tend to turn the air warm colors — red, orange, yellow.
If you live in an area with wildfires, you’ve probably seen this effect here on Earth!
To really understand the reason, let’s back up and talk about some chemistry.
Compared to tiny gas molecules, solid particles tend to have a much wider range of light frequencies that they absorb.
For instance, we discussed how N₂ and O₂ have specific resonant frequencies at which they hungrily absorb UV photons. Move slightly away from those frequencies, and absorption drops off a cliff.
But even for a tiny dust nanoparticle, there are many constituent molecules, each in slightly different configurations, each being jostled slightly differently by its neighbors. Consequently, the constituent molecules all have slightly different preferences of which frequency to absorb.
Because the “peak” absorption of the molecules is usually violet or ultraviolet (as it is with small gases), and because that absorption is smeared across a broad band rather than a sharp spike, blues/violets get absorbed on the way down and make it to the surface much less than oranges/reds.
Approximate light absorption from Martian dust as a function of wavelength
Of course, a reasonable question is why are blue and violet absorbed so strongly by these dust particles?
Well, those are the only photons with enough energy to bump the dust molecules’ electrons up to a new energy state.
So, the exact specifics depend on the molecules in question, but generally, the level of energy needed to bump up the electron energy state in a dust or smog particle’s molecules corresponds to violet or UV photons.
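For a sense of scale, here is the photon-energy arithmetic (E = hc/λ) for the wavelengths being discussed; the constants are standard and the interpretation in the final comment just restates the paragraph above:

h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

for name, wavelength_nm in [("near UV", 300), ("violet", 400), ("red", 700)]:
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{name:>8} ({wavelength_nm} nm): {energy_eV:.1f} eV")
# Roughly 4.1, 3.1, and 1.8 eV. Per the text, the electron transitions in these dust
# particles need energies toward the top of that range, so violet/UV photons get
# absorbed while red photons don't carry enough energy to be captured.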
This is actually true of solids in general, not just atmospheric dust or aerosols. If you’ve ever heard that purple was “the color of kings” or that the purple dye of antiquity was worth its weight in gold, it’s true! To get something purple, you’d need to find a material whose electrons were excited by low-energy red photons, but had no use for higher-energy violet photons.
So this is why the Martian sky is red — and why reds and browns are more common in nature (for solid things, at least) than purple and blue.
Why is the Martian sunset blue?
It’s less famous than the red daytime sky of Mars, but the Martian sunset is blue!
Martian sunset photo taken by the Spirit rover.
In the last section, we talked about Martian dust absorbing violet/blue. But the dust also scatters light — which it can do in a way totally unrelated to how it absorbs. (Remember, since photons can — and usually do — cruise straight through a molecule, scattering and absorption each have their own interesting frequency-dependent characteristics. They don’t simply sum to 100%.)
Small atmospheric particles, like dust and smog, are equal-opportunity scatterers. The absolute probability they’ll scatter a photon does not change significantly with the photon’s wavelength. However, different-frequency photons can be more or less likely to scatter in different directions.
For our purposes, it suffices to know that Martian dust — like many atmospheric particles of similar size — generally scatters blue light closer to the direction it was already going. Red light has a higher probability of deflecting at a greater angle.
Because red light deflects MORE and blue light LESS when scattering off dust particles, the area directly around the sun will be blue — even though more blue is absorbed en route.
When molecules deflect photons only a tiny angle, it’s called “forward scattering”. Forward scattering is the most pronounced for larger particles, like dust or smog aerosols. It’s actually so strong on Mars that even at midday, red light doesn’t fill the sky evenly — the sky opposite the sun is noticeably darker!
But blue’s tendency to scatter more tightly forward off Martian dust means the Martian sunset has a blue halo.
At the beginning of this article, I said being able to predict something is a good measure of how well you understand it. Let’s do that now. Let’s build a model for predicting the sky color on new planets/moons, or during different scenarios on our own planet.
Here are the three general rules of thumb we’ve already talked about.
Atmospheric gases tend to be much, much smaller than the wavelengths of visible light. In these cases, they tend to preferentially scatter blue/violet/UV. This means that gaseous atmospheres are usually blue or blue-green.
Uranus: upper atmosphere is 98% hydrogen and helium. We don’t have pictures from the surface.
Neptune: upper atmosphere is 99% hydrogen and helium. We don’t have pictures from the surface.
This is pleasingly true for Earth, Uranus, and Neptune.
You may recall Neptune as looking like a much darker, richer blue. However, more recent analysis by Patrick Irwin shows the true color is very likely closer to what’s shown here.
It’s also worth noting that Neptune and Uranus’s blue color is made noticeably richer by the red-absorbing methane in their atmospheres.
When visible light hits particles that are in the ballpark of its own wavelength, things get more complicated and can differ on a case-by-case basis.
These particles are typically either:
Dust: solid particles kicked up from a planet’s surface (like the iron-rich Martian dust above)
Haze: solid particles formed by chemical reactions in the atmosphere
All three significantly dusty/hazy atmospheres in our solar system hold to this rule!
Titan’s sky is orange due to a haze of tholins (organic molecules)
Venus’s sky is yellow due to a haze of sulfurous compounds
When visible light hits clouds of droplets (or ice crystals) that are much bigger than light’s wavelength, the droplets act akin to a vast army of floating prisms, sending out all colors in all directions.
Consequently, clouds tend to appear white, gray, or desaturated hues.
Venus: high-altitude clouds of sulfuric acid (!). The tan/orange is from the aforementioned haze.
Putting it all together
The largest and most complex atmosphere in our solar system is Jupiter. But we know enough to start making some smart guesses about it!
QUIZ: looking at this picture, what can you say about Jupiter’s atmosphere? Answers below the image, so take a guess before scrolling 😉
Here’s a comparison of how a basic guess — informed by our simplistic model — compares to scientific consensus.
Clouds, probably of ice because of coldness
Small atmospheric molecules. But potentially a chemically odd haze, if something absorbed the visible spectrum pretty strongly?
The Galileo probe that descended into Jupiter entered one of these spots. Its most surprising finding was how dry Jupiter’s atmosphere seemed to be. But knowing it fell into a gap between the clouds, this makes total sense. Instead of ice crystals, it found hydrogen and helium.
...
Read the original on explainers.blog »
This project uses a WEMOS D1 Mini ESP8266 module and an Arduino sketch to connect to an NTP (Network Time Protocol) server to automatically retrieve and display the local time on an inexpensive analog quartz clock. The ESP8266 reconnects to the NTP server every 15 minutes, which keeps the clock accurate. The clock also automatically adjusts for daylight saving time.
WEMOS D1 Mini ESP8266 Module with EERAM IC and Components on a Piece of Perfboard
I’m using an analog clock with a quartz movement I found at my local Walmart for $3.88. Whatever analog clock you decide to use, its quartz movement will need to be modified so that it can be controlled by the ESP8266 module. Open up the movement (most of them snap together without any fasteners), disconnect the internal coil of the Lavet stepping motor from its quartz oscillator and then solder a wire to each of the coil’s leads to make connections for the ESP8266. If you search around on the web you’ll find articles showing how others have done it. Be careful when working with the coil. The coil’s wires are typically thinner than a human hair and extremely fragile.
The sketch: AnalogClock.ino should be (I hope) clear enough, but here, in brief, is a summary of how it operates. Ten times each second the ESP8266 compares the time displayed on the analog clock to the actual time retrieved from an NTP server. If the analog clock lags behind the actual time, the ESP8266 advances the clock’s second hand until the clock agrees with the actual time. If the time displayed on the analog clock is ahead of the actual time, the ESP8266 simply waits until the actual time catches up with the analog clock since it can’t move the clock’s hands backwards.
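As a rough illustration of that decision loop, here is a minimal Python sketch (the real implementation is the Arduino sketch; the pulse function below is a hypothetical placeholder, and the ahead/behind rule is one reasonable reading of the behavior described above):

SECONDS_PER_12H = 12 * 60 * 60   # an analog dial repeats every 12 hours

def pulse_second_hand():
    """Placeholder for one Lavet-motor step; the real sketch drives the coil here."""

def tick(displayed, actual):
    """One pass of the ten-times-per-second check.

    Both times are seconds past 12:00 on the dial. Returns the new displayed time.
    """
    if displayed == actual:
        return displayed                          # in sync: nothing to do
    behind = (actual - displayed) % SECONDS_PER_12H
    if behind <= SECONDS_PER_12H // 2:            # clock lags: advance one second
        pulse_second_hand()
        return (displayed + 1) % SECONDS_PER_12H
    return displayed                              # clock is effectively ahead: just wait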
The ESP8266 advances the analog clock’s second hand by generating bipolar pulses, alternately positive and negative, to the clock’s Lavet motor coil. Because of differences in clock mechanisms, you may need to increase or decrease the “PULSETIME” constant in the sketch by a few milliseconds to make your mechanism step reliably. Experimentally, I found that 30 milliseconds works best for my movement.
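And a similarly hedged sketch of the pulse itself, with hypothetical stand-ins for the GPIO calls; only the alternating polarity and the PULSETIME hold are the point here:

import time

PULSETIME = 0.030    # 30 ms worked for the author's movement; tune for yours

def drive_coil(forward: bool):
    """Hypothetical stand-in: energize the Lavet coil in one polarity."""

def release_coil():
    """Hypothetical stand-in: de-energize the coil between steps."""

_polarity = True

def step_second_hand():
    """Advance one second with a single bipolar pulse, flipping polarity each step."""
    global _polarity
    drive_coil(forward=_polarity)     # push the rotor one step
    time.sleep(PULSETIME)             # hold long enough for the step to complete
    release_coil()
    _polarity = not _polarity         # the next step needs the opposite polarity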
The biggest problem with using these cheap analog clocks for a project like this is that the clocks don’t provide any type of feedback to indicate the position of the clock’s hands. Thus if power is interrupted to the ESP8266 controlling the clock, the ESP8266 “forgets” where the clock’s hands are positioned. To get around this problem, the positions of the hour, minute and second hands are stored in a Microchip 47L04 Serial EERAM (4Kbit SRAM with EEPROM backup) and updated each second as the positions of the clock’s hands change. If power is interrupted, the ESP8266 can retrieve the last position of the clock’s hands from the EERAM when power is reapplied.
The very first time that the sketch is run, the user will be directed to a simple web page (see below) served by the ESP8266 which is used to tell it where the analog clock’s hands are initially positioned. From that point on, the ESP8266 will use the data stored in the EERAM to “remember” the positions of the clock’s hands.
Once the ESP8266 finishes its initialization and starts operation, it serves a simple web page showing the clock’s status. The status page can optionally show a graphic image representing the clock’s face, drawn using Scalable Vector Graphics, or HTML Canvas, or no image at all.
Analog Clock Status Page Using Scalable Vector Graphics to Draw the Clock Face
Analog Clock Status Page Using the HTML Canvas Element to Draw the Clock Face
...
Read the original on github.com »
...
Read the original on www.githubstatus.com »
...
Read the original on www.githubstatus.com »
...
Read the original on arxiv.org »
I’ve been running a Discord server for about four and a half years now. When I started streaming during the pando, I had no idea that I would end up building a community. Hell, I’d never even used Discord before. I only knew what it was because I had to stop my students from using it.
Don’t like reading? Click here for the final scores.
But folks kept asking for one. My viewers expected a community hub in which people who found their way to my Twitch streams could find each other, even when I was not live. As the whole streaming thing was itself an experiment in remote learning for me, this seemed a natural extension. So now, I have some mileage on me as a community moderator. I’m intimately familiar with the features Discord offers, and all the arguments against using it. I’m sensitive to them, FOSS dork that I am. I’m also keenly sensitive to the arguments about data loss inside of a forever-chat. In fact, I’m so sensitive to it that I even tried to address the problem in some small way.
But Discord, like all freemium services, is a risk. At any moment their advertising model could become intolerable, or their policy about using my data to train AI could change, or their pricing could get out of control, or some other rent-seeking nonsense common to internet services trying to stretch their profit margin.
I need an exit strategy. Anyone using Discord needs an exit strategy. The trick is to find a landing spot that users will tolerate, and that allows the community to continue in some fashion. Change is loss, and that is excruciatingly true for community platforms. Any switch comes with an attrition rate, meaning the destination better be worth the cost in headcount.
For this reason, and for another project, I’ve been deeply researching Discord alternatives for the better part of a year. Some of my colleagues may think me a bit obsessed about the importance of a “chat app,” but I’m convinced that the communication mechanism for online communities is critical to their success. Choosing a new one could be a matter of life and death for the community. This is a decision we have to get right the first time.
So here, humbly submitted, are my rankings of many of the Discord-like alternatives for maintaining online communities.
I’ve arrived at five broad categories in which an online community platform needs to perform.
Functionality: can it do everything required of a platform for building, organizing, and sustaining a community?
Openness: what access is there to all the tool’s features and code without payment?
Security: how secure are the server and user data against common threats?
Safety: what features are available to moderate the community and protect it from malicious or unwanted behavior?
Decentralization: how reliant is the service on single points of failure?
These will be evaluated on a scale from 1-5, with 5 being the “best” for each criterion.
I’ve done my best to consider multiple use cases and threat models in these scores. I am, however, a flawed, biased meatsack with limited visibility. I may not have predicted your needs precisely. I may have omitted your favorite option. If so, I hope you’ll afford me some grace. I did the best I could.
Oh, and I’m not touching Slack or Teams. Reasons should be obvious.
We’ll start with Discord as a baseline.
As a product, Discord is very, very good. It serves its purpose with an absolute minimum of friction—both from a user and administrator perspective. Even without paying, the features out of the box are well-considered and helpfully implemented. What is the product, anyway? Sometimes it seems like Discord themselves don’t really know. While they bristle at being called a “Slack clone,” there’s a reason many companies (especially tech startups) choose Discord as both their internal team communication tool and their customer engagement tool. Some truly benighted groups even choose to document their product with it.
Whatever Discord thinks it is, the purpose of a system is what it does, and Discord builds online communities. Say what you want about the company, the closed nature, the increasingly-icky ad model, the core of Discord continues to work well for bringing people together in quasi-public online spaces. The medium of real-time text, aka instant messaging, aka IRC-again-but-not-IRC, has become a default, but one not without limitations. For example, what does this do to your heart rate:
Right?! We’ve embraced immediacy at the expense of depth. Also, in Discord’s case, accessibility. Searching Discord is a proper disaster. While messages are more or less permanent, it is by no means easy to find them again, weeks/months/years later.
But let’s get into the criteria before this becomes a treatise on the nature of the modern web.
As mentioned, Discord is highly functional—for what it does. But its limitations do start to grate as time goes on. Online communities have a predictable lifecycle, in which the excitement of the early days is well-served by real-time chat. The memes are flying; people are excited to meet each other; the future holds boundless possibilities. The space will categorize and fragment, trying to organize the chaos. Over time, most of the messages come from a core group of contributors, with more occasional arrivals and questions from newcomers. This is as it should be. But what happens to the history of that community as it heads up the scroll? How does the past usefully inform the future?
Discord has made some affordances for this with “Forum” type channels. Even so, the past is hard to explore.
Discord is not open, so not much to say on that front.
Discord messages are not end-to-end encrypted. Pretty famously, Discord will give up your data for law enforcement. Although they’ve recently added end-to-end encryption for video and audio, the implementation is clunky. And of course, all the text data in a Discord server is unencrypted. But hey, at least they support MFA?
Safety, in the sense of “Trust and Safety,” may be Discord’s greatest strength. I have greatly appreciated all the moderation tools at my disposal. Even a modestly sized server like mine (~3000 users) would be impossible to manage without automatic word catching, granular permissions on channels and roles, and multiple response options including timeouts, kicks, and bans. Discord also has a very involved onboarding flow that makes certain there is an agreement to community rules before users can participate.
And need we even mention decentralization here? If Discord fails, your community goes dark.
Best for: communities who value secrecy above all.
I love Signal. Like, a lot. I’m a daily user and a donor. I’ve even convinced most of my friends and family to use it as our primary mode of text communication. And yes, I’ve organized a community with it—one for which privacy was (at the time) of paramount importance. I am deeply familiar with all advantages and drawbacks of Signal.
As a secure chat, Signal does just fine. Well, better than fine from a cryptography perspective. It is the gold standard in end-to-end encrypted communications for good reason. But the strongest cryptography in the world is meaningless for a community if the platform is unusable. Fortunately, that’s not the case for Signal. Emoji reactions, stickers, (some) formatted text, and even voice/video calls make it an indispensable tool for secure communications that feel familiar and feature-filled enough for normies. Nobody will be totally lost moving from another chat app to Signal.
If you’re looking for nothing but chat, Signal is fantastic. But many aspects of community-building online are simply unavailable here. To start, there are only group chats. There is no conversation threading or channels to keep conversations organized. You can have multiple chats, but that gets messy quickly.
I can’t even pin posts. In fact, post searchability is a limited feature by design. Most group chats enable disappearing messages. That’s great to prevent incriminating evidence from piling up; it’s terrible for reviewing what a community discussed previously.
Also absent: granular roles in each chat, or anything resembling moderation tools. As an admin, I can only ban users for unwanted behavior. I can neither automatically prevent harassment nor provide a more measured response than the banhammer.
I should mention that almost all these tradeoffs are accepted limitations in service of Signal’s primary objectives.
On the point of decentralization, Signal has none. As Meredith Whitaker recently wrote, all Signal app traffic flows through the same cloud infrastructure, much of which depends on AWS.
If your community’s threat model is such that eliminating all possible points of evidence collection against you matters above all else, Signal is the clear winner. Maintaining that level of operational security naturally comes at the cost of some other creature comforts a community could come to covet.
I didn’t set out to alliterate the hell out of that sentence, but I didn’t stop it either.
Best for: communities who value independence over all, with security/privacy a runner-up.
Oh, Matrix. You are the football that I, in my zigzag-stripe shirt, keep trying to kick. In theory, the Matrix protocol and Element, its flagship client, should be the ideal for decentralized, encrypted communications. Using Element feels a whole lot like using Discord. Heck, it can even bridge communications from Discord and other platforms. Sadly, as time goes on, the nicks from the rough edges start to accumulate.
Before going further, we need to define some terms. Matrix is the federated, encrypted messaging protocol published and maintained by the Matrix Foundation. Synapse is their “reference implementation” server technology written in Python. Synapse is the most common way folks start their own Matrix servers. There are other server implementations, now including “Synapse Pro,” which I guess is a partial rewrite of Synapse in Rust? Element is the first-party client that users would use to connect to Matrix. They need an account on a server, and of course matrix.org is the flagship Matrix server where the vast majority of users have their accounts. But you can point Element at any Matrix server to log in, as long as you have an account on that server.
Confused yet? If users are unwilling to select a Mastodon server, do you think they’d be willing to put up with this?
Ah, but I get ahead of myself. Let’s start with what’s good.
Matrix uses a similar end-to-end cryptography scheme to Signal. “Rooms” (chats, channels) are not encrypted by default, but they can be made so. There have been noted issues with the previous cryptography library used by Element, but the newer vodozemac library is in much better shape. Of course, not all Matrix clients use the new hotness.
A given Matrix server can create multiple rooms (channels), and even group them into “spaces” such that they appear quite similar to Discord servers.
Inside the rooms, things feel familiar. We have threads, emoji reacts, and message search (sorta). On some clients (but not Element), there is the possibility of custom emoji.
And that’s…it. Element promises more, like native video conferencing, but heaven help you if you’re trying to self-host it. It is technically possible, but by no means simple.
“Technically possible, but by no means simple” aptly sums up the entire Matrix experience, actually.
I ran a private Matrix server for about a year and a half. Why private? In two public Matrix rooms I had joined—including the room for Synapse admins—I experienced a common attack in which troll accounts spam the room with CSAM material. Horrible, but not just for the participants and admins in the room. Through the magic of federation, every server who has a user participating in the room now has a copy of the CSAM material, and has to take action to remove it. This requires a manual curl request on the server itself, because Synapse has an appalling lack of moderation tools. It’s so bad that, without third-party tooling, you can’t even ban a user outright from a server; you have to manually ban them from every single room.
Then came September 2, 2025. The outage of matrix.org caused by drive failures was not an indictment of Matrix’s database management or recovery process—in fact, I was quite impressed with their response. But it did put the lie to Matrix’s decentralization for me. Almost none of my friends could use Matrix, even though I was hosting my own server. The onboarding pipeline (especially via Element) is so focused on the flagship server, I daresay it comprises the plurality of Matrix accounts. It’s not easy to get any statistics for all Matrix users, but that is my guess. How “decentralized” is that, really? Just because something can be decentralized doesn’t make it so.
I’m probably a little too close to this one. I so badly wanted Matrix to work, and I tried to make it work for my purposes for a long time. Ultimately, the pain points overcame the benefits. But if you care most about an intersection of message encryption, federation, and decentralization, and you’re willing to put in quite a lot of admin time, Matrix can be a viable community chat platform.
Best for: communities that want a smooth Slack-like experience and are willing to pay for independence
What if you could self-host Slack? That’s basically the Rocket.Chat experience. It’s slick, easy to get set up, and loaded with integrations. All of this comes, as you might expect, at a price. While there is an “open source” Community Edition, its featureset is limited, and you may quickly find yourself looking at the paid plans for additional features or support. Rocket.Chat is one of several platforms that follow this freemium model. I don’t really begrudge them this approach, but it can be frustrating for a community just finding its feet. To their credit, they do offer discounts for open source projects, not-for-profits, and other organizations on a per-request basis.
Rocket.Chat does support end-to-end encrypted communications. Key management can be a little clunky, but I was impressed it had the feature at all.
Be aware, however, that these centrally-managed services will of course allow administrators to audit messages. That is a documented part of the moderation flow for Rocket.Chat. If you demand anonymity, or need administrators to be unable to view your messages (what are you doing in that community?), Rocket.Chat might not be right for you.
I’ll quickly mention why I gave it a score of 3 on decentralization. Seems a bit high, right? Until recently, Rocket.Chat supported Matrix federation. Since October 2025, it has pursued a native federation scheme that would allow separate Rocket.Chat instances to share rooms and DMs across server boundaries. This, although not open source, is extremely compelling.
I really enjoyed my experimentation with Rocket.Chat, and found myself thinking seriously about it as an alternative to where I was. The cost is just steep.
Best for: A split between forums and real-time chat
I’ve been playing with Zulip for a bit now, and I still don’t really know what to make of it. From one perspective, it has a bit of an identity crisis, unsure of whether it’s a forum or a chat platform. From another perspective, this dual identity is its greatest strength: real-time when you want it, asynchronous when you don’t.
Zulip is self-hostable, with some caveats. As the plans and pricing detail, anything beyond 10 users starts costing some cash. It adds up quickly. And while seemingly everything can be done in a self-hosted manner, you’re at the mercy of some truly byzantine documentation.
While there is great functionality to be found, it comes at a rather steep price for organizations of any size—whether administrative overhead, or just plain cash for the managed services. Although to their credit, they do offer a community plan with many of those higher-tier features available for qualifying organizations.
One feature you won’t find anywhere is end-to-end encryption. The developers seem rather against the idea. Multi-factor authentication must be enabled in the config files, not the admin frontend—hardly ideal.
Unless I’m missing it, there do not appear to be any serious content moderation tools in Zulip. The community moderation toolkit is, in my opinion, the barest of essentials. Nearly all of these capabilities are reactive, not proactive. It seems the expectation is good-faith participation, with those agreements and guarantees handled elsewhere. Having been on the wrong end of malicious intent, I don’t feel safe enough with these tools.
Lastly, on decentralization, it’s mostly a miss. Even for self-hosted plans, anything above the free tier requires a zulip.com account for plan management. And federation? Forget about it. Although every Zulip server can technically host multiple Zulip instances, they don’t interact with one another.
If anything, writing this overview has left me more confused about Zulip than when I began. I just don’t know where it fits, or who can afford these prices for a growing community.
Best for: Fortune 100s and governments
Take a look at the front page of the Mattermost website, and you’ll get an idea of the kind of organization they expect to be using this thing. Odds are, your nascent online community ain’t that. While the software may superficially look like some of these others, its intention is entirely other. Community building is not what’s going on here. Rather, Mattermost’s objective is highly-focused, integrated workflows that involve human communication alongside machine automation. Business operations are what…matter most.
Mattermost describes itself as “Open core,” and the core is…rather tiny. Even when installing the self-hosted version, you’ll soon need a rather expensive license for real work. Starting at $10/user is a clear indicator of the intended customer base. It ain’t me, that’s for sure.
Mattermost prides itself on a certain kind of security—specifically, the regulatory kind. Configurations for all manner of compliance regimes are provided in the documentation. Normal security is present as well, including MFA. Not so much end-to-end encryption, although mention is made of encrypting the PostgreSQL database. That’s novel, although not a solution to the problem addressed by E2EE.
I honestly don’t think Mattermost’s developers are capable of imagining a positive argument for an audit-resistant application. This thing is designed for monitoring user activity six ways from Sunday.
Consequently, “safety” in the way we’ve defined it here is absent from Mattermost’s conception of the universe. If you’re logging on to a Mattermost server, about a thousand other trust mechanisms are in place to guarantee you won’t act like a doofus on this app.
Hardly a point to mentioning decentralization here, beyond the possibility of self-hosting. Ultimately though, you only get what your license key allows, and since the server is only open core, Mattermost itself is quite the point of failure.
Best for: anything but real-time chat, really.
I’m gonna be honest: I kind of love Discourse. I’m not sure I have a reason to deploy it, but I want to. Everything Joan Westenberg writes in this piece in praise of Discourse resonates with me. Community for the long-haul? Transparency in governance? Built-in systems for establishing human trust?
But Discourse has one significant difference from everything else on this list: it is primarily a forum, not a real-time chat app. I’m not saying that’s a bad thing, necessarily, but it sure is different. If your community expects instantaneous communication, Discourse may be a big adjustment. Or it might not be sufficient on its own for your needs.
But what does it do well? Forums! It’s very easy to navigate categories and topics. The UI provides clear signals for when something happened. Oh, and search is simple.
Maybe the best way to think of Discourse is as an anti-Discord. It’s everything Discord isn’t: asynchronous, open source, and self-hostable.
Discourse is 100% open source. I’m running it right now in my homelab, with access to all the plugins and features I’d expect, costing me only the time it took to install.
I was additionally quite impressed with the moderation tools. Not only are there plenty of tools to track user activity, but the moderation decisions are public by default. This is a good thing! The community can hold its leaders accountable for upholding their end of the bargain: to act in good faith in support of the community.
One area in which it falters a bit is, of course, end-to-end encryption. Very few of these tools enable it, and when they do, it can be clunky. It’s entirely possible that the right option for a community is one of these and Signal for sensitive, out-of-band communications.
If you start to look around, you’ll notice Discourse fora everywhere. There’s a good reason for that! The software is rock solid for what it is. And maybe your community needs its depth of features more than it needs instantaneous messaging.
Best for: Appreciating how much work it takes to make one of these work
Stoat, née Revolt, was meant to be an open source Discord alternative. Recently, they received a cease-and-desist regarding the name Revolt, and renamed to a…weasel.
Anyway this thing is so far from being ready for prime time, I only include it here to call out the project. I wish them the best and hope for good things, especially since you can self-host the server. But a lack of stability and features prevent this from being useful for anything beyond experimentation. Maybe someday.
The Tool is Not the Community
Choosing a platform on which to build a community is just the beginning. It’s vitally important, yet insufficient to a community’s success. Tools do not make a culture; the people engaging on them do. Most of my time building the culture of TTI has not been a technical endeavor. What we have—and I think it’s pretty special—has little to do with Discord’s featureset. It just happens to be where the people are. The options presented to you here allow you to seek a path that aligns with your objectives, principles, and needs at a purely mechanical level. The rest depends on the human element.
...
Read the original on taggart-tech.com »
We establish a positive association between hard-braking events (HBEs) collected via Android Auto and actual road segment crash rates. We confirm that roads with a higher rate of HBEs have a significantly higher crash risk and suggest that such events could be used as leading measures for road safety assessment.
Traffic safety evaluation has traditionally relied on police-reported crash statistics, often considered the “gold standard” because they directly correlate with fatalities, injuries, and property damage. However, relying on historical crash data for predictive modeling presents significant challenges, because such data is inherently a “lagging” indicator. Also, crashes are statistically rare events on arterial and local roads, so it can take years to accumulate sufficient data to establish a valid safety profile for a specific road segment. This sparsity paired with inconsistent reporting standards across regions complicates the development of robust risk prediction models.

Proactive safety assessment requires “leading” measures: proxies for crash risk that correlate with safety outcomes but occur more frequently than crashes. In “From Lagging to Leading: Validating Hard Braking Events as High-Density Indicators of Segment Crash Risk”, we evaluate the efficacy of hard-braking events (HBEs) as a scalable surrogate for crash risk. An HBE is an instance where a vehicle’s forward deceleration exceeds a specific threshold (-3 m/s²), which we interpret as an evasive maneuver. HBEs facilitate network-wide analysis because they are sourced from connected vehicle data, unlike proximity-based surrogates like time-to-collision that frequently necessitate the use of fixed sensors.

We established a statistically significant positive correlation between the rates of crashes (of any severity level) and HBE frequency by combining public crash data from Virginia and California with anonymized, aggregated HBE information from the Android Auto platform.
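To make the definition concrete, here is a toy sketch (not the Android Auto pipeline) that flags HBEs in a hypothetical once-per-second speed trace using the -3 m/s² threshold above:

speeds_mps = [20.0, 20.1, 19.8, 16.2, 12.5, 12.4, 12.3]   # made-up 1 Hz speed samples
HBE_THRESHOLD = -3.0                                       # m/s^2, per the definition above

hbe_seconds = [
    i for i in range(1, len(speeds_mps))
    if (speeds_mps[i] - speeds_mps[i - 1]) <= HBE_THRESHOLD  # acceleration over one second
]
print(hbe_seconds)   # -> [3, 4]: two consecutive seconds of hard braking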
To validate the utility of this metric, we analyzed 10 years of public crash data alongside aggregated HBE measurements. The immediate advantage of HBEs is the density of the signal. Our analysis of road segments in California and Virginia revealed that the number of segments with observed HBEs was 18 times greater than those with reported crashes. While crash data is notoriously sparse — requiring years to observe a single event on some local roads — HBEs provide a continuous stream of data, effectively filling the gaps in the safety map.
HBEs are observed on 18x more road segments compared to reported crashes.
The core objective was to determine if a high frequency of HBEs is causally linked to a high rate of crashes. We employed negative binomial (NB) regression models, a standard approach in the Highway Safety Manual (HSM), to account for the higher-than-expected variance (overdispersion) typically found in crash data. Our model structure controlled for various confounding factors, including road dynamics such as the presence of ramps and changes in the number of lanes.

The results demonstrated a statistically significant association between HBE rates and crash rates across both states. Road segments with higher frequencies of hard braking consistently exhibited higher crash rates, a relationship that holds true across different road types, from local arterials to controlled-access highways.
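For readers who want to see the shape of such a model, here is a hedged sketch in Python with statsmodels. The column names, the toy data, and the fixed dispersion parameter are all placeholders; the paper’s actual specification is not reproduced here:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy segment-level data; in practice this would be thousands of road segments.
segments = pd.DataFrame({
    "crashes":      [0, 2, 1, 5, 0, 3],                 # crash counts per segment
    "hbe_rate":     [0.1, 1.2, 0.8, 4.5, 0.2, 2.9],     # hard-braking events per unit exposure
    "has_ramp":     [0, 1, 0, 1, 0, 1],                 # ramp present on the segment
    "lane_change":  [0, 1, 1, 2, 0, 1],                 # change in number of lanes
    "log_exposure": [8.1, 9.3, 8.7, 10.2, 7.9, 9.8],    # e.g. log of vehicle-miles traveled
})

# Negative binomial count model with an exposure offset, in the spirit of
# HSM-style safety performance functions. alpha is fixed here for simplicity;
# a full analysis would estimate the dispersion instead.
model = smf.glm(
    "crashes ~ hbe_rate + has_ramp + lane_change",
    data=segments,
    offset=segments["log_exposure"],
    family=sm.families.NegativeBinomial(alpha=1.0),
).fit()
print(model.summary())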
Crash Rate vs. HBE rate for different types of roads in California and Virginia.
The regression analysis also quantified the impact of specific infrastructure elements. For instance, the presence of a ramp on a road segment was positively associated with crash risk in both states, likely due to the weaving maneuvers required for merging.
To visualize the practical application of this metric, we examined a freeway merge segment in California connecting Highway 101 and Highway 880. Historical data indicates this segment has an HBE rate approximately 70 times higher than the average California freeway, and has averaged a crash every six weeks for a decade.
Freeway merge segment in California Bay Area with one crash every six weeks and a 70x higher than average HBE rate.
When analyzing the connected vehicle data for this location, we found that it ranked in the top 1% of all road segments for HBE frequency. The HBE signal successfully flagged this outlier without relying on the decade of crash reports it took to statistically confirm the risk. This alignment validates HBEs as a reliable proxy capable of identifying high-risk locations even in the absence of long-term collision history.
Validating HBEs as a reliable proxy for crash risk transforms a raw sensor metric into a trusted safety tool for road management. This validation supports the use of connected vehicle data for network-wide traffic safety assessment, offering enhanced spatial and temporal granularity. While these results indicate utility for road segment risk determination, they do not draw conclusions about location-independent driving behavior risk.

The Mobility AI team at Google Research is working with Google Maps Platform to externalize these HBE datasets as part of the Roads Management Insights offering. By integrating these high-density signals, transportation agencies can access aggregated, anonymized data that is substantially fresher and covers a wider breadth of the road network than traditional crash statistics. This allows for the identification of high-risk locations using leading indicators rather than relying solely on lagging and sparse collision records.
While this study confirms that HBEs are a robust leading indicator of crash risk, there are opportunities to further refine this signal. We are currently investigating mechanisms to spatially cluster homogenous road segments to reduce data sparsity even further. Addressing these limitations will enable the transition from risk identification to targeted engineering, where high-density data informs specific infrastructure interventions ranging from signal timing adjustments and improved signage to the geometric redesign of high-risk merge lanes.
This work was a collaborative effort involving researchers from Google and Virginia Tech. We thank our co-authors Shantanu Shahane, Shoshana Vasserman, Carolina Osorio, Yi-fan Chen, Ivan Kuznetsov, Kristin White, Justyna Swiatkowska, and Feng Guo. We also appreciate the contributions of Aurora Cheung, Andrew Stober, Reymund Dumlao, and Nick Kan in translating this research into practical applications.
...
Read the original on research.google »
An Irish man has spent five months in US Immigration and Customs Enforcement detention and faces deportation despite having a valid work permit and no criminal record.
Seamus Culleton was a “model immigrant” who had become the victim of a capricious and inept system, said his lawyer, Ogor Winnie Okoye.
Originally from County Kilkenny, Culleton is married to a US citizen and runs a plastering business in the Boston area. While buying supplies at a hardware store on 9 September 2025 he was arrested in a random immigration sweep, according to Okoye, of BOS Legal Group in Massachusetts.
Culleton entered the US in 2009 on a visa waiver programme and overstayed the 90-day limit but, after marrying a US citizen and applying for lawful permanent residence, he obtained a statutory exemption that allowed him to work, Okoye told the Guardian. “He had a work-approved authorisation that is tied to a green card application,” she said.
Culleton’s detention prevented him from attending the final interview in October, she said. “It’s inexplicable that this man has been in detention. It does not make sense. There’s no reason why the government shouldn’t just release him and allow him to attend the interview that will confirm his legal status.”
After being held in ICE facilities near Boston and in Buffalo, New York, he was flown to a facility in El Paso, Texas, where he is sharing a cell with more than 70 men. Culleton said the detention centre was cold, damp and squalid, and there were fights over insufficient food — “like a concentration camp, absolute hell”, he told the Irish Times, which first reported the story on Monday.
Culleton said that when he was arrested he was carrying a Massachusetts driving licence and a valid work permit issued as part of an application for a green card that he initiated in April 2025. He has a final interview remaining.
When asked at the Buffalo facility to sign a form agreeing to deportation, Culleton said he refused and instead ticked a box expressing a wish to contest his arrest, which he intended to do on the grounds that he was married to a US citizen, Tiffany Smyth, and had a valid work permit.
At a November hearing a judge approved his release on a $4,000 bond, which Smyth paid, but authorities continued to detain Culleton, initially without explanation.
When his attorney appealed to a federal court, two ICE agents said that in Buffalo Culleton had signed documents agreeing to be deported. Culleton said he did not agree and that the signatures were not his. “My whole life is here. I worked so hard to build my business. My wife is here.”
The judge noted irregularities in ICE’s court documents but sided with the agency. Under US law Culleton cannot appeal but he wants handwriting experts to examine the signatures and believes a video of his interview with ICE in Buffalo would prove he refused to sign deportation documents.
Previous high-profile cases involving people from Ireland include Cliona Ward, who had a green card but was detained by ICE for 17 days over a criminal record from more than 20 years ago. A visiting Irish tech worker who overstayed his visa by three days and agreed to deportation was jailed for about 100 days.
Culleton told the Irish Times he did not know what would happen next and that the uncertainty was “psychological torture”. Facility officials tried to get him to sign a deportation order last week but he refused, he said.
Okoye said the US government had discretionary power to release her client and was acting in an inept and capricious manner towards an immigrant who was following the green card process. “He’s never been arrested. He’s married to a US citizen. He owns his own business. He’s established a life here. He’s done everything right.”
Smyth said she had endured five months of heartbreak, stress, anxiety and anger. “I would never wish this on anyone or their family. I am still praying for a miracle every day.”
After a video call with her husband on Sunday night — their first in five months — Smyth told Culleton’s family in Ireland he had lost weight and hair and had sores and infections. “There’s no hygiene there. He’s been asking for antibiotics for the last four weeks,” his sister, Caroline Culleton, told RTÉ. The detainees were seldom allowed out for exercise or air, she said.
“It’s heartbreaking. We’ve talked about what he endures physically but what about his mental health? How will he deal with this when he gets out? What long-term effect it has on him?”
Last week the Irish government said the number of Irish citizens seeking consular assistance about deportation from the US jumped from 15 in 2024 to 65 last year.
...
Read the original on www.theguardian.com »
Streaming speech recognition running natively and in the browser. A pure Rust implementation of Mistral’s Voxtral Mini 4B Realtime model using the Burn ML framework.
The Q4 GGUF quantized path (2.5 GB) runs entirely client-side in a browser tab via WASM + WebGPU. Try it live.
# Download model weights (~9 GB)
uv run --with huggingface_hub \
hf download mistralai/Voxtral-Mini-4B-Realtime-2602 --local-dir models/voxtral
# Transcribe an audio file (f32 SafeTensors path)
cargo run --release --features "wgpu,cli,hub" --bin voxtral-transcribe -- \
--audio audio.wav --model models/voxtral
# Or use the Q4 quantized path (~2.5 GB)
cargo run --release --features "wgpu,cli,hub" --bin voxtral-transcribe -- \
--audio audio.wav --gguf models/voxtral-q4.gguf --tokenizer models/voxtral/tekken.json
# Build WASM package
wasm-pack build --target web --no-default-features --features wasm
# Generate self-signed cert (WebGPU requires secure context)
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
-keyout /tmp/voxtral-key.pem -out /tmp/voxtral-cert.pem \
-days 7 -nodes -subj "/CN=localhost"
# Start dev server
bun serve.mjs
Open https://localhost:8443, accept the certificate, and click Load from Server to download the model shards. Record from your microphone or upload a WAV file to transcribe.
Hosted demo on HuggingFace Spaces if you want to skip local setup.
The upstream mistral-common library left-pads audio with 32 silence tokens (at 12.5 Hz). After the mel/conv/reshape pipeline, this covers only 16 of the 38 decoder prefix positions with silence — the remaining 22 contain actual audio. The f32 model handles this fine, but Q4_0 quantization makes the decoder sensitive to speech content in the prefix: audio that starts immediately with speech (mic recordings, clips with no leading silence) produces all-pad tokens instead of text.
The left padding is increased to 76 tokens, which maps to exactly 38 decoder tokens of silence and covers the full streaming prefix. See src/audio/pad.rs for details.
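A tiny sketch of that arithmetic; the 2:1 ratio of 12.5 Hz padding tokens to decoder prefix positions is inferred from the numbers above rather than taken from the upstream code:

DECODER_PREFIX_POSITIONS = 38
TOKENS_PER_POSITION = 2            # inferred: 32 pad tokens covered only 16 positions

upstream_pad_tokens = 32
covered = upstream_pad_tokens // TOKENS_PER_POSITION             # 16 silent positions
speech_in_prefix = DECODER_PREFIX_POSITIONS - covered            # 22 positions of real audio

full_cover_pad = DECODER_PREFIX_POSITIONS * TOKENS_PER_POSITION  # 76 tokens covers all 38
print(covered, speech_in_prefix, full_cover_pad)                 # -> 16 22 76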
No sync GPU readback — all tensor reads use into_data_async().await
# Native (default features: wgpu + native-tokenizer)
cargo build --release
# With all features
cargo build --release --features "wgpu,cli,hub"
# WASM
wasm-pack build --target web --no-default-features --features wasm
# Unit + integration tests (requires GPU for full suite)
cargo test --features "wgpu,cli,hub"
# Lint
cargo clippy --features "wgpu,cli,hub" -- -D warnings
cargo clippy --no-default-features --features wasm --target wasm32-unknown-unknown -- -D warnings
# E2E browser test (requires Playwright + model shards)
bunx playwright test tests/e2e_browser.spec.ts
GPU-dependent tests (model layer shapes, Q4 matmul, WGSL shader correctness) are skipped in CI since GitHub Actions runners lack a GPU adapter. These tests run locally on any machine with Vulkan, Metal, or WebGPU support.
The GGUF file must be split into shards of 512 MB or less to stay under the browser’s ArrayBuffer limit:
split -b 512m models/voxtral-q4.gguf models/voxtral-q4-shards/shard-
The dev server and E2E test discover shards automatically from models/voxtral-q4-shards/.
Coming soon: accuracy (WER) and inference speed benchmarks across native and browser targets.
...
Read the original on github.com »