10 interesting stories served every morning and every evening.
Everyone in San Francisco is talking about the singularity. At dinner parties, at coffee shops, at the OpenClaw meetup where Ashton Kutcher showed up for some reason. The conversations all have the same shape: someone says it’s coming, someone says it’s hype, and nobody has a number.
This seems like the wrong question. If things are accelerating (and they measurably are), the interesting question isn’t whether. It’s when. And if it’s accelerating, we can calculate exactly when.
I collected five real metrics of AI progress, fit a hyperbolic model to each one independently, and found the one with genuine curvature toward a pole. The date has millisecond precision. There is a countdown.
Five metrics, chosen for what I’m calling their anthropic significance (anthropic here in the Greek sense (“pertaining to humans”), not the company, though they appear in the dataset with suspicious frequency):
MMLU: benchmark performance of frontier models
Tokens per dollar: cost collapse of intelligence (log-transformed, because the Gemini Flash outlier spans 150× the range otherwise)
Release intervals: time between major model releases (inverted, because shorter = better)
arXiv “emergent”: the count of AI papers about emergence
Copilot: adoption, with exactly two public data points
Each metric normalized to $[0, 1]$. Release intervals inverted (shorter = better). Tokens per dollar log-transformed before normalizing (the raw values span five orders of magnitude; without the log, Gemini Flash at 2.5M tokens/$ dominates the fit and everything else is noise). Each series keeps its own scale, no merging into a single ensemble.
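A minimal sketch of that preprocessing (the function name and the sample numbers are mine, not the author’s):

```python
import numpy as np

def normalize(raw, invert=False, log=False):
    """Map a raw metric series onto [0, 1], optionally log-transforming
    or inverting it first."""
    x = np.asarray(raw, dtype=float)
    if log:
        x = np.log10(x)        # tame the five-orders-of-magnitude spread
    if invert:
        x = -x                 # shorter release intervals score higher
    return (x - x.min()) / (x.max() - x.min())

tokens_per_dollar = normalize([3_000, 60_000, 2_500_000], log=True)
release_intervals = normalize([400, 250, 90], invert=True)  # days, made up
```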
An exponential approaches infinity only as $t \to \infty$. You’d be waiting forever. Literally.
We need a function that hits infinity at a finite time. That’s the whole point of a singularity: a pole, a vertical asymptote, the math breaking:

$$y(t) = \frac{a}{t_s - t} + b$$
As $t \to t_s$, the denominator goes to zero. $y \to \infty$. Not a bug. The feature.
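A quick numeric sketch of the pole, with illustrative constants and the date placed arbitrarily at 2034:

```python
def hyperbola(t, a=1.0, b=0.0, t_s=2034.0):
    """y(t) = a / (t_s - t) + b: finite for t < t_s, divergent at t_s."""
    return a / (t_s - t) + b

for t in (2030.0, 2033.0, 2033.9, 2033.999):
    print(t, hyperbola(t))
# 0.25, 1.0, 10.0, 1000.0: the pole does the work, not the parameters
```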
Polynomial growth ($t^k$) never reaches infinity at finite time. You could wait until heat death and $y$ would still be finite. Polynomials are for people who think AGI is “decades away.”
Exponential growth reaches infinity at $t = \infty$. Technically a singularity, but an infinitely patient one. Moore’s Law was exponential. We are no longer on Moore’s Law.
Hyperbolic growth is what happens when the thing that’s growing accelerates its own growth. Better AI → better AI research tools → better AI → better tools. Positive feedback with supralinear dynamics. The singularity is real and finite.
The procedure is straightforward, which should concern you.
The model fits a separate hyperbola to each metric:

$$y_i(t) = \frac{a_i}{t_s - t} + b_i$$
Each series gets its own scale $a_i$ and offset $b_i$. The singularity time $t_s$ is shared. MMLU scores and tokens-per-dollar have no business being on the same y-axis, but they can agree on when the pole is.
For each candidate $t_s$, the per-series fits are linear in $a_i$ and $b_i$. The question is: which $t_s$ makes the hyperbola fit best?
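Concretely: substitute $x = 1/(t_s - t)$ and the hyperbola becomes a straight line in $x$, so each candidate date costs one least-squares fit. A sketch, assuming each series arrives as a pair of arrays:

```python
import numpy as np

def fit_series(t, y, t_s):
    """For a fixed t_s, y = a/(t_s - t) + b is linear in (a, b)
    after the substitution x = 1/(t_s - t)."""
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    x = 1.0 / (t_s - t)
    a, b = np.polyfit(x, y, deg=1)       # ordinary least squares
    resid = y - (a * x + b)
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
    return a, b, r2
```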
Here’s the thing nobody tells you about fitting singularities: most metrics don’t actually have one. If you minimize total RSS across all series, the best $t_s$ is always at infinity. A distant hyperbola degenerates into a line, and lines fit noisy data just fine. The “singularity date” ends up being whatever you set as the search boundary. You’re finding the edge of your search grid, not a singularity.
So instead, we look for the real signal. For each series independently, grid search $t_s$ and find the R² peak: the date where hyperbolic fits better than any nearby alternative. If a series genuinely curves toward a pole, its R² will peak at some finite $t_s$ and then decline. If it’s really just linear, R² will keep increasing as $t_s \to \infty$ and never peak. No peak, no signal, no vote!
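A sketch of that per-series search, reusing `fit_series` from above (grid bounds and step are arbitrary choices of mine):

```python
def r2_peak(t, y, horizon=50.0, step=0.1):
    """Grid-search candidate poles past the last data point; report a date
    only if R² has an interior maximum rather than rising to the boundary."""
    t = np.asarray(t, dtype=float)
    grid = np.arange(t.max() + step, t.max() + horizon, step)
    r2 = np.array([fit_series(t, y, ts)[2] for ts in grid])
    i = int(np.argmax(r2))
    peaked = 0 < i < len(grid) - 1   # a maximum at the boundary = no signal
    return (grid[i] if peaked else None), r2
```

A series that returns `None` gets no vote on the date.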
One series peaks! arXiv “emergent” (the count of AI papers about emergence) has a clear, unambiguous R² maximum. The other four are monotonically better fit by a line. The singularity date comes from the one metric that’s actually going hyperbolic.
This is more honest than forcing five metrics to average out to a date that none of them individually support.
Same inputs → same date. Deterministic. The stochasticity is in the universe, not the model.
The fit converged! Each series has its own R² at the shared $t_s$, so you can see exactly which metrics the hyperbola captures well and which it doesn’t. arXiv’s R² is the one that matters. It’s the series that actually peaked.
The 95% confidence interval comes from profile likelihood on $t_s$. We slide the singularity date forward and backward until the fit degrades past an F-threshold.
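A sketch of that interval, assuming the textbook F-test convention for profiling a single parameter (the post doesn’t spell out its exact threshold), again reusing `fit_series` from above:

```python
from scipy import stats

def profile_ci(t, y, grid, alpha=0.05, n_params=3):
    """Accept every candidate t_s whose RSS stays within an F-threshold
    of the best fit; the accepted range is the confidence interval."""
    t, y, grid = (np.asarray(v, dtype=float) for v in (t, y, grid))
    rss = []
    for ts in grid:
        a, b, _ = fit_series(t, y, ts)
        rss.append(np.sum((y - (a / (ts - t) + b)) ** 2))
    rss = np.asarray(rss)
    dof = len(y) - n_params                  # params: a, b, and t_s itself
    f_crit = stats.f.ppf(1 - alpha, 1, dof)
    ok = rss <= rss.min() * (1 + f_crit / dof)
    return grid[ok].min(), grid[ok].max()
```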
How much does the date move if we drop one metric entirely?
If dropping a single series shifts $t_s$ by years, that series was doing all the work. If the shifts are zero, the dropped series never had a signal in the first place.
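As a sketch, with `fit_shared_ts` standing in for the whole pipeline above (a hypothetical function that returns the shared pole date for a dict of series):

```python
def leave_one_out(series, fit_shared_ts):
    """Refit with each metric dropped; a large shift in the returned
    t_s flags the series that was carrying the date."""
    base = fit_shared_ts(series)
    return {
        name: fit_shared_ts({k: v for k, v in series.items() if k != name}) - base
        for name in series
    }
```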
The table tells the story plainly: arXiv is doing all the work. Drop it and the date jumps to the search boundary (no remaining series has a finite peak). Drop anything else and nothing moves. They were never contributing to the date, only providing context curves at the shared $t_s$.
Note: Copilot has exactly 2 data points and 2 parameters ($a$ and $b$), so it fits any hyperbola perfectly. Zero RSS, zero influence on $t_s$. It’s along for the ride!
The model says $y \to \infty$ at $t_s$. But what does “infinity” mean for arXiv papers about emergence? It doesn’t mean infinitely many papers get published on a Tuesday in 2034.
It means the model breaks. $t_s$ is the point where the current trajectory’s curvature can no longer be sustained. The system either breaks through into something qualitatively new, or it saturates and the hyperbola was wrong. A phase transition marker, not a physical prediction.
But here’s the part that should unsettle you: the metric that’s actually going hyperbolic is human attention, not machine capability.
MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal. The only curve pointing at a finite date is the count of papers about emergence. Researchers noticing and naming new behaviors. Field excitement, measured memetically.
The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
That’s a very different singularity than the one people argue about.
If $t_s$ marks when the rate of AI surprises exceeds human capacity to process them, the interesting question isn’t what happens to the machines. It’s what happens to us.
And the uncomfortable answer is: it’s already happening.
The labor market isn’t adjusting. It’s snapping. In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI’s potential, not its performance. The displacement is anticipatory. The curve doesn’t need to reach the pole. It just needs to look like it will.
Institutions can’t keep up. The EU AI Act’s high-risk rules have already been delayed to 2027. The US revoked its own 2023 AI executive order in January 2025, then issued a new one in December trying to preempt state laws. California and Colorado are going their own way anyway. The laws being written today regulate 2023’s problems. By the time legislation catches up to GPT-4, we’re on GPT-7. When governments visibly can’t keep up, trust doesn’t erode. It collapses. Global trust in AI has dropped to 56%.
Capital is concentrating at dot-com levels. The top 10 S&P 500 stocks (almost all AI-adjacent) hit 40.7% of index weight in 2025, surpassing the dot-com peak. Since ChatGPT launched, AI-related stocks have captured 75% of S&P 500 returns, 80% of earnings growth, and 90% of capital spending growth. The Shiller CAPE is at 39.4. The last time it was this high was 1999. The money flooding in doesn’t require AI to actually reach superintelligence. It just requires enough people to believe the curve keeps going up.
People are losing the thread. Therapists are reporting a surge in what they’re calling FOBO (Fear of Becoming Obsolete). The clinical language is striking: patients describe it as “the universe saying, ‘You are no longer needed.’” 60% of US workers believe AI will cut more jobs than it creates. AI usage is up 13% year-over-year, but confidence in it has dropped 18%. The more people use it, the less they trust it.
The epistemics are cracking. Less than a third of AI research is reproducible. Under 5% of researchers share their code. Corporate labs are publishing less. The gap between what frontier labs know and what the public knows is growing, and the people making policy are operating on information that’s already obsolete. The experts who testify before Congress contradict each other, because the field is moving faster than expertise can form.
The politics are realigning. TIME is writing about populist AI backlash. Foreign Affairs published “The Coming AI Backlash: How the Anger Economy Will Supercharge Populism.” HuffPost says AI will define the 2026 midterms. MAGA is splitting over whether AI is pro-business or anti-worker. Sanders proposed a data center moratorium. The old left-right axis is buckling under the weight of a question it wasn’t built to answer.
All of this is happening eight years before $t_s$. The social singularity is front-running the technical one. The institutional and psychological disruption doesn’t wait for capabilities to go vertical. It starts as soon as the trajectory becomes legible.
The pole at $t_s$ isn’t when machines become superintelligent. It’s when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.
The date comes from one series. arXiv “emergent” is the only metric with genuine hyperbolic curvature. The other four are better fit by straight lines. The singularity date is really “the date when AI emergence research goes vertical.” Whether field excitement is a leading indicator or a lagging one is the crux of whether this means anything.
The model assumes stationarity. Like assuming the weather will continue to be “changing.” The curve will bend, either into a logistic (the hype saturates) or into something the model can’t represent (genuine phase transition). $t_s$ marks where the current regime can’t continue, not what comes after.
MMLU is hitting its ceiling. Benchmark saturation introduces a leptokurtic compression artifact. MMLU’s low R² reflects this. The hyperbola is the wrong shape for saturating data.
Tokens per dollar is log-transformed (values span five orders of magnitude) and non-monotonic (GPT-4 cost more than 3.5; Opus 4.5 costs more than DeepSeek-R1). The cost curve isn’t smooth: it’s Pareto advances interspersed with “we spent more on this one.”
Five metrics isn’t enough. More series with genuine hyperbolic curvature would make the date less dependent on arXiv alone. A proper study would add SWE-bench, ARC, GPQA, compute purchases, talent salaries. I used five because five fits in a table.
Copilot has two data points. Two parameters, two points, zero degrees of freedom, zero RSS contribution. The sensitivity analysis confirms it doesn’t matter.
The math found one metric curving toward a pole on a specific day at a specific millisecond: the rate at which humans are discovering emergent AI behaviors. The other four metrics are linear. The machines are improving steadily. We are the ones accelerating!
The social consequences of that acceleration (labor displacement, institutional failure, capital concentration, epistemic collapse, political realignment) are not predictions for 2034. They are descriptions of 2026. The singularity in the data is a singularity in human attention, and it is already exerting gravitational force on everything it touches.
I see no reason to let epistemological humility interfere with a perfectly good timer.
See you on the other side!
...
Read the original on campedersen.com »
ECB President Christine Lagarde has called for Europe to break its dependence on American payment infrastructure, warning that every card transaction sends European consumer data to the United States. A coalition of 16 banks thinks it has the answer.
What’s happening? ECB President Christine Lagarde told Irish radio that Europe needs its own digital payment system “urgently,” warning that virtually all European card and mobile payments currently run through non-European infrastructure controlled by Visa, Mastercard, PayPal or Alipay. Days later, on 2 February, the European Payments Initiative (EPI) and the EuroPA Alliance signed a landmark agreement to build a pan-European interoperable payment network covering 130 million users across 13 countries. The system, built around the digital wallet Wero, aims to let Europeans pay and transfer money across borders without touching a single American network.
Every time a European taps a card, pays online or splits a bill with friends, the transaction flows through infrastructure owned and operated by American companies. Visa and Mastercard together process approximately $24 trillion in transactions annually. Card payments account for 56% of all cashless transactions in the EU. And the data — who bought what, where, when and for how much — leaves European jurisdiction every time.
“It’s important for us to have digital payment under our control,” Lagarde told The Pat Kenny Show. “Whether you use a card or whether you use a phone, typically it goes through Visa, Mastercard, PayPal, Alipay. Where are all those coming from? Well, either the US or China.”
The host’s response — “I didn’t realise this” — captured the broader European blind spot. Most consumers have no idea that their payment data routinely exits the EU. In a geopolitical environment where Europe is scrambling to reduce dependence on the United States across defence, energy and trade, payments remain an overlooked vulnerability.
The lesson of Russia sharpened the urgency. When Western sanctions cut Russia off from Visa and Mastercard in 2022, the country’s domestic payments were immediately disrupted. European policymakers asked the obvious question: what would happen if the US decided — or was pressured — to restrict European access to those same networks?
The European Payments Initiative, a consortium of 16 major banks and payment processors including BNP Paribas, Deutsche Bank and Worldline, launched Wero in July 2024 as Europe’s answer. Built on SEPA instant credit transfers, Wero lets users send money using just a phone number — no IBAN, no card, no intermediary.
The numbers so far are encouraging. Wero already has over 47 million registered users in Belgium, France and Germany, has processed over €7.5 billion in transfers, and counts more than 1,100 member institutions. Retail payments went live in Germany at the end of 2025, with merchants including Lidl, Decathlon, Rossmann and Air Europa already accepting Wero online. France and Belgium follow in 2026.
But the real breakthrough came on 2 February, when EPI signed a memorandum of understanding with the EuroPA Alliance — a coalition of national payment systems including Italy’s Bancomat, Spain’s Bizum, Portugal’s MB WAY and the Nordics’ Vipps MobilePay. The deal instantly connects approximately 130 million users across 13 countries, covering roughly 72% of the EU and Norway population. Cross-border peer-to-peer payments launch this year, with e-commerce and point-of-sale payments following in 2027.
“European payment sovereignty is not a vision, but a reality in the making,” said Martina Weimert, CEO of EPI.
Europe has tried this before. The Monnet Project, launched in 2008 by twenty European banks, collapsed in 2012. The original EPI vision itself was scaled back after several founding members withdrew, forcing a pivot from a full card-replacement scheme to a narrower account-to-account model.
The core problem has always been fragmentation. Each EU country developed its own domestic payment solution — Bizum in Spain, iDEAL in the Netherlands, Payconiq in Belgium, Girocard in Germany — but none could work across borders. A Belgian consumer buying from a Dutch retailer still needed Visa or Mastercard. National pride and competing banking interests repeatedly sabotaged attempts at unification.
The network effect compounds the challenge. Merchants accept Visa and Mastercard because consumers carry them. Consumers carry them because merchants accept them. Breaking that loop requires either regulatory force or a critical mass of users large enough to make merchants care — which is precisely what the EuroPA deal attempts to deliver by connecting existing national user bases rather than building from scratch.
Running in parallel is the ECB’s digital euro project, which would create a central bank-backed digital currency usable across the eurozone. EU finance ministers have accelerated discussions on the initiative, though the European Parliament has not yet passed the required legislation. Once approved, the ECB estimates it would need a further two to three years to launch.
EPI is careful to distinguish Wero from the digital euro. Wero is a private-sector initiative; the digital euro is public money. They are designed to complement rather than compete — though the overlap in ambition is obvious. Both exist because Europe’s political establishment has finally accepted that payments sovereignty is as strategically important as energy independence or defence autonomy.
Sceptics have good reasons for doubt. Creating a viable alternative to Visa and Mastercard requires “several billion euros” in investment, according to EPI’s own estimates. Low interchange fees under EU regulation make profitability difficult. Consumer habits are deeply entrenched — and neither Visa nor Mastercard will sit idle while Europe tries to dismantle their most profitable market.
Weimert herself concedes that calling Wero a “challenger” may be premature, describing it as functioning like a startup — albeit one with €500 million in backing and 47 million users already on board.
But the political tailwinds are stronger than they have ever been. The EU’s instant payments regulation, the Capital Markets Union push, the broader drive for European strategic autonomy in a world of tariff wars and great power rivalry — all point in the same direction. The question is no longer whether Europe wants its own payment infrastructure. It is whether it can execute fast enough to matter.
As Lagarde put it: “We have the assets and opportunities to do that ourselves. And if we were to remove the internal barriers that we have set for ourselves in Europe, our economic wealth would increase significantly.”
...
Read the original on europeanbusinessmagazine.com »
Google fulfilled an Immigration and Customs Enforcement subpoena that demanded a wide array of personal data on a student activist and journalist, including his credit card and bank account numbers, according to a copy of an ICE subpoena obtained by The Intercept.
Amandla Thomas-Johnson had attended a protest targeting companies that supplied weapons to Israel at a Cornell University job fair in 2024 for all of five minutes, but the action got him banned from campus. When President Donald Trump assumed office and issued a series of executive orders targeting students who protested in support of Palestinians, Thomas-Johnson and his friend Momodou Taal went into hiding.
Google informed Thomas-Johnson via a brief email in April that it had already shared his metadata with the Department of Homeland Security, as The Intercept previously reported. But the full extent of the information the agency sought — including usernames, addresses, an itemized list of services (including any IP masking services), telephone or instrument numbers, subscriber numbers or identities, and credit card and bank account numbers — was not previously known.
“I’d already seen the subpoena request that Google and Meta had sent to Momodou [Taal], and I knew that he had gotten in touch with a lawyer and the lawyer successfully challenged that,” Thomas-Johnson said. “I was quite surprised to see that I didn’t have that opportunity.”
The subpoena provides no justification for why ICE is asking for this information, except that it’s required “in connection with an investigation or inquiry relating to the enforcement of U. S. immigration laws.” In the subpoena, ICE requests that Google not “disclose the existence of this summons for indefinite period of time.”
Thomas-Johnson, who is British, believes that ICE requested that information to track and eventually detain him — but he had already fled to Geneva, Switzerland, and is now in Dakar, Senegal.
The Electronic Frontier Foundation, which is representing Thomas-Johnson, and the ACLU of Northern California sent a letter to Google, Amazon, Apple, Discord, Meta, Microsoft, and Reddit last week calling on tech companies to resist similar subpoenas in the future from DHS without court intervention. The letter asks the companies to provide users with as much notice as possible before complying with a subpoena to give them the opportunity to fight it, and to resist gag orders that would prevent the tech companies from informing targets that a subpoena was issued.
“Your promises to protect the privacy of users are being tested right now. As part of the federal government’s unprecedented campaign to target critics of its conduct and policies, agencies like DHS have repeatedly demanded access to the identities and information of people on your services,” the letter reads. “Based on our own contact with targeted users, we are deeply concerned your companies are failing to challenge unlawful surveillance and defend user privacy and speech.”
In addition to Thomas-Johnson’s case, the letter refers to other instances in which technology companies provided user data to DHS, including a subpoena sent to Meta to “unmask” the identities of users who documented immigration raids in California. Unlike Thomas-Johnson, users in that case were given the chance to fight the subpoena because they were made aware of it before Meta complied.
“Google has already fulfilled this subpoena,” an attorney for Google told Thomas-Johnson’s lawyer, as The Intercept previously reported. “Production consisted of basic subscriber information.”
The ICE subpoena requested the detailed information linked to Thomas-Johnson’s Gmail account. Thomas-Johnson confirmed to The Intercept that he had attached his bank and credit card numbers to his account to buy apps.
Google did not respond to a request for comment.
Lindsay Nash, a professor at Cardozo Law and a former staff attorney with ACLU Immigrants’ Rights Project, said that by not giving prior notice, Google deprived Thomas-Johnson of his ability to protect his information.
“The problem is that it doesn’t allow the person whose personal information is on the line and whose privacy may be being invaded to raise challenges to the disclosure of that potentially private information,” Nash said. “And I think that’s important to protect rights that they may have to their own information.”
Tech companies’ data sharing practices are primarily governed by two federal laws, the Stored Communications Act, which protects the privacy of digital communications, including emails, and Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive trade practices.
“Under both federal law and the law of every state, you cannot deceive consumers,” said Neil Richards, a law professor at Washington University in St. Louis who specializes in privacy, the internet, and civil liberties. “And if you make a material misrepresentation about your data practices, that’s a deceptive trade practice.”
Whether or not corporations are clear enough with consumers about how they collect and share their data has been litigated for decades, Richards said, referencing the infamous Cambridge Analytica lawsuit brought by the Federal Trade Commission, alleging that the company misled Facebook users about data collection and sharing.
Google’s public privacy policy acknowledges that it will share personal information in response to an “enforceable governmental request,” adding that its legal team will “frequently push back when a request appears to be overly broad or doesn’t follow the correct process.”
According to Google, the company overwhelmingly complied with the millions of requests made by the government for user information over the last decade. Its data also shows that those requests have spiked over the last five years. It’s unclear how many of those users were given notice of those requests ahead of time or after.
Richards said that cases like these emphasize the need for legal reforms around data privacy and urged Congress to amend the Stored Communications Act to require a higher standard before the government can access our digital data. He also said the federal government needs to regulate Big Tech and place “substantive restrictions on their ability to share information with the government.”
It’s hard to know exactly how tech companies are handling our personal data in relation to the government, but there seems to have been a shift in optics, Richards said. “What we have seen in the 12 months since the leaders of Big Tech were there on the podium at the inauguration,” Richards said, “is much more friendliness of Big Tech towards the government and towards state power.”
From Dakar, Thomas-Johnson said that understanding the extent of the subpoena was terrifying but had not changed his commitment to his work.
“As a journalist, what’s weird is that you’re so used to seeing things from the outside,” said Thomas-Johnson, whose work has appeared in outlets including Al Jazeera and The Guardian. “We need to think very hard about what resistance looks like under these conditions… where government and Big Tech know so much about us, can track us, can imprison, can destroy us in a variety of ways.”
This story has been updated to reflect that Thomas-Johnson’s legal team still does not know the full extent of the information that Google provided to ICE, but that Thomas-Johnson said his bank and credit card numbers were attached to his account.
...
Read the original on theintercept.com »
I’ve been running a Discord server for about four and a half years now. When I started streaming during the pando, I had no idea that I would end up building a community. Hell, I’d never even used Discord before. I only knew what it was because I had to stop my students from using it.
Don’t like reading? Click here for the final scores.
But folks kept asking for one. My viewers expected a community hub in which people who found their way to my Twitch streams could find each other, even when I was not live. As the whole streaming thing was itself an experiment in remote learning for me, this seemed a natural extension. So now, I have some mileage on me as a community moderator. I’m intimately familiar with the features Discord offers, and all the arguments against using it. I’m sensitive to them, FOSS dork that I am. I’m also keenly sensitive to the arguments about data loss inside of a forever-chat. In fact, I’m so sensitive to it that I even tried to address the problem in some small way.
But Discord, like all freemium services, is a risk. At any moment their advertising model could become intolerable, or their policy about using my data to train AI could change, or their pricing could get out of control, or some other rent-seeking nonsense common to internet services trying to stretch their profit margin.
I need an exit strategy. Anyone using Discord needs an exit strategy. The trick is to find a landing spot that users will tolerate, and that allows the community to continue in some fashion. Change is loss, and that is excruciatingly true for community platforms. Any switch comes with an attrition rate, meaning the destination better be worth the cost in headcount.
For this reason, and for another project, I’ve been deeply researching Discord alternatives for the better part of a year. Some of my colleagues may think me a bit obsessed about the importance of a “chat app,” but I’m convinced that the communication mechanism for online communities is critical to their success. Choosing a new one could be a matter of life and death for the community. This is a decision we have to get right the first time.
So here, humbly submitted, are my rankings of many of the Discord-like alternatives for maintaining online communities.
I’ve arrived at five broad categories in which an online community platform needs to perform.
Functionality: can it do everything required of a platform for building, organizing, and sustaining a community?
Openness: what access is there to all the tool’s features and code without payment?
Security: how secure are the server and user data against common threats?
Safety: what features are available to moderate the community and protect it from malicious or unwanted behavior?
Decentralization: how reliant is the service on single points of failure?
These will be evaluated on a scale from 1-5, with 5 being the “best” for each criterion.
I’ve done my best to consider multiple use cases and threat models in these scores. I am, however, a flawed, biased meatsack with limited visibility. I may not have predicted your needs precisely. I may have omitted your favorite option. If so, I hope you’ll afford me some grace. I did the best I could.
Oh, and I’m not touching Slack or Teams. Reasons should be obvious.
We’ll start with Discord as a baseline.
As a product, Discord is very, very good. It serves its purpose with an absolute minimum of friction—both from a user and administrator perspective. Even without paying, the features out of the box are well-considered and helpfully implemented. What is the product, anyway? Sometimes it seems like Discord themselves don’t really know. While they bristle at being called a “Slack clone,” there’s a reason many companies (especially tech startups) choose Discord as both their internal team communication tool, as well as their customer engagement tool. Some truly benighted groups even choose to document their product with it.
Whatever Discord thinks it is, the purpose of a system is what it does, and Discord builds online communities. Say what you want about the company, the closed nature, the increasingly-icky ad model, the core of Discord continues to work well for bringing people together in quasi-public online spaces. The medium of real-time text, aka instant messaging, aka IRC-again-but-not-IRC, has become a default, but one not without limitations. For example, what does this do to your heart rate:
Right?! We’ve embraced immediacy at the expense of depth. Also, in Discord’s case, accessibility. Searching Discord is a proper disaster. While messages are more or less permanent, it is by no means easy to find them again, weeks/months/years later.
But let’s get into the criteria before this becomes a treatise on the nature of the modern web.
As mentioned, Discord is highly functional—for what it does. But its limitations do start to grate as time goes on. Online communities have a predictable lifecycle, in which the excitement of the early days is well-served by real-time chat. The memes are flying; people are excited to meet each other; the future holds boundless possibilities. The space will categorize and fragment, trying to organize the chaos. Over time, most of the messages come from a core group of contributors, with more occasional arrivals and questions from newcomers. This is as it should be. But what happens to the history of that community as it heads up the scroll? How does the past usefully inform the future?
Discord has made some affordances for this with “Forum” type channels. Even so, the past is hard to explore.
Discord is not open, so not much to say on that front.
Discord messages are not end-to-end encrypted. Pretty famously, Discord will give up your data for law enforcement. Although they’ve recently added end-to-end encryption for video and audio, the implementation is clunky. And of course, all the text data in a Discord server is unencrypted. But hey, at least they support MFA?
Safety, in the sense of “Trust and Safety,” may be Discord’s greatest strength. I have greatly appreciated all the moderation tools at my disposal. Even a modestly sized server like mine (~3000 users) would be impossible to manage without automatic word catching, granular permissions on channels and roles, and multiple response options including timeouts, kicks, and bans. Discord also has a very involved onboarding flow that makes certain there is an agreement to community rules before users can participate.
And need we even mention decentralization here? If Discord fails, your community goes dark.
Best for: communities who value secrecy above all.
I love Signal. Like, a lot. I’m a daily user and a donor. I’ve even convinced most of my friends and family to use it as our primary mode of text communication. And yes, I’ve organized a community with it—one for which privacy was (at the time) of paramount importance. I am deeply familiar with all advantages and drawbacks of Signal.
As a secure chat, Signal does just fine. Well, better than fine from a cryptography perspective. It is the gold standard in end-to-end encrypted communications for good reason. But the strongest cryptography in the world is meaningless for a community if the platform is unusable. Fortunately, that’s not the case for Signal. Emoji reactions, stickers, (some) formatted text, and even voice/video calls make it an indispensable tool for secure communications that feel familiar and feature-filled enough for normies. Nobody will be totally lost moving from another chat app to Signal.
If you’re looking for nothing but chat, Signal is fantastic. But many aspects of community-building online are simply unavailable here. To start, there are only group chats. There is no conversation threading or channels to keep conversations organized. You can have multiple chats, but that gets messy quickly.
I can’t even pin posts. In fact, post searchability is a limited feature by design. Most group chats enable disappearing messages. That’s great to prevent incriminating evidence from piling up; it’s terrible for reviewing what a community discussed previously.
Also absent: granular roles in each chat, or anything resembling moderation tools. As an admin, I can only ban users for unwanted behavior. I can neither automatically prevent harassment nor provide a more measured response than the banhammer.
I should mention that almost all these tradeoffs are accepted limitations in service of Signal’s primary objectives.
On the point of decentralization, Signal has none. As Meredith Whittaker recently wrote, all Signal app traffic flows through the same cloud infrastructure, much of which depends on AWS.
If your community’s threat model is such that eliminating all possible points of evidence collection against you matters above all else, Signal is the clear winner. Maintaining that level of operational security naturally comes at the cost of some other creature comforts a community could come to covet.
I didn’t set out to alliterate the hell out of that sentence, but I didn’t stop it either.
Best for: communities who value independence over all, with security/privacy a runner-up.
Oh, Matrix. You are the football that I, in my zigzag-stripe shirt, keep trying to kick. In theory, the Matrix protocol and Element, its flagship client, should be the ideal for decentralized, encrypted communications. Using Element feels a whole lot like using Discord. Heck, it can even bridge communications from Discord and other platforms. Sadly, as time goes on, the nicks from the rough edges start to accumulate.
Before going further, we need to define some terms. Matrix is the federated, encrypted messaging protocol published and maintained by the Matrix Foundation. Synapse is their “reference implementation” server technology written in Python. Synapse is the most common way folks start their own Matrix servers. There are other server implementations, now including “Synapse Pro,” which I guess is a partial rewrite of Synapse in Rust? Element is the first-party client that users would use to connect to Matrix. They need an account on a server, and of course matrix.org is the flagship Matrix server where the vast majority of users have their accounts. But you can point Element at any Matrix server to log in, as long as you have an account on that server.
Confused yet? If users are unwilling to select a Mastodon server, do you think they’d be willing to put up with this?
Ah, but I get ahead of myself. Let’s start with what’s good.
Matrix uses a similar end-to-end cryptography scheme to Signal. “Rooms” (chats, channels) are not encrypted by default, but they can be made so. There have been noted issues with the previous cryptography library used by Element, but the newer vodozemac library is in much better shape. Of course, not all Matrix clients use the new hotness.
A given Matrix server can create multiple rooms (channels), and even group them into “spaces” such that they appear quite similar to Discord servers.
Inside the rooms, things feel familiar. We have threads, emoji reacts, and message search (sorta). On some clients (but not Element), there is the possibility of custom emoji.
And that’s…it. Element promises more, like native video conferencing, but heaven help you if you’re trying to self-host it. It is technically possible, but by no means simple.
“Technically possible, but by no means simple” aptly sums up the entire Matrix experience, actually.
I ran a private Matrix server for about a year and a half. Why private? In two public Matrix rooms I had joined—including the room for Synapse admins—I experienced a common attack in which troll accounts spam the room with CSAM material. Horrible, but not just for the participants and admins in the room. Through the magic of federation, every server who has a user participating in the room now has a copy of the CSAM material, and has to take action to remove it. This requires a manual curl request on the server itself, because Synapse has an appalling lack of moderation tools. It’s so bad that, without third-party tooling, you can’t even ban a user outright from a server; you have to manually ban them from every single room.
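For a sense of what that cleanup looks like, here is a hedged sketch of the admin-API call for quarantining a room’s media, written in Python rather than raw curl. The endpoint path is my recollection of the Synapse admin API; the homeserver, room ID, and token are placeholders, so verify against the docs for your Synapse version:

```python
import requests
from urllib.parse import quote

HOMESERVER = "https://matrix.example.org"      # placeholder homeserver
ROOM_ID = "!abc123:matrix.example.org"         # placeholder room ID
ADMIN_TOKEN = "..."                            # a server admin's access token

resp = requests.post(
    f"{HOMESERVER}/_synapse/admin/v1/room/{quote(ROOM_ID)}/media/quarantine",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # reports how many media items were quarantined
```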
Then came September 2, 2025. The outage of matrix.org caused by drive failures was not an indictment of Matrix’s database management or recovery process—in fact, I was quite impressed with their response. But it did put the lie to Matrix’s decentralization for me. Almost none of my friends could use Matrix, even though I was hosting my own server. The onboarding pipeline (especially via Element) is so focused on the flagship server, I daresay it comprises the plurality of Matrix accounts. It’s not easy to get any statistics for all Matrix users, but that is my guess. How “decentralized” is that, really? Just because something can be decentralized doesn’t make it so.
I’m probably a little too close to this one. I so badly wanted Matrix to work, and I tried to make it work for my purposes for a long time. Ultimately, the pain points overcame the benefits. But if you care most about an intersection of message encryption, federation, and decentralization, and you’re willing to put in quite a lot of admin time, Matrix can be a viable community chat platform.
Best for: communities that want a smooth Slack-like experience and are willing to pay for independence
What if you could self-host Slack? That’s basically the Rocket.Chat experience. It’s slick, easy to get set up, and loaded with integrations. All of this comes, as you might expect, at a price. While there is an “open source” Community Edition, its featureset is limited, and you may quickly find yourself looking at the paid plans for additional features or support. Rocket.Chat is one of several platforms that follow this freemium model. I don’t really begrudge them this approach, but it can be frustrating for a community just finding its feet. To their credit, they do offer discounts for open source projects, not-for-profits, and other organizations on a per-request basis.
Rocket.Chat does support end-to-end encrypted communications. Key management can be a little clunky, but I was impressed it had the feature at all.
Be aware, however, that these centrally-managed services will of course allow administrators to audit messages. That is a documented part of the moderation flow for Rocket.Chat. If you demand anonymity, or an inability for administrators to view your messages (what are you doing in that community?), Rocket.Chat might not be right for you.
I’ll quickly mention why I gave it a score of 3 on decentralization. Seems a bit high, right? Until recently, Rocket.Chat supported Matrix federation. Since October 2025, it has pursued a native federation scheme that would allow separate Rocket.Chat instances to share rooms and DMs across server boundaries. This, although not open source, is extremely compelling.
I really enjoyed my experimentation with Rocket.Chat, and found myself thinking seriously about it as an alternative to where I was. The cost is just steep.
Best for: A split between forums and real-time chat
I’ve been playing with Zulip for a bit now, and I still don’t really know what to make of it. From one perspective, it has a bit of an identity crisis, unsure of whether it’s a forum or a chat platform. From another perspective, this dual identity is its greatest strength: real-time when you want it, asynchronous when you don’t.
Zulip is self-hostable, with some caveats. As the plans and pricing detail, anything beyond 10 users starts costing some cash. It adds up quickly. Seemingly everything can be done in a self-hosted manner, but you’re at the mercy of some truly byzantine documentation.
While there is great functionality to be found, it comes at a rather steep price for organizations of any size—whether administrative overhead, or just plain cash for the managed services. Although to their credit, they do offer a community plan with many of those higher-tier features available for qualifying organizations.
One feature you won’t find anywhere is end-to-end encryption. The developers seem rather against the idea. Multi-factor authentication must be enabled in the config files, not the admin frontend—hardly ideal.
Unless I’m missing it, there do not appear to be any serious content moderation tools in Zulip. The community moderation toolkit is, in my opinion, the barest of essentials. Nearly all of these capabilities are reactive, not proactive. It seems the expectation is good-faith participation, with those agreements and guarantees handled elsewhere. Having been on the wrong end of malicious intent, I don’t feel safe enough with these tools.
Lastly, on decentralization, it’s mostly a miss. Even for self-hosted plans, anything above the free tier requires a zulip.com account for plan management. And federation? Forget about it. Although every Zulip server can technically host multiple Zulip instances, they don’t interact with one another.
If anything, writing this overview has left me more confused about Zulip than when I began. I just don’t know where it fits, or who can afford these prices for a growing community.
Best for: Fortune 100s and governments
Take a look at the front page of the Mattermost website, and you’ll get an idea of the kind of organization they expect to be using this thing. Odds are, your nascent online community ain’t that. While the software may superficially look like some of these others, its intention is entirely other. Community building is not what’s going on here. Rather, Mattermost’s objective is highly-focused, integrated workflows that involve human communication alongside machine automation. Business operations are what…matter most.
Mattermost describes itself as “Open core,” and the core is…rather tiny. Even when installing the self-hosted version, you’ll soon need a rather expensive license for real work. Starting at $10/user is a clear indicator of the intended customer base. It ain’t me, that’s for sure.
Mattermost prides itself on a certain kind of security—specifically, the regulatory kind. Configurations for all manner of compliance regimes are provided in the documentation. Normal security is present as well, including MFA. Not so much end-to-end encryption, although mention is made of encrypting the PostgreSQL database. That’s novel, although not a solution to the problem addressed by E2EE.
I honestly don’t think Mattermost’s developers are capable of imagining a positive argument for an audit-resistant application. This thing is designed for monitoring user activity six ways from Sunday.
Consequently, “safety” in the way we’ve defined it here is absent from Mattermost’s conception of the universe. If you’re logging on to a Mattermost server, about a thousand other trust mechanisms are in place to guarantee you won’t act like a doofus on this app.
Hardly a point to mentioning decentralization here, beyond the possibility of self-hosting. Ultimately though, you only get what your license key allows, and since the server is only open core, Mattermost itself is quite the point of failure.
Best for: anything but real-time chat, really.
I’m gonna be honest: I kind of love Discourse. I’m not sure I have a reason to deploy it, but I want to. Everything Joan Westenberg writes in this piece in praise of Discourse resonates with me. Community for the long-haul? Transparency in governance? Built-in systems for establishing human trust?
But Discourse has one significant difference from everything else on this list: it is primarily a forum, not a real-time chat app. I’m not saying that’s a bad thing, necessarily, but it sure is different. If your community expects instantaneous communication, Discourse may be a big adjustment. Or it might not be sufficient on its own for your needs.
But what does it do well? Forums! It’s very easy to navigate categories and topics. The UI provides clear signals for when something happened. Oh, and search is simple.
Maybe the best way to think of Discourse is as an anti-Discord. It’s everything Discord isn’t: asynchronous, open source, and self-hostable.
Discourse is 100% open source. I’m running it right now in my homelab, with access to all the plugins and features I’d expect, costing me only the time it took to install.
I was additionally quite impressed with the moderation tools. Not only are there plenty of tools to track user activity, but the moderation decisions are public by default. This is a good thing! The community can hold its leaders accountable for upholding their end of the bargain: to act in good faith in support of the community.
One area in which it falters a bit is, of course, end-to-end encryption. Very few of these tools enable it, and when they do, it can be clunky. It’s entirely possible that the right option for a community is one of these and Signal for sensitive, out-of-band communications.
If you start to look around, you’ll notice Discourse fora everywhere. There’s a good reason for that! The software is rock solid for what it is. And maybe your community needs its depth of features more than it needs instantaneous messaging.
Best for: Appreciating how much work it takes to make one of these work
Stoat, née Revolt, was meant to be an open source Discord alternative. Recently, they received a cease-and-desist regarding the name Revolt, and renamed to a…weasel.
Anyway this thing is so far from being ready for prime time, I only include it here to call out the project. I wish them the best and hope for good things, especially since you can self-host the server. But a lack of stability and features prevent this from being useful for anything beyond experimentation. Maybe someday.
The Tool is Not the Community
Choosing a platform on which to build a community is just the beginning. It’s vitally important, yet insufficient to a community’s success. Tools do not make a culture; the people engaging on it do. Most of my time building the culture of TTI has not been a technical endeavor. What we have—and I think it’s pretty special—has little to do with Discord’s featureset. It just happens to be where the people are. The options presented to you here allow you to seek a path that aligns with your objectives, principles, and needs at a purely mechanical level. The rest depends on the human element.
...
Read the original on taggart-tech.com »
I wrote my first line of code in 1983. I was seven years old, typing BASIC into a machine that had less processing power than the chip in your washing machine. I understood that machine completely. Every byte of RAM had a purpose I could trace. Every pixel on screen was there because I’d put it there. The path from intention to result was direct, visible, and mine.
Forty-two years later, I’m sitting in front of hardware that would have seemed like science fiction to that kid, and I’m trying to figure out what “building things” even means anymore.
This isn’t a rant about AI. It’s not a “back in my day” piece. It’s something I’ve been circling for months, and I think a lot of experienced developers are circling it too, even if they haven’t said it out loud yet.
My favourite period of computing runs from the 8-bits through to about the 486DX2-66. Every machine in that era had character. The Sinclair Spectrum with its attribute clash. The Commodore 64 with its SID chip doing things the designers never intended. The NES with its 8-sprite-per-scanline limit that made developers invent flickering tricks to cheat the hardware. And the PC — starting life as a boring beige box for spreadsheets, then evolving at breakneck pace through the 286, 386, and 486 until it became a gaming powerhouse that could run Doom. You could feel each generation leap. Upgrading your CPU wasn’t a spec sheet exercise — it was transformative.
These weren’t just products. They were engineering adventures with visible tradeoffs. You had to understand the machine to use it. IRQ conflicts, DMA channels, CONFIG.SYS and AUTOEXEC.BAT optimisation, memory managers — getting a game to run was the game. You weren’t just a user. You were a systems engineer by necessity.
And the software side matched. Small teams like id Software were going their own way, making bold technical decisions because nobody had written the rules yet. Carmack’s raycasting in Wolfenstein, the VGA Mode X tricks in Doom — these were people pushing against real constraints and producing something genuinely new. Creative constraints bred creativity.
Then it professionalised. Plug and Play arrived. Windows abstracted everything. The Wild West closed. Computers stopped being fascinating, cantankerous machines that demanded respect and understanding, and became appliances. The craft became invisible.
But it wasn’t just the craft that changed. The promise changed.
When I started, there was a genuine optimism about what computers could be. A kid with a Spectrum could teach themselves to build anything. The early web felt like the greatest levelling force in human history. Small teams made bold decisions because nobody had written the rules yet.
That hope gave way to something I find genuinely distasteful. The machines I fell in love with became instruments of surveillance and extraction. The platforms that promised to connect us were really built to monetise us. The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks.
The thing I loved changed, and then it was put to work doing things I’m not proud to be associated with. That’s a different kind of loss than just “the tools moved on.”
But I adapted. That’s what experienced developers, human beings, do.
Over four decades I’ve been through more technology transitions than I can count. New languages, new platforms, new paradigms. CLI to GUI. Desktop to web. Web to mobile. Monoliths to microservices. Tapes, floppy discs, hard drives, SSDs. JavaScript frameworks arriving and dying like mayflies.
Each wave required learning new things, but the core skill transferred. You learned the new platform, you applied your existing understanding of how systems work, and you kept building. The tool changed; the craft didn’t. You were still the person who understood why things broke, how systems composed, where today’s shortcut became next month’s mess.
I’ve written production code in more languages than some developers have heard of. I’ve shipped software on platforms that no longer exist. I’ve chased C-beams off the shoulder of Orion. And every time the industry lurched in a new direction, the experience compounded. You didn’t start over. You brought everything with you and applied it somewhere new.
That’s the deal experienced developers made with the industry: things change, but understanding endures.
I say that knowing how often those words have been wrong throughout history. But hear me out.
Previous technology shifts were “learn the new thing, apply existing skills.” AI isn’t that. It’s not a new platform or a new language or a new paradigm. It’s a shift in what it means to be good at this.
I noticed it gradually. I’d be working on something — building a feature, designing an architecture — and I’d realise I was still doing the same thing I’d always done, just with the interesting bits hollowed out. The part where you figure out the elegant solution, where you wrestle with the constraints, where you feel the satisfaction of something clicking into place — that was increasingly being handled by a model that doesn’t care about elegance and has never felt satisfaction.
I’m not typing the code anymore. I’m reviewing it, directing it, correcting it. And I’m good at that — 42 years of accumulated judgment about what works and what doesn’t, what’s elegant versus what’s expedient, how systems compose and where they fracture. That’s valuable. I know it’s valuable. But it’s a different kind of work, and it doesn’t feel the same.
The feedback loop has changed. The intimacy has gone. The thing that kept me up at night for decades — the puzzle, the chase, the moment where you finally understand why something isn’t working — that’s been compressed into a prompt and a response. And I’m watching people with a fraction of my experience produce superficially similar output. The craft distinction is real, but it’s harder to see from the outside. Harder to value. Maybe harder to feel internally.
Here’s the part that makes me laugh, darkly.
I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.
They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
But sure. AI is the moment they lost track of what’s happening.
The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack. AI is just the layer that made the pretence impossible to maintain.
The difference is: I remember what it felt like to understand the whole machine. I’ve had that experience. And losing it — even acknowledging that it was lost long before AI arrived — is a kind of grief that someone who never had it can’t fully feel.
I don’t want to be dishonest about this. There’s a version of this post where I tell you that experience is more valuable than ever, that systems thinking and architectural judgment are the things AI can’t replace, that the craft endures in a different form.
And that’s true. When I’m working on something complex — juggling system-level dependencies, holding a mental model across multiple interacting specifications, making the thousand small decisions that determine whether something feels coherent or just works — I can see how I still bring something AI doesn’t. The taste. The judgment. The pattern recognition from decades of seeing things go wrong.
AI tools actually make that kind of thinking more valuable, not less. When code generation is cheap, the bottleneck shifts to the person who knows what to ask for, can spot when the output is subtly wrong, and can hold the whole picture together. Typing was never the hard part.
But I’d be lying if I said it felt the same. It doesn’t. The wonder is harder to access. The sense of discovery, of figuring something out through sheer persistence and ingenuity — that’s been compressed. Not eliminated, but compressed. And something is lost in the compression, even if something is gained.
I turned 50 recently. Four decades of intensity, of crafting and finding satisfaction and identity in the building.
And now I’m in what I’ve started calling a fallow period. Not burnout exactly. More like the ground shifting under a building you’d always assumed, however much it changed, had a kind of permanence, and trying to figure out where the new foundation is.
I don’t have a neat conclusion. I’m not going to tell you that experienced developers just need to “push themselves up the stack” or “embrace the tools” or “focus on what AI can’t do.” All of that is probably right, and none of it addresses the feeling.
The feeling is: I gave 42 years to this thing, and the thing changed into something I’m not sure I recognise anymore. Not worse, necessarily. Just different. And different in a way that challenges the identity I built around it and doesn’t satisfy in the way it did.
I suspect a lot of developers over 40 are feeling something similar and not saying it, because the industry worships youth and adaptability and saying “this doesn’t feel like it used to” sounds like you’re falling behind.
I’m not falling behind. I’m moving ahead, taking advantage of the new tools, building faster than ever, and using these tools to help others accelerate their own work. I’m creating products I could only have dreamt of a few years ago. But at the same time I’m looking at the landscape, trying to figure out what building means to me now. The world’s still figuring out its shape too. Maybe that’s okay.
Maybe the fallow period is the point. Not something to push through, but something to be in for a while.
I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic. I’m fifty now, and the magic is different, and I’m learning to sit with that.
...
Read the original on www.jamesdrandall.com »
...
Read the original on arxiv.org »
We have raised a $200M Series C, and yes, you are permitted a double take: didn’t we just raise a $100M Series B? And aren’t we the ones that are especially candid about the perils of raising too much money?

Well, yes, on both fronts, so let us explain a little. First, we have the luxury of having achieved real product-market fit: we are making a product that people want to buy. This takes on additional dimensions when making something physical: with complexities like manufacturing, inventory, cash-conversion, and shifting supply chains, product-market fit implies getting the unit economics of the business right. All of this is a long way of saying: we did not (and do not) need to raise capital to support the business.

So if we didn’t need to raise, why seek the capital? Well, we weren’t seeking it, really. But our investors, seeing the business take off, were eager to support it. And we, in turn, were eager to have them: they were the ones, after all, who joined us in taking a real leap when it felt like there was a lot more risk on the table. They understood our vision for the company and shared our love for customers and our desire to build a singular team. They had been with us in some difficult moments; they know and trust us, as do we them. So being able to raise a Series C purely from our existing investors presented a real opportunity.

Still, even from investors that we trust and with a quick close, if the business doesn’t need the money, does it make sense to raise? We have always believed that our biggest challenge at Oxide was time — and therefore capital. We spelled this out in our initial pitch deck from 2019:

Six years later, we stand by this, which is not to minimize any of those challenges: the technical challenges were indeed hard; we feel fortunate to have attracted an extraordinary team; and we certainly caught some lucky breaks with respect to the market. With this large Series C, we have entirely de-risked capital going forward, which in turn assures our independence.

This last bit is really important, because any buyer of infrastructure has had their heart broken countless times by promising startups that succumbed to acquisition by one of the established players that they were seeking to disrupt. The serial disappointments leave a refreshing bluntness in their wake, and it’s not uncommon for us to be asked directly: “How do I know you won’t be bought?”

Our intent in starting Oxide was not to be an acquisition target but rather to build a generational company; this is our life’s work, not a means to an end. With our Series C, customers don’t have to merely take our word for it: we have the capital to assure our survival into the indefinite future. If our Series B left us with confidence in achieving our mission, our Series C leaves us with certainty: we’re going to kick butt, have fun, not cheat (of course!), love our customers — and change computing forever.
...
Read the original on oxide.computer »
Meta and Google-owned YouTube were accused Monday of pushing highly addictive apps on children as a landmark social media trial began in earnest in a California court.
The blockbuster trial in front of a Los Angeles jury could establish a legal precedent on whether the social media juggernauts deliberately designed their platforms to lead to addiction in children.
The proceedings are expected to see Meta chief Mark Zuckerberg on the stand next week and Instagram boss Adam Mosseri in the courtroom as early as Wednesday. In addition to Instagram, Meta’s platforms include Facebook and WhatsApp.
“This case is about two of the richest corporations in history who have engineered addiction in children’s brains,” plaintiffs’ attorney Mark Lanier told the jury in his opening statement.
“This case is as easy as A-B-C,” Lanier said as he stacked children’s toy blocks bearing the letters.
He contended the A was for addicting, the B for brains and the C for children.
“They don’t only build apps; they build traps,” Lanier said, saying Meta and YouTube pursued “addiction by design,” making his arguments using props like a toy Ferrari and a mini slot machine.
Meta attorney Paul Schmidt countered in his opening remarks to the jury that the evidence will show it was problems in the plaintiff’s family and real-world bullying, not Instagram, that took a toll on her self-esteem, body image and happiness.
“If you took Instagram away and everything else was the same in Kaley’s life, would her life be completely different, or would she still be struggling with the same things she is today?” Schmidt asked, pointing out that an Instagram addiction is never mentioned in the medical records included in the evidence.
The trial before Judge Carolyn Kuhl focuses on allegations that a 20-year-old woman identified as Kaley G. M. suffered severe mental harm because she became addicted to social media as a child.
The case is being treated as a bellwether proceeding because its outcome could set the tone, and the level of payouts to successful plaintiffs, for a tidal wave of similar litigation across the United States.
Social media firms are accused in hundreds of lawsuits of leading young users to become addicted to content and suffer from depression, eating disorders, psychiatric hospitalization and even suicide.
Lawyers for the plaintiffs are borrowing strategies used in the 1990s and 2000s against the tobacco industry, which faced a similar onslaught of lawsuits arguing that companies knowingly sold a harmful product.
Lanier told the jurors that Kaley began watching YouTube at six years old because the company never told her mother “the goal was viewer addiction,” or that toddlers as young as two were being targeted despite “critical” risk of addiction.
“This is the first time that a social media company has ever had to face a jury for harming kids,” Social Media Victims Law Center founder Matthew Bergman, whose team is involved in more than 1,000 such cases, told AFP.
Internet titans have argued that they are shielded by Section 230 of the US Communications Decency Act, which frees them from responsibility for what social media users post.
However, this case argues that those firms are culpable for business models designed to hold people’s attention and to promote content that can harm their mental health.
The plaintiffs said they would call expert witnesses who will argue that young people’s brains are not yet developed enough to withstand the power of the algorithms being flung at them on Instagram and YouTube.
The company pointed to recent efforts to provide more safeguards for young people, adding that “we’re always working to do better.”
Jose Castaneda, a YouTube spokesperson, said “the allegations in these complaints are simply not true.”
Lawyers for YouTube are to present opening remarks to the jury on Tuesday.
Snapchat and TikTok were named as defendants in the suit, but struck settlement deals before the start of the trial. The terms were not disclosed.
Lawsuits, including some brought by school districts, accusing social media platforms of practices endangering young users are making their way through federal court in northern California and state courts across the country.
A separate lawsuit accusing Meta of putting profit over the well-being of young users was also getting underway in New Mexico on Monday.
...
Read the original on techxplore.com »
Streaming speech recognition running natively and in the browser. A pure Rust implementation of Mistral’s Voxtral Mini 4B Realtime model using the Burn ML framework.
The Q4 GGUF quantized path (2.5 GB) runs entirely client-side in a browser tab via WASM + WebGPU. Try it live.
# Download model weights (~9 GB)
uv run --with huggingface_hub \
  hf download mistralai/Voxtral-Mini-4B-Realtime-2602 --local-dir models/voxtral

# Transcribe an audio file (f32 SafeTensors path)
cargo run --release --features "wgpu,cli,hub" --bin voxtral-transcribe -- \
  --audio audio.wav --model models/voxtral

# Or use the Q4 quantized path (~2.5 GB)
cargo run --release --features "wgpu,cli,hub" --bin voxtral-transcribe -- \
  --audio audio.wav --gguf models/voxtral-q4.gguf --tokenizer models/voxtral/tekken.json

# Build WASM package
wasm-pack build --target web --no-default-features --features wasm

# Generate self-signed cert (WebGPU requires secure context)
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout /tmp/voxtral-key.pem -out /tmp/voxtral-cert.pem \
  -days 7 -nodes -subj "/CN=localhost"
# Start dev server
bun serve.mjs
Open https://localhost:8443, accept the certificate, and click Load from Server to download the model shards. Record from your microphone or upload a WAV file to transcribe.
Hosted demo on HuggingFace Spaces if you want to skip local setup.
The upstream mistral-common library left-pads audio with 32 silence tokens (at 12.5 Hz). After the mel/conv/reshape pipeline, this covers only 16 of the 38 decoder prefix positions with silence — the remaining 22 contain actual audio. The f32 model handles this fine, but Q4_0 quantization makes the decoder sensitive to speech content in the prefix: audio that starts immediately with speech (mic recordings, clips with no leading silence) produces all-pad tokens instead of text.
The left padding is increased to 76 tokens, which maps to exactly 38 decoder tokens of silence and covers the full streaming prefix. See src/audio/pad.rs for details.
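As a rough sketch of what that padding amounts to (the function name and the 16 kHz input sample rate are assumptions for illustration, not the actual pad.rs code):

// Illustrative only: prepend enough silence that the decoder's full
// 38-token streaming prefix maps to padding rather than speech content.
const SILENCE_TOKENS: usize = 76;     // raised from mistral-common's 32
const TOKEN_RATE_HZ: f64 = 12.5;      // audio tokens per second
const SAMPLE_RATE_HZ: f64 = 16_000.0; // assumed input sample rate

fn left_pad_silence(samples: &[f32]) -> Vec<f32> {
    let pad_samples = (SILENCE_TOKENS as f64 / TOKEN_RATE_HZ * SAMPLE_RATE_HZ) as usize;
    let mut padded = vec![0.0_f32; pad_samples];
    padded.extend_from_slice(samples);
    padded
}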
No sync GPU readback — all tensor reads use into_data_async().await.
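A minimal sketch of that pattern, assuming Burn's tensor API as referenced above (the wrapper function itself is hypothetical):

use burn::tensor::{backend::Backend, Tensor};

async fn read_logits<B: Backend>(logits: Tensor<B, 1>) -> Vec<f32> {
    // Awaiting the transfer avoids a blocking device sync, which would
    // stall the single-threaded WASM main loop under WebGPU.
    logits.into_data_async().await.to_vec::<f32>().expect("f32 tensor")
}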
# Native (default features: wgpu + native-tokenizer)
cargo build --release

# With all features
cargo build --release --features "wgpu,cli,hub"

# WASM
wasm-pack build --target web --no-default-features --features wasm

# Unit + integration tests (requires GPU for full suite)
cargo test --features "wgpu,cli,hub"

# Lint
cargo clippy --features "wgpu,cli,hub" -- -D warnings
cargo clippy --no-default-features --features wasm --target wasm32-unknown-unknown -- -D warnings
# E2E browser test (requires Playwright + model shards)
bunx playwright test tests/e2e_browser.spec.ts
GPU-dependent tests (model layer shapes, Q4 matmul, WGSL shader correctness) are skipped in CI since GitHub Actions runners lack a GPU adapter. These tests run locally on any machine with Vulkan, Metal, or WebGPU support.
The GGUF file must be split into shards of 512 MB or less to stay under the browser’s ArrayBuffer limit:
split -b 512m models/voxtral-q4.gguf models/voxtral-q4-shards/shard-
The dev server and E2E test discover shards automatically from models/voxtral-q4-shards/.
Coming soon: accuracy (WER) and inference speed benchmarks across native and browser targets.
...
Read the original on github.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.