10 interesting stories served every morning and every evening.




1 814 shares, 106 trendiness

The Singularity will Occur on a Tuesday

Everyone in San Francisco is talking about the singularity. At dinner parties, at coffee shops, at the OpenClaw meetup where Ashton Kutcher showed up for some reason. The conversations all have the same shape: someone says it’s coming, someone says it’s hype, and nobody has a number.

This seems like the wrong question. If things are accelerating (and they measurably are), the interesting question isn’t whether. It’s when. And if it’s accelerating, we can calculate exactly when.

I collected five real metrics of AI progress, fit a hyperbolic model to each one independently, and found the one with genuine curvature toward a pole. The date has millisecond precision. There is a countdown.

Five metrics, chosen for what I’m calling their anthropic significance (anthropic here in the Greek sense, “pertaining to humans,” not the company, though they appear in the dataset with suspicious frequency):

MMLU: benchmark accuracy, the standard capability score

Tokens per dollar: cost collapse of intelligence (log-transformed, because the Gemini Flash outlier spans 150× the range otherwise)

Release intervals: time between major model releases (inverted, so shorter = better)

arXiv “emergent”: the count of AI papers about emergence

Copilot: adoption figures (only two data points)

Each metric is normalized to [0, 1]. Release intervals are inverted (shorter = better). Tokens per dollar is log-transformed before normalizing (the raw values span five orders of magnitude; without the log, Gemini Flash at 2.5M tokens/$ dominates the fit and everything else is noise). Each series keeps its own scale, no merging into a single ensemble.
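A minimal sketch of that preprocessing in Python, assuming plain min-max scaling to [0, 1] (the function name and flags are mine, not the post’s):

```python
import numpy as np

def normalize(y, invert=False, log=False):
    """Per-series preprocessing: optional log10 (tokens/$ spans ~5 orders
    of magnitude), optional inversion (shorter release intervals = better),
    then min-max scaling to [0, 1]. Each series is scaled independently."""
    y = np.asarray(y, dtype=float)
    if log:
        y = np.log10(y)
    if invert:
        y = -y
    return (y - y.min()) / (y.max() - y.min())
```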

An exponential approaches infinity only as t → ∞. You’d be waiting forever. Literally.

We need a function that hits infinity at a finite time. That’s the whole point of a singularity: a pole, a vertical asymptote, the math breaking:

y(t) = a / (t_s − t) + b

As t → t_s, the denominator goes to zero. y → ∞. Not a bug. The feature.

Polynomial growth (t^k for fixed k) never reaches infinity at finite time. You could wait until heat death and t^k would still be finite. Polynomials are for people who think AGI is decades away.

Exponential growth reaches infinity at t = ∞. Technically a singularity, but an infinitely patient one. Moore’s Law was exponential. We are no longer on Moore’s Law.

Hyperbolic growth is what happens when the thing that’s growing accelerates its own growth. Better AI → better AI research tools → better AI → better tools. Positive feedback with supralinear dynamics. The singularity is real and finite.
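To make the pole concrete, here is a tiny sketch of the hyperbolic form the rest of the post fits. The parameter values are illustrative, not fitted:

```python
import numpy as np

def hyperbola(t, a, b, t_s):
    """Hyperbolic growth: finite at every t before t_s, with a vertical
    asymptote (a pole) exactly at t = t_s."""
    return a / (t_s - t) + b

t = np.linspace(2020.0, 2033.9, 5)
print(hyperbola(t, a=1.0, b=0.0, t_s=2034.0))
# The values blow up as t approaches the pole at 2034 -- unlike t**k or
# np.exp(t), which stay finite at every finite t.
```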

The procedure is straightforward, which should concern you.

The model fits a separate hyperbola to each metric:

y_i(t) = a_i / (t_s − t) + b_i

Each series gets its own scale a_i and offset b_i. The singularity time t_s is shared. MMLU scores and tokens-per-dollar have no business being on the same y-axis, but they can agree on when the pole is.

For each candidate t_s, the per-series fits are linear in a_i and b_i. The question is: which t_s makes the hyperbola fit best?
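A hedged sketch of that inner step: with t_s pinned, the model is linear in (a, b), so ordinary least squares on the regressor 1/(t_s − t) solves it. The function name is mine:

```python
import numpy as np

def fit_series_at(t_s, t, y):
    """Fit y = a/(t_s - t) + b for one series, with the pole t_s fixed.

    Once t_s is pinned, the model is linear in (a, b): a two-column
    least-squares problem. Returns (a, b, rss). Assumes all t < t_s."""
    X = np.column_stack([1.0 / (t_s - t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    a, b = coef
    rss = float(np.sum((y - X @ coef) ** 2))
    return a, b, rss
```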

Here’s the thing nobody tells you about fitting singularities: most metrics don’t actually have one. If you minimize total RSS across all series, the best t_s is always at infinity. A distant hyperbola degenerates into a line, and lines fit noisy data just fine. The “singularity date” ends up being whatever you set as the search boundary. You’re finding the edge of your search grid, not a singularity.

So instead, we look for the real signal. For each series independently, grid-search t_s and find the peak: the date where the hyperbolic fit is better than any nearby alternative. If a series genuinely curves toward a pole, its R² will peak at some finite t_s and then decline. If it’s really just linear, R² will keep increasing as t_s → ∞ and never peak. No peak, no signal, no vote!
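A per-series version of that search might look like the following, assuming the peak test is the simplest possible one, an interior argmax on the R² curve (the post’s exact statistic isn’t shown):

```python
import numpy as np

def hyperbola_r2(t_s, t, y):
    """R-squared of y = a/(t_s - t) + b with the pole t_s held fixed."""
    X = np.column_stack([1.0 / (t_s - t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return 1.0 - rss / float(np.sum((y - y.mean()) ** 2))

def finite_peak(t, y, t_s_grid):
    """Grid-search the pole date for one series. The series 'votes' only
    if its R-squared peaks strictly inside the grid; a monotonically rising
    fit means the best hyperbola is a line in disguise -- no peak, no vote."""
    r2 = np.array([hyperbola_r2(ts, t, y) for ts in t_s_grid])
    i = int(np.argmax(r2))
    return float(t_s_grid[i]) if 0 < i < len(t_s_grid) - 1 else None
```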

One series peaks! arXiv “emergent” (the count of AI papers about emergence) has a clear, unambiguous maximum. The other four are monotonically better fit by a line. The singularity date comes from the one metric that’s actually going hyperbolic.

This is more honest than forcing five metrics to average out to a date that none of them individually support.

Same inputs → same date. Deterministic. The stochasticity is in the universe, not the model.

The fit converged! Each series has its own R² at the shared t_s, so you can see exactly which metrics the hyperbola captures well and which it doesn’t. arXiv’s R² is the one that matters. It’s the series that actually peaked.

The 95% confidence interval comes from profile likelihood on t_s. We slide the singularity date forward and backward until the fit degrades past an F-threshold.
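A single-series sketch of that interval, using the standard F-based profile criterion with p = 3 parameters (a, b, and t_s itself). The post profiles the shared t_s across all series, so this is a simplification:

```python
import numpy as np
from scipy.stats import f as f_dist

def pole_rss(t_s, t, y):
    """RSS of y = a/(t_s - t) + b with t_s held fixed (linear in a, b)."""
    X = np.column_stack([1.0 / (t_s - t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ coef) ** 2))

def profile_ci(t, y, t_s_grid, alpha=0.05):
    """Slide the singularity date along the grid and keep every candidate
    whose RSS stays under the F-threshold relative to the best date."""
    t_s_grid = np.asarray(t_s_grid, dtype=float)
    rss = np.array([pole_rss(ts, t, y) for ts in t_s_grid])
    n, p = len(t), 3                  # parameters: a, b, and t_s itself
    crit = 1.0 + f_dist.ppf(1.0 - alpha, 1, n - p) / (n - p)
    ok = t_s_grid[rss <= rss.min() * crit]
    return float(ok.min()), float(ok.max())
```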

How much does the date move if we drop one metric entirely?

If dropping a single series shifts t_s by years, that series was doing all the work. If the shifts are zero, the dropped series never had a signal in the first place.
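Continuing the sketch above (pole_rss and numpy as defined there), the leave-one-out loop is short. I’ve reduced the shared-date search to minimizing total RSS over the grid, which is my assumption, and exactly the boundary-seeking objective the post warns about when no remaining series has a real peak:

```python
def leave_one_out(series, t_s_grid):
    """Re-run the shared-date search with each metric dropped in turn.

    `series` maps a metric name to its (t, y) arrays. If the date barely
    moves, the dropped series carried no signal; if it jumps to the grid
    boundary, that series was doing all the work."""
    dates = {}
    for name in series:
        totals = [sum(pole_rss(ts, t, y)
                      for other, (t, y) in series.items() if other != name)
                  for ts in t_s_grid]
        dates[name] = float(t_s_grid[int(np.argmin(totals))])
    return dates
```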

The table tells the story plainly: arXiv is doing all the work. Drop it and the date jumps to the search boundary (no remaining series has a finite peak). Drop anything else and nothing moves. They were never contributing to the date, only providing context curves at the shared t_s.

Note: Copilot has exactly 2 data points and 2 parameters (a and b), so it fits any hyperbola perfectly. Zero RSS, zero influence on t_s. It’s along for the ride!

The model says y → ∞ at t = t_s. But what does “infinity” mean for arXiv papers about emergence? It doesn’t mean infinitely many papers get published on a Tuesday in 2034.

It means the model breaks. t_s is the point where the current trajectory’s curvature can no longer be sustained. The system either breaks through into something qualitatively new, or it saturates and the hyperbola was wrong. A phase transition marker, not a physical prediction.

But here’s the part that should unsettle you: the metric that’s actually going hyperbolic is human attention, not machine capability.

MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal. The only curve pointing at a finite date is the count of papers about emergence. Researchers noticing and naming new behaviors. Field excitement, measured memetically.

The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.

That’s a very different singularity than the one people argue about.

If t_s marks when the rate of AI surprises exceeds human capacity to process them, the interesting question isn’t what happens to the machines. It’s what happens to us.

And the uncomfortable answer is: it’s already happening.

The labor market isn’t adjusting. It’s snapping. In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI’s potential, not its performance. The displacement is anticipatory. The curve doesn’t need to reach the pole. It just needs to look like it will.

Institutions can’t keep up. The EU AI Act’s high-risk rules have already been delayed to 2027. The US revoked its own 2023 AI executive order in January 2025, then issued a new one in December trying to preempt state laws. California and Colorado are going their own way anyway. The laws being written today regulate 2023’s problems. By the time legislation catches up to GPT-4, we’re on GPT-7. When governments visibly can’t keep up, trust doesn’t erode. It collapses. Global trust in AI has dropped to 56%.

Capital is concentrating at dot-com levels. The top 10 S&P 500 stocks (almost all AI-adjacent) hit 40.7% of index weight in 2025, surpassing the dot-com peak. Since ChatGPT launched, AI-related stocks have captured 75% of S&P 500 returns, 80% of earnings growth, and 90% of capital spending growth. The Shiller CAPE is at 39.4. The last time it was this high was 1999. The money flooding in doesn’t require AI to actually reach superintelligence. It just requires enough people to believe the curve keeps going up.

People are losing the thread. Therapists are reporting a surge in what they’re calling FOBO (Fear of Becoming Obsolete). The clinical language is striking: patients describe it as “the universe saying, ‘You are no longer needed.’” 60% of US workers believe AI will cut more jobs than it creates. AI usage is up 13% year-over-year, but confidence in it has dropped 18%. The more people use it, the less they trust it.

The epistemics are cracking. Less than a third of AI research is reproducible. Under 5% of researchers share their code. Corporate labs are publishing less. The gap between what frontier labs know and what the public knows is growing, and the people making policy are operating on information that’s already obsolete. The experts who testify before Congress contradict each other, because the field is moving faster than expertise can form.

The politics are realigning. TIME is writing about populist AI backlash. Foreign Affairs published “The Coming AI Backlash: How the Anger Economy Will Supercharge Populism.” HuffPost says AI will define the 2026 midterms. MAGA is splitting over whether AI is pro-business or anti-worker. Sanders proposed a data center moratorium. The old left-right axis is buckling under the weight of a question it wasn’t built to answer.

All of this is happening eight years before t_s. The social singularity is front-running the technical one. The institutional and psychological disruption doesn’t wait for capabilities to go vertical. It starts as soon as the trajectory becomes legible.

The pole at t_s isn’t when machines become superintelligent. It’s when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.

The date comes from one series. arXiv “emergent” is the only metric with genuine hyperbolic curvature. The other four are better fit by straight lines. The singularity date is really “the date when AI emergence research goes vertical.” Whether field excitement is a leading indicator or a lagging one is the crux of whether this means anything.

The model assumes stationarity. Like assuming the weather will continue to be “changing.” The curve will bend, either into a logistic (the hype saturates) or into something the model can’t represent (genuine phase transition). t_s marks where the current regime can’t continue, not what comes after.

MMLU is hitting its ceiling. Benchmark saturation introduces a leptokurtic compression artifact. MMLU’s low R² reflects this. The hyperbola is the wrong shape for saturating data.

Tokens per dollar is log-transformed (values span five orders of magnitude) and non-monotonic (GPT-4 cost more than 3.5; Opus 4.5 costs more than DeepSeek-R1). The cost curve isn’t smooth: it’s Pareto advances interspersed with “we spent more on this one.”

Five metrics isn’t enough. More series with genuine hyperbolic curvature would make the date less dependent on arXiv alone. A proper study would add SWE-bench, ARC, GPQA, compute purchases, talent salaries. I used five because five fits in a table.

Copilot has two data points. Two parameters, two points, zero degrees of freedom, zero RSS contribution. The sensitivity analysis confirms it doesn’t matter.

The math found one metric curving toward a pole on a specific day at a specific millisecond: the rate at which humans are discovering emergent AI behaviors. The other four metrics are linear. The machines are improving steadily. We are the ones accelerating!

The social consequences of that acceleration (labor displacement, institutional failure, capital concentration, epistemic collapse, political realignment) are not predictions for 2034. They are descriptions of 2026. The singularity in the data is a singularity in human attention, and it is already exerting gravitational force on everything it touches.

I see no reason to let epistemological humility interfere with a perfectly good timer.

See you on the other side!

...

Read the original on campedersen.com »

2 645 shares, 59 trendiness

Europe's $24 Trillion Breakup With Visa and Mastercard

ECB President Christine Lagarde has called for Europe to break its dependence on American payment infrastructure, warning that every card transaction sends European consumer data to the United States. A coalition of 16 banks thinks it has the answer.

What’s happening? ECB President Christine Lagarde told Irish radio that Europe needs its own digital payment system “urgently,” warning that virtually all European card and mobile payments currently run through non-European infrastructure controlled by Visa, Mastercard, PayPal or Alipay. Days later, on 2 February, the European Payments Initiative (EPI) and the EuroPA Alliance signed a landmark agreement to build a pan-European interoperable payment network covering 130 million users across 13 countries. The system, built around the digital wallet Wero, aims to let Europeans pay and transfer money across borders without touching a single American network.

Every time a European taps a card, pays online or splits a bill with friends, the transaction flows through infrastructure owned and operated by American companies. Visa and Mastercard together process approximately $24 trillion in transactions annually. Card payments account for 56% of all cashless transactions in the EU. And the data — who bought what, where, when and for how much — leaves European jurisdiction every time.

“It’s important for us to have digital payment under our control,” Lagarde told The Pat Kenny Show. “Whether you use a card or whether you use a phone, typically it goes through Visa, Mastercard, PayPal, Alipay. Where are all those coming from? Well, either the US or China.”

The host’s response — “I didn’t realise this” — captured the broader European blind spot. Most consumers have no idea that their payment data routinely exits the EU. In a geopolitical environment where Europe is scrambling to reduce dependence on the United States across defence, energy and trade, payments remain an overlooked vulnerability.

The lesson of Russia sharpened the urgency. When Western sanctions cut Russia off from Visa and Mastercard in 2022, the country’s domestic payments were immediately disrupted. European policymakers asked the obvious question: what would happen if the US decided — or was pressured — to restrict European access to those same networks?

The European Payments Initiative, a consortium of 16 major banks and payment processors including BNP Paribas, Deutsche Bank and Worldline, launched Wero in July 2024 as Europe’s answer. Built on SEPA instant credit transfers, Wero lets users send money using just a phone number — no IBAN, no card, no intermediary.

The numbers so far are encouraging. Wero already has over 47 million registered users in Belgium, France and Germany, has processed over €7.5 billion in transfers, and counts more than 1,100 member institutions. Retail payments went live in Germany at the end of 2025, with merchants including Lidl, Decathlon, Rossmann and Air Europa already accepting Wero online. France and Belgium follow in 2026.

But the real breakthrough came on 2 February, when EPI signed a memorandum of understanding with the EuroPA Alliance — a coalition of national payment systems including Italy’s Bancomat, Spain’s Bizum, Portugal’s MB WAY and the Nordics’ Vipps MobilePay. The deal instantly connects approximately 130 million users across 13 countries, covering roughly 72% of the EU and Norway population. Cross-border peer-to-peer payments launch this year, with e-commerce and point-of-sale payments following in 2027.

“European payment sovereignty is not a vision, but a reality in the making,” said Martina Weimert, CEO of EPI.

Europe has tried this before. The Monnet Project, launched in 2008 by twenty European banks, collapsed in 2012. The original EPI vision itself was scaled back after several founding members withdrew, forcing a pivot from a full card-replacement scheme to a narrower account-to-account model.

The core problem has always been fragmentation. Each EU country developed its own domestic payment solution — Bizum in Spain, iDEAL in the Netherlands, Payconiq in Belgium, Girocard in Germany — but none could work across borders. A Belgian consumer buying from a Dutch retailer still needed Visa or Mastercard. National pride and competing banking interests repeatedly sabotaged attempts at unification.

The network effect compounds the challenge. Merchants accept Visa and Mastercard because consumers carry them. Consumers carry them because merchants accept them. Breaking that loop requires either regulatory force or a critical mass of users large enough to make merchants care — which is precisely what the EuroPA deal attempts to deliver by connecting existing national user bases rather than building from scratch.

Running in parallel is the ECB’s digital euro project, which would create a central bank-backed digital currency usable across the eurozone. EU finance ministers have accelerated discussions on the initiative, though the European Parliament has not yet passed the required legislation. Once approved, the ECB estimates it would need a further two to three years to launch.

EPI is careful to distinguish Wero from the digital euro. Wero is a private-sector initiative; the digital euro is public money. They are designed to complement rather than compete — though the overlap in ambition is obvious. Both exist because Europe’s political establishment has finally accepted that payments sovereignty is as strategically important as energy independence or defence autonomy.

Sceptics have good reasons for doubt. Creating a viable alternative to Visa and Mastercard requires “several billion euros” in investment, according to EPI’s own estimates. Low interchange fees under EU regulation make profitability difficult. Consumer habits are deeply entrenched — and neither Visa nor Mastercard will sit idle while Europe tries to dismantle their most profitable market.

Weimert herself concedes that calling Wero a “challenger” may be premature, describing it as functioning like a startup — albeit one with €500 million in backing and 47 million users already on board.

But the political tailwinds are stronger than they have ever been. The EU’s instant payments regulation, the Capital Markets Union push, the broader drive for European strategic autonomy in a world of tariff wars and great power rivalry — all point in the same direction. The question is no longer whether Europe wants its own payment infrastructure. It is whether it can execute fast enough to matter.

As Lagarde put it: “We have the assets and opportunities to do that ourselves. And if we were to remove the internal barriers that we have set for ourselves in Europe, our economic wealth would increase significantly.”

...

Read the original on europeanbusinessmagazine.com »

3 644 shares, 74 trendiness

Google Fulfilled ICE Subpoena Demanding Student Journalist’s Bank and Credit Card Numbers

Google fulfilled an Immigration and Customs Enforcement subpoena that demanded a wide array of personal data on a student activist and journalist, including his credit card and bank account numbers, according to a copy of an ICE subpoena obtained by The Intercept.

Amandla Thomas-Johnson had attended a protest targeting companies that supplied weapons to Israel at a Cornell University job fair in 2024 for all of five minutes, but the action got him banned from campus. When President Donald Trump assumed office and issued a series of executive orders targeting students who protested in support of Palestinians, Thomas-Johnson and his friend Momodou Taal went into hiding.

Google informed Thomas-Johnson via a brief email in April that it had already shared his metadata with the Department of Homeland Security, as The Intercept previously reported. But the full extent of the information the agency sought — including usernames, addresses, itemized list of services, including any IP masking services, telephone or instrument numbers, subscriber numbers or identities, and credit card and bank account numbers — was not previously known.

“I’d already seen the subpoena request that Google and Meta had sent to Momodou [Taal], and I knew that he had gotten in touch with a lawyer and the lawyer successfully challenged that,” Thomas-Johnson said. “I was quite surprised to see that I didn’t have that opportunity.”

The subpoena provides no justification for why ICE is asking for this information, except that it’s required “in connection with an investigation or inquiry relating to the enforcement of U.S. immigration laws.” In the subpoena, ICE requests that Google not “disclose the existence of this summons for indefinite period of time.”

Thomas-Johnson, who is British, believes that ICE requested that information to track and eventually detain him — but he had already fled to Geneva, Switzerland, and is now in Dakar, Senegal.

The Electronic Frontier Foundation, which is representing Thomas-Johnson, and the ACLU of Northern California sent a letter to Google, Amazon, Apple, Discord, Meta, Microsoft, and Reddit last week calling on tech companies to resist similar subpoenas in the future from DHS without court intervention. The letter asks the companies to provide users with as much notice as possible before complying with a subpoena to give them the opportunity to fight it, and to resist gag orders that would prevent the tech companies from informing targets that a subpoena was issued.

“Your promises to protect the privacy of users are being tested right now. As part of the federal government’s unprecedented campaign to target critics of its conduct and policies, agencies like DHS have repeatedly demanded access to the identities and information of people on your services,” the letter reads. “Based on our own contact with targeted users, we are deeply concerned your companies are failing to challenge unlawful surveillance and defend user privacy and speech.”

In addition to Thomas-Johnson’s case, the letter refers to other instances in which technology companies provided user data to DHS, including a subpoena sent to Meta to “unmask” the identities of users who documented immigration raids in California. Unlike Thomas-Johnson, users in that case were given the chance to fight the subpoena because they were made aware of it before Meta complied.

“Google has already fulfilled this subpoena,” an attorney for Google told Thomas-Johnson’s lawyer, as The Intercept previously reported. “Production consisted of basic subscriber information.”

The ICE subpoena requested the detailed information linked to Thomas-Johnson’s Gmail account. Thomas-Johnson confirmed to The Intercept that he had attached his bank and credit card numbers to his account to buy apps.

Google did not respond to a request for comment.

Lindsay Nash, a professor at Cardozo Law and a former staff attorney with the ACLU Immigrants’ Rights Project, said that by not giving prior notice, Google deprived Thomas-Johnson of his ability to protect his information.

“The problem is that it doesn’t allow the person whose personal information is on the line and whose privacy may be being invaded to raise challenges to the disclosure of that potentially private information,” Nash said. “And I think that’s important to protect rights that they may have to their own information.”

Tech companies’ data sharing practices are primarily governed by two federal laws: the Stored Communications Act, which protects the privacy of digital communications, including emails, and Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive trade practices.

“Under both federal law and the law of every state, you cannot deceive consumers,” said Neil Richards, a law professor at Washington University in St. Louis who specializes in privacy, the internet, and civil liberties. “And if you make a material misrepresentation about your data practices, that’s a deceptive trade practice.”

Whether or not corporations are clear enough with consumers about how they collect and share their data has been litigated for decades, Richards said, referencing the infamous Cambridge Analytica lawsuit brought by the Federal Trade Commission, alleging that the company misled Facebook users about data collection and sharing.

Google’s public privacy policy acknowledges that it will share personal information in response to an “enforceable governmental request,” adding that its legal team will “frequently push back when a request appears to be overly broad or doesn’t follow the correct process.”

According to Google, the company overwhelmingly complied with the millions of requests made by the government for user information over the last decade. Its data also shows that those requests have spiked over the last five years. It’s unclear how many of those users were given notice of those requests ahead of time or after.

Richards said that cases like these emphasize the need for legal reforms around data privacy and urged Congress to amend the Stored Communications Act to require a higher standard before the government can access our digital data. He also said the federal government needs to regulate Big Tech and place “substantive restrictions on their ability to share information with the government.”

It’s hard to know exactly how tech companies are handling our personal data in relation to the government, but there seems to have been a shift in optics, Richards said. “What we have seen in the 12 months since the leaders of Big Tech were there on the podium at the inauguration,” Richards said, “is much more friendliness of Big Tech towards the government and towards state power.”

From Dakar, Thomas-Johnson said that understanding the extent of the subpoena was terrifying but had not changed his commitment to his work.

“As a journalist, what’s weird is that you’re so used to seeing things from the outside,” said Thomas-Johnson, whose work has appeared in outlets including Al Jazeera and The Guardian. “We need to think very hard about what resistance looks like under these conditions… where government and Big Tech know so much about us, can track us, can imprison, can destroy us in a variety of ways.”

This story has been updated to reflect that Thomas-Johnson’s legal team still does not know the full extent of the information that Google provided to ICE, but that Thomas-Johnson said his bank and credit card numbers were attached to his account.

...

Read the original on theintercept.com »

4 629 shares, 28 trendiness

Discord Alternatives, Ranked

I’ve been running a Discord server for about four and a half years now. When I started streaming during the pando, I had no idea that I would end up building a community. Hell, I’d never even used Discord before. I only knew what it was because I had to stop my students from using it.

Don’t like reading? Click here for the final scores.

But folks kept asking for one. My viewers expected a community hub in which people who found their way to my Twitch streams could find each other, even when I was not live. As the whole streaming thing was itself an experiment in remote learning for me, this seemed a natural extension. So now, I have some mileage on me as a community moderator. I’m intimately familiar with the features Discord offers, and all the arguments against using it. I’m sensitive to them, FOSS dork that I am. I’m also keenly sensitive to the arguments about data loss inside of a forever-chat. In fact, I’m so sensitive to it that I even tried to address the problem in some small way.

But Discord, like all freemium services, is a risk. At any moment their advertising model could become intolerable, or their policy about using my data to train AI could change, or their pricing could get out of control, or some other rent-seeking nonsense common to internet services trying to stretch their profit margin.

I need an exit strategy. Anyone using Discord needs an exit strategy. The trick is to find a landing spot that users will tolerate, and that allows the community to continue in some fashion. Change is loss, and that is excruciatingly true for community platforms. Any switch comes with an attrition rate, meaning the destination better be worth the cost in headcount.

For this reason, and for another project, I’ve been deeply researching Discord alternatives for the better part of a year. Some of my colleagues may think me a bit obsessed about the importance of “a chat app,” but I’m convinced that the communication mechanism for online communities is critical to their success. Choosing a new one could be a matter of life and death for the community. This is a decision we have to get right the first time.

So here, humbly submitted, are my rankings of many of the Discord-like alternatives for maintaining online communities.

I’ve arrived at five broad categories in which an online community platform needs to perform.

Functionality: can it do everything required of a platform for building, organizing, and sustaining a community?

Openness: what access is there to all the tool’s features and code without payment?

Security: how secure are the server and user data against common threats?

Safety: what features are available to moderate the community and protect it from malicious or unwanted behavior?

Decentralization: how reliant is the service on single points of failure?

These will be evaluated on a scale from 1-5, with 5 being the “best” for each criterion.

I’ve done my best to consider multiple use cases and threat models in these scores. I am, however, a flawed, biased meatsack with limited visibility. I may not have predicted your needs precisely. I may have omitted your favorite option. If so, I hope you’ll afford me some grace. I did the best I could.

Oh, and I’m not touching Slack or Teams. Reasons should be obvious.

We’ll start with Discord as a baseline.

As a product, Discord is very, very good. It serves its purpose with an absolute minimum of friction—both from a user and administrator perspective. Even without paying, the features out of the box are well-considered and helpfully implemented. What is the product, anyway? Sometimes it seems like Discord themselves don’t really know. While they bristle at being called a “Slack clone,” there’s a reason many companies (especially tech startups) choose Discord as both their internal team communication tool, as well as their customer engagement tool. Some truly benighted groups even choose to document their product with it.

Whatever Discord thinks it is, the purpose of a system is what it does, and Discord builds online communities. Say what you want about the company, the closed nature, the increasingly-icky ad model, the core of Discord continues to work well for bringing people together in quasi-public online spaces. The medium of real-time text, aka instant messaging, aka IRC-again-but-not-IRC, has become a default, but one not without limitations. For example, what does this do to your heart rate:

Right?! We’ve embraced immediacy at the expense of depth. Also, in Discord’s case, accessibility. Searching Discord is a proper disaster. While messages are more or less permanent, it is by no means easy to find them again, weeks/months/years later.

But let’s get into the criteria before this becomes a treatise on the nature of the modern web.

As mentioned, Discord is highly functional—for what it does. But its limitations do start to grate as time goes on. Online communities have a predictable lifecycle, in which the excitement of the early days is well-served by real-time chat. The memes are flying; people are excited to meet each other; the future holds boundless possibilities. The space will categorize and fragment, trying to organize the chaos. Over time, most of the messages come from a core group of contributors, with more occasional arrivals and questions from newcomers. This is as it should be. But what happens to the history of that community as it heads up the scroll? How does the past usefully inform the future?

Discord has made some affordances for this with “Forum” type channels. Even so, the past is hard to explore.

Discord is not open, so not much to say on that front.

Discord messages are not end-to-end encrypted. Pretty famously, Discord will give up your data for law enforcement. Although they’ve recently added end-to-end encryption for video and audio, the implementation is clunky. And of course, all the text data in a Discord server is unencrypted. But hey, at least they support MFA?

Safety, in the sense of “Trust and Safety,” may be Discord’s greatest strength. I have greatly appreciated all the moderation tools at my disposal. Even a modestly sized server like mine (~3000 users) would be impossible to manage without automatic word catching, granular permissions on channels and roles, and multiple response options including timeouts, kicks, and bans. Discord also has a very involved onboarding flow that makes certain there is an agreement to community rules before users can participate.

And need we even mention decentralization here? If Discord fails, your community goes dark.

Best for: communities who value secrecy above all.

I love Signal. Like, a lot. I’m a daily user and a donor. I’ve even convinced most of my friends and family to use it as our primary mode of text communication. And yes, I’ve organized a community with it—one for which privacy was (at the time) of paramount importance. I am deeply familiar with all the advantages and drawbacks of Signal.

As a secure chat, Signal does just fine. Well, better than fine from a cryptography perspective. It is the gold standard in end-to-end encrypted communications for good reason. But the strongest cryptography in the world is meaningless for a community if the platform is unusable. Fortunately, that’s not the case for Signal. Emoji reactions, stickers, (some) formatted text, and even voice/video calls make it an indispensable tool for secure communications that feel familiar and feature-filled enough for normies. Nobody will be totally lost moving from another chat app to Signal.

If you’re looking for nothing but chat, Signal is fantastic. But many aspects of community-building online are simply unavailable here. To start, there are only group chats. There is no conversation threading or channels to keep conversations organized. You can have multiple chats, but that gets messy quickly.

I can’t even pin posts. In fact, post searchability is a limited feature by design. Most group chats enable disappearing messages. That’s great to prevent incriminating evidence from piling up; it’s terrible for reviewing what a community discussed previously.

Also absent: granular roles in each chat, or anything resembling moderation tools. As an admin, I can only ban users for unwanted behavior. I can neither automatically prevent harassment nor provide a more measured response than the banhammer.

I should mention that almost all these tradeoffs are accepted limitations in service of Signal’s primary objectives.

On the point of decentralization, Signal has none. As Meredith Whittaker recently wrote, all Signal app traffic flows through the same cloud infrastructure, much of which depends on AWS.

If your community’s threat model is such that eliminating all possible points of evidence collection against you matters above all else, Signal is the clear winner. Maintaining that level of operational security naturally comes at the cost of some other creature comforts a community could come to covet.

I didn’t set out to alliterate the hell out of that sentence, but I didn’t stop it either.

Best for: communities who value independence over all, with security/privacy a runner-up.

Oh, Matrix. You are the football that I, in my zigzag-stripe shirt, keep trying to kick. In theory, the Matrix protocol and Element, its flagship client, should be the ideal for decentralized, encrypted communications. Using Element feels a whole lot like using Discord. Heck, it can even bridge communications from Discord and other platforms. Sadly, as time goes on, the nicks from the rough edges start to accumulate.

Before going further, we need to define some terms. Matrix is the federated, encrypted messaging protocol published and maintained by the Matrix Foundation. Synapse is their “reference implementation” server technology written in Python. Synapse is the most common way folks start their own Matrix servers. There are other server implementations, now including “Synapse Pro,” which I guess is a partial rewrite of Synapse in Rust? Element is the first-party client that users would use to connect to Matrix. They need an account on a server, and of course matrix.org is the flagship Matrix server where the vast majority of users have their accounts. But you can point Element at any Matrix server to log in, as long as you have an account on that server.

Confused yet? If users are unwilling to select a Mastodon server, do you think they’d be willing to put up with this?

Ah, but I get ahead of myself. Let’s start with what’s good.

Matrix uses a similar end-to-end cryptography scheme to Signal. “Rooms” (chats, channels) are not encrypted by default, but they can be made so. There have been noted issues with the previous cryptography library used by Element, but the newer vodozemac library is in much better shape. Of course, not all Matrix clients use the new hotness.

A given Matrix server can create multiple rooms (channels), and even group them into “spaces” such that they appear quite similar to Discord servers.

Inside the rooms, things feel familiar. We have threads, emoji reacts, and message search (sorta). On some clients (but not Element), there is the possibility of custom emoji.

And that’s…it. Element promises more, like native video conferencing, but heaven help you if you’re trying to self-host it. It is technically possible, but by no means simple.

“Technically possible, but by no means simple” aptly describes the entire Matrix experience, actually.

I ran a private Matrix server for about a year and a half. Why private? In two public Matrix rooms I had joined—including the room for Synapse admins—I experienced a common attack in which troll accounts spam the room with CSAM material. Horrible, but not just for the participants and admins in the room. Through the magic of federation, every server that has a user participating in the room now has a copy of the CSAM material, and has to take action to remove it. This requires a manual curl request on the server itself, because Synapse has an appalling lack of moderation tools. It’s so bad that, without third-party tooling, you can’t even ban a user outright from a server; you have to manually ban them from every single room.

Then came September 2, 2025. The outage of matrix.org caused by drive failures was not an indictment of Matrix’s database management or recovery process—in fact, I was quite impressed with their response. But it did put the lie to Matrix’s decentralization for me. Almost none of my friends could use Matrix, even though I was hosting my own server. The onboarding pipeline (especially via Element) is so focused on the flagship server, I daresay it comprises the plurality of Matrix accounts. It’s not easy to get any statistics for all Matrix users, but that is my guess. How “decentralized” is that, really? Just because something can be decentralized doesn’t make it so.

I’m probably a little too close to this one. I so badly wanted Matrix to work, and I tried to make it work for my purposes for a long time. Ultimately, the pain points overcame the benefits. But if you care most about an intersection of message encryption, federation, and decentralization, and you’re willing to put in quite a lot of admin time, Matrix can be a viable community chat platform.

Best for: communities that want a smooth Slack-like experience and are willing to pay for independence.

What if you could self-host Slack? That’s basically the Rocket.Chat experience. It’s slick, easy to get set up, and loaded with integrations. All of this comes, as you might expect, at a price. While there is an “open source” Community Edition, its featureset is limited, and you may quickly find yourself looking at the paid plans for additional features or support. Rocket.Chat is one of several platforms that follow this freemium model. I don’t really begrudge them this approach, but it can be frustrating for a community just finding its feet. To their credit, they do offer discounts for open source projects, not-for-profits, and other organizations on a per-request basis.

Rocket.Chat does support end-to-end encrypted communications. Key management can be a little clunky, but I was impressed it had the feature at all.

Be aware, however, that these centrally-managed services will of course allow administrators to audit messages. That is a documented part of the moderation flow for Rocket.Chat. If you demand anonymity, or an inability for administrators to view your messages (what are you doing in that community?), Rocket.Chat might not be right for you.

I’ll quickly mention why I gave it a score of 3 on decentralization. Seems a bit high, right? Until recently, Rocket.Chat supported Matrix federation. Since October 2025, it has pursued a native federation scheme that would allow separate Rocket.Chat instances to share rooms and DMs across server boundaries. This, although not open source, is extremely compelling.

I really enjoyed my experimentation with Rocket.Chat, and found myself thinking seriously about it as an alternative to where I was. The cost is just steep.

Best for: A split between forums and real-time chat.

I’ve been playing with Zulip for a bit now, and I still don’t really know what to make of it. From one perspective, it has a bit of an identity crisis, unsure of whether it’s a forum or a chat platform. From another perspective, this dual identity is its greatest strength: real-time when you want it, asynchronous when you don’t.

Zulip is self-hostable, with some caveats. As the plans and pricing detail, anything beyond 10 users starts costing some cash. It adds up quickly. While seemingly everything can be done in a self-hosted manner, you’re at the mercy of some truly byzantine documentation.

While there is great functionality to be found, it comes at a rather steep price for organizations of any size—whether administrative overhead, or just plain cash for the managed services. Although to their credit, they do offer a community plan with many of those higher-tier features available for qualifying organizations.

One feature you won’t find anywhere is end-to-end encryption. The developers seem rather against the idea. Multi-factor authentication must be enabled in the config files, not the admin frontend—hardly ideal.

Unless I’m missing it, there do not appear to be any serious content moderation tools in Zulip. The community moderation toolkit is, in my opinion, the barest of essentials. Nearly all of these capabilities are reactive, not proactive. It seems the expectation is good-faith participation, with those agreements and guarantees handled elsewhere. Having been on the wrong end of malicious intent, I don’t feel safe enough with these tools.

Lastly, on decentralization, it’s mostly a miss. Even for self-hosted plans, anything above the free tier requires a zulip.com account for plan management. And federation? Forget about it. Although every Zulip server can technically host multiple Zulip instances, they don’t interact with one another.

If anything, writing this overview has left me more confused about Zulip than when I began. I just don’t know where it fits, or who can afford these prices for a growing community.

Best for: Fortune 100s and governments

Take a look at the front page of the Mattermost website, and you’ll get an idea of the kind of organization they expect to be using this thing. Odds are, your nascent online community ain’t that. While the software may superficially look like some of these others, its intention is entirely other. Community building is not what’s going on here. Rather, Mattermost’s objective is highly-focused, integrated workflows that involve human communication alongside machine automation. Business operations are what…matter most.

Mattermost describes itself as “open core,” and the core is…rather tiny. Even when installing the self-hosted version, you’ll soon need a rather expensive license for real work. Starting at $10/user is a clear indicator of the intended customer base. It ain’t me, that’s for sure.

Mattermost prides itself on a certain kind of security—specifically, the regulatory kind. Configurations for all manner of compliance regimes are provided in the documentation. Normal security is present as well, including MFA. Not so much end-to-end encryption, although mention is made of encrypting the PostgreSQL database. That’s novel, although not a solution to the problem addressed by E2EE.

I honestly don’t think Mattermost’s developers are capable of imagining a positive argument for an audit-resistant application. This thing is designed for monitoring user activity six ways from Sunday.

Consequently, “safety” in the way we’ve defined it here is absent from Mattermost’s conception of the universe. If you’re logging on to a Mattermost server, about a thousand other trust mechanisms are in place to guarantee you won’t act like a doofus on this app.

Hardly a point to mentioning decentralization here, beyond the possibility of self-hosting. Ultimately though, you only get what your license key allows, and since the server is only open core, Mattermost itself is quite the point of failure.

Best for: anything but real-time chat, really.

I’m gonna be honest: I kind of love Discourse. I’m not sure I have a reason to deploy it, but I want to. Everything Joan Westenberg writes in this piece in praise of Discourse resonates with me. Community for the long haul? Transparency in governance? Built-in systems for establishing human trust?

But Discourse has one significant difference from everything else on this list: it is primarily a forum, not a real-time chat app. I’m not saying that’s a bad thing, necessarily, but it sure is different. If your community expects instantaneous communication, Discourse may be a big adjustment. Or it might not be sufficient on its own for your needs.

But what does it do well? Forums! It’s very easy to navigate categories and topics. The UI provides clear signals for when something happened. Oh, and search is simple.

Maybe the best way to think of Discourse is as an anti-Discord. It’s everything Discord isn’t: asynchronous, open source, and self-hostable.

Discourse is 100% open source. I’m running it right now in my homelab, with access to all the plugins and features I’d expect, costing me only the time it took to install.

I was additionally quite impressed with the moderation tools. Not only are there plenty of tools to track user activity, but the moderation decisions are public by default. This is a good thing! The community can hold its leaders accountable for upholding their end of the bargain: to act in good faith in support of the community.

One area in which it falters a bit is, of course, end-to-end encryption. Very few of these tools enable it, and when they do, it can be clunky. It’s entirely possible that the right option for a community is one of these and Signal for sensitive, out-of-band communications.

If you start to look around, you’ll notice Discourse fora everywhere. There’s a good reason for that! The software is rock solid for what it is. And maybe your community needs its depth of features more than it needs instantaneous messaging.

Best for: Appreciating how much work it takes to make one of these work

Stoat, née Revolt, was meant to be an open source Discord alternative. Recently, they received a cease-and-desist regarding the name Revolt, and renamed to a…weasel.

Anyway, this thing is so far from being ready for prime time, I only include it here to call out the project. I wish them the best and hope for good things, especially since you can self-host the server. But a lack of stability and features prevents this from being useful for anything beyond experimentation. Maybe someday.

The Tool is Not the Community

Choosing a platform on which to build a community is just the beginning. It’s vitally important, yet insufficient to a community’s success. Tools do not make a culture; the people engaging on it do. Most of my time building the culture of TTI has not been a technical endeavor. What we have—and I think it’s pretty special—has little to do with Discord’s featureset. It just happens to be where the people are. The options presented to you here allow you to seek a path that aligns with your objectives, principles, and needs at a purely mechanical level. The rest depends on the human element.

...

Read the original on taggart-tech.com »

5 587 shares, 54 trendiness

I Started Programming When I Was 7. I'm 50 Now, and the Thing I Loved Has Changed

I wrote my first line of code in 1983. I was seven years old, typing BASIC into a machine that had less processing power than the chip in your washing machine. I understood that machine completely. Every byte of RAM had a purpose I could trace. Every pixel on screen was there because I’d put it there. The path from intention to result was direct, visible, and mine.

Forty-two years later, I’m sitting in front of hardware that would have seemed like science fiction to that kid, and I’m trying to figure out what “building things” even means anymore.

This isn’t a rant about AI. It’s not a “back in my day” piece. It’s something I’ve been circling for months, and I think a lot of experienced developers are circling it too, even if they haven’t said it out loud yet.

My favourite period of computing runs from the 8-bits through to about the 486DX2-66. Every machine in that era had character. The Sinclair Spectrum with its attribute clash. The Commodore 64 with its SID chip doing things the designers never intended. The NES with its 8-sprite-per-scanline limit that made developers invent flickering tricks to cheat the hardware. And the PC — starting life as a boring beige box for spreadsheets, then evolving at breakneck pace through the 286, 386, and 486 until it became a gaming powerhouse that could run Doom. You could feel each generation leap. Upgrading your CPU wasn’t a spec sheet exercise — it was transformative.

These weren’t just products. They were engineering adventures with visible tradeoffs. You had to understand the machine to use it. IRQ conflicts, DMA channels, CONFIG.SYS and AUTOEXEC.BAT optimisation, memory managers — getting a game to run was the game. You weren’t just a user. You were a systems engineer by necessity.

And the software side matched. Small teams like id Software were going their own way, making bold technical decisions because nobody had written the rules yet. Carmack’s raycasting in Wolfenstein, the VGA Mode X tricks in Doom — these were people pushing against real constraints and producing something genuinely new. Creative constraints bred creativity.

Then it professionalised. Plug and Play arrived. Windows abstracted everything. The Wild West closed. Computers stopped being fascinating, cantankerous machines that demanded respect and understanding, and became appliances. The craft became invisible.

But it wasn’t just the craft that changed. The promise changed.

When I started, there was a genuine optimism about what computers could be. A kid with a Spectrum could teach themselves to build anything. The early web felt like the greatest levelling force in human history. Small teams made bold decisions because nobody had written the rules yet.

That hope gave way to something I find genuinely distasteful. The machines I fell in love with became instruments of surveillance and extraction. The platforms that promised to connect us were really built to monetise us. The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks.

The thing I loved changed, and then it was put to work doing things I’m not proud to be associated with. That’s a different kind of loss than just “the tools moved on.”

But I adapted. That’s what experienced developers, human beings, do.

Over four decades I’ve been through more technology transitions than I can count. New languages, new platforms, new paradigms. CLI to GUI. Desktop to web. Web to mobile. Monoliths to microservices. Tapes, floppy discs, hard drives, SSDs. JavaScript frameworks arriving and dying like mayflies.

Each wave required learning new things, but the core skill transferred. You learned the new platform, you applied your existing understanding of how systems work, and you kept building. The tool changed; the craft didn’t. You were still the person who understood why things broke, how systems composed, where today’s shortcut became next month’s mess.

I’ve written production code in more languages than some developers have heard of. I’ve shipped software on platforms that no longer exist. I’ve chased C-beams off the shoulder of Orion. And every time the industry lurched in a new direction, the experience compounded. You didn’t start over. You brought everything with you and applied it somewhere new.

That’s the deal experienced developers made with the industry: things change, but understanding endures.

I say that know­ing how of­ten those words have been wrong through­out his­tory. But hear me out.

Previous technology shifts were “learn the new thing, apply existing skills.” AI isn't that. It's not a new platform or a new language or a new paradigm. It's a shift in what it means to be good at this.

I no­ticed it grad­u­ally. I’d be work­ing on some­thing — build­ing a fea­ture, de­sign­ing an ar­chi­tec­ture — and I’d re­alise I was still do­ing the same thing I’d al­ways done, just with the in­ter­est­ing bits hol­lowed out. The part where you fig­ure out the el­e­gant so­lu­tion, where you wres­tle with the con­straints, where you feel the sat­is­fac­tion of some­thing click­ing into place — that was in­creas­ingly be­ing han­dled by a model that does­n’t care about el­e­gance and has never felt sat­is­fac­tion.

I’m not typ­ing the code any­more. I’m re­view­ing it, di­rect­ing it, cor­rect­ing it. And I’m good at that — 42 years of ac­cu­mu­lated judg­ment about what works and what does­n’t, what’s el­e­gant ver­sus what’s ex­pe­di­ent, how sys­tems com­pose and where they frac­ture. That’s valu­able. I know it’s valu­able. But it’s a dif­fer­ent kind of work, and it does­n’t feel the same.

The feed­back loop has changed. The in­ti­macy has gone. The thing that kept me up at night for decades — the puz­zle, the chase, the mo­ment where you fi­nally un­der­stand why some­thing is­n’t work­ing — that’s been com­pressed into a prompt and a re­sponse. And I’m watch­ing peo­ple with a frac­tion of my ex­pe­ri­ence pro­duce su­per­fi­cially sim­i­lar out­put. The craft dis­tinc­tion is real, but it’s harder to see from the out­side. Harder to value. Maybe harder to feel in­ter­nally.

Here’s the part that makes me laugh, darkly.

I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn't really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn't even realise you were teetering on top of a wobbly Jenga tower.

They’re writ­ing TypeScript that com­piles to JavaScript that runs in a V8 en­gine writ­ten in C++ that’s mak­ing sys­tem calls to an OS ker­nel that’s sched­ul­ing threads across cores they’ve never thought about, hit­ting RAM through a mem­ory con­troller with caching lay­ers they could­n’t di­a­gram, all while npm pulls in 400 pack­ages they’ve never read a line of.

But sure. AI is the mo­ment they lost track of what’s hap­pen­ing.

The ab­strac­tion ship sailed decades ago. We just did­n’t no­tice be­cause each layer ar­rived grad­u­ally enough that we could pre­tend we still un­der­stood the whole stack. AI is just the layer that made the pre­tence im­pos­si­ble to main­tain.

The dif­fer­ence is: I re­mem­ber what it felt like to un­der­stand the whole ma­chine. I’ve had that ex­pe­ri­ence. And los­ing it — even ac­knowl­edg­ing that it was lost long be­fore AI ar­rived — is a kind of grief that some­one who never had it can’t fully feel.

I don’t want to be dis­hon­est about this. There’s a ver­sion of this post where I tell you that ex­pe­ri­ence is more valu­able than ever, that sys­tems think­ing and ar­chi­tec­tural judg­ment are the things AI can’t re­place, that the craft en­dures in a dif­fer­ent form.

And that’s true. When I’m work­ing on some­thing com­plex — jug­gling sys­tem-level de­pen­den­cies, hold­ing a men­tal model across mul­ti­ple in­ter­act­ing spec­i­fi­ca­tions, mak­ing the thou­sand small de­ci­sions that de­ter­mine whether some­thing feels co­her­ent or just works — I can see how I still bring some­thing AI does­n’t. The taste. The judg­ment. The pat­tern recog­ni­tion from decades of see­ing things go wrong.

AI tools ac­tu­ally make that kind of think­ing more valu­able, not less. When code gen­er­a­tion is cheap, the bot­tle­neck shifts to the per­son who knows what to ask for, can spot when the out­put is sub­tly wrong, and can hold the whole pic­ture to­gether. Typing was never the hard part.

But I’d be ly­ing if I said it felt the same. It does­n’t. The won­der is harder to ac­cess. The sense of dis­cov­ery, of fig­ur­ing some­thing out through sheer per­sis­tence and in­ge­nu­ity — that’s been com­pressed. Not elim­i­nated, but com­pressed. And some­thing is lost in the com­pres­sion, even if some­thing is gained.

I turned 50 re­cently. Four decades of in­ten­sity, of craft­ing and find­ing sat­is­fac­tion and iden­tity in the build­ing.

And now I'm in what I've started calling a fallow period. Not burnout exactly. More like the ground shifting under a building you had thought, however much it changed, was permanent, and trying to figure out where the new foundation is.

I don't have a neat conclusion. I'm not going to tell you that experienced developers just need to “push themselves up the stack” or “embrace the tools” or “focus on what AI can't do.” All of that is probably right, and none of it addresses the feeling.

The feel­ing is: I gave 42 years to this thing, and the thing changed into some­thing I’m not sure I recog­nise any­more. Not worse, nec­es­sar­ily. Just dif­fer­ent. And dif­fer­ent in a way that chal­lenges the iden­tity I built around it and does­n’t sat­isfy in the way it did.

I sus­pect a lot of de­vel­op­ers over 40 are feel­ing some­thing sim­i­lar and not say­ing it, be­cause the in­dus­try wor­ships youth and adapt­abil­ity and say­ing this does­n’t feel like it used to” sounds like you’re falling be­hind.

I’m not falling be­hind. I’m mov­ing ahead, tak­ing ad­van­tage of the new tools, build­ing faster than ever, and us­ing these tools to help oth­ers ac­cel­er­ate their own work. I’m cre­at­ing prod­ucts I could only have dreamt of a few years ago. But at the same time I’m look­ing at the land­scape, try­ing to fig­ure out what build­ing means to me now. The world’s still fig­ur­ing out its shape too. Maybe that’s okay.

Maybe the fal­low pe­riod is the point. Not some­thing to push through, but some­thing to be in for a while.

I started pro­gram­ming when I was seven be­cause a ma­chine did ex­actly what I told it to, felt like some­thing I could ex­plore and ul­ti­mately know, and that felt like magic. I’m fifty now, and the magic is dif­fer­ent, and I’m learn­ing to sit with that.

...

Read the original on www.jamesdrandall.com »

6 521 shares, 22 trendiness

A Benchmark for Evaluating Outcome-Driven Constraint Violations in Autonomous AI Agents

...

Read the original on arxiv.org »

7 510 shares, 39 trendiness

Our $200M Series C / Oxide

We have raised a $200M Series C, and yes, you are permitted a double take: didn't we just raise a $100M Series B? And aren't we the ones that are especially candid about the perils of raising too much money?

Well, yes, on both fronts, so let us explain a little. First, we have the luxury of having achieved real product-market fit: we are making a product that people want to buy. This takes on additional dimensions when making something physical: with complexities like manufacturing, inventory, cash-conversion, and shifting supply chains, product-market fit implies getting the unit economics of the business right. All of this is a long way of saying: we did not (and do not) need to raise capital to support the business.

So if we didn't need to raise, why seek the capital? Well, we weren't seeking it, really. But our investors, seeing the business take off, were eager to support it. And we, in turn, were eager to have them: they were the ones, after all, who joined us in taking a real leap when it felt like there was a lot more risk on the table. They understood our vision for the company and shared our love for customers and our desire to build a singular team. They had been with us in some difficult moments; they know and trust us, as do we them. So being able to raise a Series C purely from our existing investors presented a real opportunity.

Still, even from investors that we trust and with a quick close, if the business doesn't need the money, does it make sense to raise? We have always believed that our biggest challenge at Oxide was time — and therefore capital. We spelled this out in our initial pitch deck from 2019.

Six years later, we stand by this, which is not to minimize any of those challenges: the technical challenges were indeed hard; we feel fortunate to have attracted an extraordinary team; and we certainly caught some lucky breaks with respect to the market. With this large Series C, we have entirely de-risked capital going forward, which in turn assures our independence.

This last bit is really important, because any buyer of infrastructure has had their heart broken countless times by promising startups that succumbed to acquisition by one of the established players that they were seeking to disrupt. The serial disappointments leave a refreshing bluntness in their wake, and it's not uncommon for us to be asked directly: “How do I know you won't be bought?”

Our intent in starting Oxide was not to be an acquisition target but rather to build a generational company; this is our life's work, not a means to an end. With our Series C, customers don't have to merely take our word for it: we have the capital to assure our survival into the indefinite future. If our Series B left us with confidence in achieving our mission, our Series C leaves us with certainty: we're going to kick butt, have fun, not cheat (of course!), love our customers — and change computing forever.

...

Read the original on oxide.computer »

8 449 shares, 31 trendiness

Jury told that Meta, Google 'engineered addiction' at landmark US trial

Meta and Google-owned YouTube were ac­cused Monday of push­ing highly ad­dic­tive apps on chil­dren as a land­mark so­cial me­dia trial be­gan in earnest in a California court.

The block­buster trial in front of a Los Angeles jury could es­tab­lish a le­gal prece­dent on whether the so­cial me­dia jug­ger­nauts de­lib­er­ately de­signed their plat­forms to lead to ad­dic­tion in chil­dren.

The pro­ceed­ings are ex­pected to see Meta chief Mark Zuckerberg on the stand next week and Instagram boss Adam Mosseri in the court­room as early as Wednesday. In ad­di­tion to Instagram, Meta’s plat­forms in­clude Facebook and WhatsApp.

“This case is about two of the richest corporations in history who have engineered addiction in children's brains,” plaintiffs' attorney Mark Lanier told the jury in his opening statement.

“This case is as easy as A-B-C,” Lanier said as he stacked children's toy blocks bearing the letters.

He con­tended the A was for ad­dict­ing, the B for brains and the C for chil­dren.

“They don't only build apps; they build traps,” Lanier said, saying Meta and YouTube pursued “addiction by design,” making his arguments using props like a toy Ferrari and a mini slot machine.

Meta at­tor­ney Paul Schmidt coun­tered in open­ing re­marks to the jury that ev­i­dence will show prob­lems with the plain­tiff’s fam­ily and real-world bul­ly­ing took a toll on her self-es­teem, body im­age and hap­pi­ness rather than Instagram.

“If you took Instagram away and everything else was the same in Kaley's life, would her life be completely different, or would she still be struggling with the same things she is today?” Schmidt asked, pointing out an Instagram addiction is never mentioned in medical records included in the evidence.

The trial be­fore Judge Carolyn Kuhl fo­cuses on al­le­ga­tions that a 20-year-old woman iden­ti­fied as Kaley G. M. suf­fered se­vere men­tal harm be­cause she be­came ad­dicted to so­cial me­dia as a child.

The case is be­ing treated as a bell­wether pro­ceed­ing be­cause its out­come could set the tone, and the level of pay­outs to suc­cess­ful plain­tiffs, for a tidal wave of sim­i­lar lit­i­ga­tion across the United States.

Social me­dia firms are ac­cused in hun­dreds of law­suits of lead­ing young users to be­come ad­dicted to con­tent and suf­fer from de­pres­sion, eat­ing dis­or­ders, psy­chi­atric hos­pi­tal­iza­tion and even sui­cide.

Lawyers for the plain­tiffs are bor­row­ing strate­gies used in the 1990s and 2000s against the to­bacco in­dus­try, which faced a sim­i­lar on­slaught of law­suits ar­gu­ing that com­pa­nies know­ingly sold a harm­ful prod­uct.

Lanier told the jurors that Kaley began watching YouTube at six years old because the company never told her mother the goal was “viewer addiction,” or that toddlers as young as two were being targeted despite “critical” risk of addiction.

“This is the first time that a social media company has ever had to face a jury for harming kids,” Social Media Victims Law Center founder Matthew Bergman, whose team is involved in more than 1,000 such cases, told AFP.

Internet ti­tans have ar­gued that they are shielded by Section 230 of the US Communications Decency Act, which frees them from re­spon­si­bil­ity for what so­cial me­dia users post.

However, this case ar­gues that those firms are cul­pa­ble for busi­ness mod­els de­signed to hold peo­ple’s at­ten­tion and to pro­mote con­tent that can harm their men­tal health.

The plaintiffs said they would call expert witnesses to argue that young people's brains are not yet developed enough to withstand the power of the algorithms being flung at them on Instagram and YouTube.

The company pointed to recent efforts to provide more safeguards for young people, adding that “we're always working to do better.”

Jose Castaneda, a YouTube spokesperson, said “the allegations in these complaints are simply not true.”

Lawyers for YouTube are to pre­sent open­ing re­marks to the jury on Tuesday.

Snapchat and TikTok were named as de­fen­dants in the suit, but struck set­tle­ment deals be­fore the start of the trial. The terms were not dis­closed.

Lawsuits accusing social media platforms of practices endangering young users, including some brought by school districts, are making their way through federal court in northern California and state courts across the country.

A sep­a­rate law­suit ac­cus­ing Meta of putting profit over the well-be­ing of young users was also get­ting un­der­way in New Mexico on Monday.

...

Read the original on techxplore.com »

9 381 shares, 17 trendiness

TrevorS/voxtral-mini-realtime-rs: Streaming speech recognition running natively and in the browser. A pure Rust implementation of Mistral's Voxtral Mini 4B Realtime model using the Burn ML framework.

Streaming speech recog­ni­tion run­ning na­tively and in the browser. A pure Rust im­ple­men­ta­tion of Mistral’s Voxtral Mini 4B Realtime model us­ing the Burn ML frame­work.

The Q4 GGUF quan­tized path (2.5 GB) runs en­tirely client-side in a browser tab via WASM + WebGPU. Try it live.

# Download model weights (~9 GB)
uv run --with huggingface_hub \
  hf download mistralai/Voxtral-Mini-4B-Realtime-2602 --local-dir models/voxtral

# Transcribe an audio file (f32 SafeTensors path)
cargo run --release --features "wgpu,cli,hub" --bin voxtral-transcribe -- \
  --audio audio.wav --model models/voxtral

# Or use the Q4 quantized path (~2.5 GB)
cargo run --release --features "wgpu,cli,hub" --bin voxtral-transcribe -- \
  --audio audio.wav --gguf models/voxtral-q4.gguf --tokenizer models/voxtral/tekken.json

# Build WASM package
wasm-pack build --target web --no-default-features --features wasm

# Generate self-signed cert (WebGPU requires secure context)
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout /tmp/voxtral-key.pem -out /tmp/voxtral-cert.pem \
  -days 7 -nodes -subj "/CN=localhost"

# Start dev server
bun serve.mjs

Open https://localhost:8443, accept the certificate, and click “Load from Server” to download the model shards. Record from your microphone or upload a WAV file to transcribe.

Hosted demo on HuggingFace Spaces if you want to skip lo­cal setup.

The up­stream mis­tral-com­mon li­brary left-pads au­dio with 32 si­lence to­kens (at 12.5 Hz). After the mel/​conv/​re­shape pipeline, this cov­ers only 16 of the 38 de­coder pre­fix po­si­tions with si­lence — the re­main­ing 22 con­tain ac­tual au­dio. The f32 model han­dles this fine, but Q4_0 quan­ti­za­tion makes the de­coder sen­si­tive to speech con­tent in the pre­fix: au­dio that starts im­me­di­ately with speech (mic record­ings, clips with no lead­ing si­lence) pro­duces all-pad to­kens in­stead of text.

The left padding is in­creased to 76 to­kens, which maps to ex­actly 38 de­coder to­kens of si­lence and cov­ers the full stream­ing pre­fix. See src/​au­dio/​pad.rs for de­tails.
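To make that arithmetic concrete, here is a minimal, self-contained Rust sketch of the padding step. It assumes 16 kHz mono input (a detail not stated above) together with the 12.5 Hz token rate and 76-token pad described here; the names are illustrative, not the actual API of src/audio/pad.rs.

// Minimal sketch of the left-padding fix (illustrative names, not the
// crate's API). Assumes 16 kHz mono f32 samples.
const SAMPLE_RATE: usize = 16_000; // assumed input rate
const TOKEN_RATE_HZ: f64 = 12.5; // audio token rate described above
const PAD_TOKENS: usize = 76; // 76 audio tokens -> exactly 38 decoder positions

fn left_pad_silence(audio: &[f32]) -> Vec<f32> {
    // One token spans SAMPLE_RATE / TOKEN_RATE_HZ samples: 1280 at 16 kHz.
    let samples_per_token = (SAMPLE_RATE as f64 / TOKEN_RATE_HZ) as usize;
    let pad = vec![0.0_f32; PAD_TOKENS * samples_per_token];
    [pad.as_slice(), audio].concat()
}

fn main() {
    let audio = vec![0.1_f32; 16_000]; // one second of stand-in speech
    let padded = left_pad_silence(&audio);
    assert_eq!(padded.len(), audio.len() + 76 * 1280); // 97,280 silence samples
    println!("prepended {} samples", padded.len() - audio.len());
}

At 16 kHz the pad works out to 97,280 samples, which is why audio that starts immediately with speech no longer leaks content into the decoder prefix.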

No sync GPU readback — all tensor reads use into_data_async().await
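As a rough illustration of that rule, the sketch below pulls a tensor off the GPU through Burn's async path. The helper function is hypothetical, and TensorData accessors such as to_vec vary across Burn versions.

use burn::tensor::{backend::Backend, Tensor};

// Illustrative helper: read logits back without a blocking GPU map, so a
// WASM build never stalls the browser's main thread.
async fn read_logits<B: Backend>(logits: Tensor<B, 1>) -> Vec<f32> {
    // into_data_async() resolves once the device queue flushes, where a
    // synchronous into_data() would block until readback completes.
    let data = logits.into_data_async().await;
    data.to_vec::<f32>().expect("logits should be f32")
}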

# Native (default features: wgpu + native-tokenizer)
cargo build --release

# With all features
cargo build --release --features "wgpu,cli,hub"

# WASM
wasm-pack build --target web --no-default-features --features wasm

# Unit + integration tests (requires GPU for full suite)
cargo test --features "wgpu,cli,hub"

# Lint
cargo clippy --features "wgpu,cli,hub" -- -D warnings
cargo clippy --no-default-features --features wasm --target wasm32-unknown-unknown -- -D warnings

# E2E browser test (requires Playwright + model shards)
bunx playwright test tests/e2e_browser.spec.ts

GPU-dependent tests (model layer shapes, Q4 mat­mul, WGSL shader cor­rect­ness) are skipped in CI since GitHub Actions run­ners lack a GPU adapter. These tests run lo­cally on any ma­chine with Vulkan, Metal, or WebGPU sup­port.
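In practice such a skip can be as simple as probing for an adapter at the top of the test and returning early when none exists. A hedged sketch follows; the test name is hypothetical, and it assumes a wgpu version where request_adapter resolves to an Option plus pollster available as a dev-dependency.

#[test]
fn q4_matmul_roundtrip() {
    // Probe for any usable backend (Vulkan, Metal, WebGPU, ...).
    let instance = wgpu::Instance::default();
    let adapter = pollster::block_on(
        instance.request_adapter(&wgpu::RequestAdapterOptions::default()),
    );
    // CI runners without a GPU land here and the test passes vacuously.
    let Some(_adapter) = adapter else {
        eprintln!("no GPU adapter found; skipping GPU-dependent test");
        return;
    };
    // ... real GPU assertions would run here ...
}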

The GGUF file must be split into shards of 512 MB or less to stay un­der the browser’s ArrayBuffer limit:

split -b 512m models/voxtral-q4.gguf models/voxtral-q4-shards/shard-

The dev server and E2E test discover shards automatically from models/voxtral-q4-shards/.

Coming soon: ac­cu­racy (WER) and in­fer­ence speed bench­marks across na­tive and browser tar­gets.

...

Read the original on github.com »

10 381 shares, 23 trendiness

Qwen

...

Read the original on qwen.ai »
