10 interesting stories served every morning and every evening.





Personal Encyclopedias — whoami.wiki

Last year, I visited my grandmother’s house for the first time since the pandemic and came across a cupboard full of loose old photos. I counted 1,351 of them, spanning all the way from my grandparents in their early 20s, to my mom as a baby, to me in middle school, just around the time we got our first smartphone and all photos since then were backed up online.

Everything was all over the place, so I spent some time going through them individually and organizing them into groups. Some of the initial groups were based on the physical attributes of the photographs, like similar aspect ratios or film stock. For example, there was a group of black-and-white 32mm square pictures taken around the time my grandfather was in his mid-20s.

As I finished grouping all of them, I could see flashes of stories in my head, but they were ephemeral and fragile. For instance, there was a group of photos that looked like they were taken during my grandparents’ wedding, but I didn’t know the chronological order they were taken in, because EXIF metadata didn’t exist back then.

So I sat down with my grandmother and asked her to reorder the photos and tell me everything she could remember about her wedding. Her face lit up as she narrated the backstory behind the occasion, going from photo to photo, resurfacing details that had been dormant for decades. I wrote everything down and recorded the names of people in some of the photos, some of whom I recognized as younger versions of my uncles and aunts.

After the “interview”, I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as a reference and drafted a page starting with the classic infobox and the lead paragraph.
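For illustration, the skeleton of such a page might look something like this in wikitext. The template name, fields, and section headings below are placeholders sketched from how royal-wedding pages are typically structured, not the author’s actual markup:

```wikitext
{{Infobox wedding
| date     = <!-- reconstructed from grandmother's account -->
| venue    = <!-- family home or hall -->
| location = <!-- town, state -->
}}
The '''wedding''' took place on ... <!-- lead paragraph summarizing the event -->

== Background ==
<!-- how the families met, the engagement -->

== Ceremony ==
<!-- order of events, reconstructed photo by photo -->

== Guests ==
* [[Uncle R]] <!-- each person links to a stub page -->
```

Blue links to stub pages and to the real Wikipedia then do the connective work the author describes in the following paragraphs.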

I split the rest of the content into sections and filled them with everything I could verify: dates, names, places, who sat where. I scanned all the photos and spent some time figuring out what to place where. Every photo placement came with a follow-up task of writing a descriptive caption too.

Whenever I mentioned a person, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I was able to link to real pages that provided wider context on things like venues, rituals, and the political climate of the time, for instance a legal amendment that was relevant to the wedding ceremony.

In two evenings, I was able to document a full backstory for the photos in a neat article. Those two evenings also made me realize just how powerful encyclopedia software is for recording and preserving media and knowledge that would’ve otherwise been lost over time.

This was so much fun that I spent the following months writing pages to account for all the photos that needed to be stitched together.

I got help from r/genealogy on how to approach recording oral history, and I was given resources on conducting better interviews; shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.

Over time, I managed to write a lot of pages connecting people to different life events. The encyclopedia format made it easy to connect dots I would have never found on my own, like discovering that one of the singers at my grandparents’ wedding was the same nurse who helped deliver me.

After finding all the stories behind the physical photos, I started to work on the digital photos and videos I had stored on Google Photos. The wonderful thing about digital photos is that they come with EXIF metadata that can reveal extra information like date, time, and sometimes geographical coordinates.

This time, without any interviews, I wanted to see if I could use a language model to create a page just by browsing through the photos. As my first experiment, I created a folder with 625 photos from a family trip to Coorg back in 2012.

I pointed Claude Code at the directory and asked it to draft a wiki page by browsing through the images. I hinted at using ImageMagick to create contact sheets so it could browse multiple photos at once.

A few minutes and a couple of tokens later, it had created a compelling draft with a detailed account of everything we did during the trip, by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones I had since forgotten. It even picked up the modes of transportation we used to get between places, just from what it could see.
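The by-time-of-day outline is easy to sketch mechanically: given EXIF timestamps, photos can be bucketed into per-day segments before any captioning happens. A minimal stdlib-only sketch; the filenames and timestamps are invented examples, not the author’s actual data:

```python
from collections import defaultdict
from datetime import datetime

def time_of_day(dt: datetime) -> str:
    """Coarse bucket used to outline a day's narrative."""
    if dt.hour < 12:
        return "morning"
    if dt.hour < 17:
        return "afternoon"
    return "evening"

def group_photos(photos):
    """Group filenames by (date, time-of-day), in chronological order."""
    groups = defaultdict(list)
    for name, dt in sorted(photos.items(), key=lambda kv: kv[1]):
        groups[(dt.date().isoformat(), time_of_day(dt))].append(name)
    return dict(groups)

# Hypothetical EXIF DateTimeOriginal values for three trip photos.
photos = {
    "IMG_0001.jpg": datetime(2012, 6, 2, 9, 14),
    "IMG_0002.jpg": datetime(2012, 6, 2, 18, 40),
    "IMG_0003.jpg": datetime(2012, 6, 3, 8, 5),
}
print(group_photos(photos))
```

Each resulting group maps naturally onto a section or paragraph of the trip page.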

After I clarified who some of the people in the pictures were, it went on to identify them automatically in the captions. I now had a detailed outline, but the page still only contained what the available data could support, so to fill in the gaps I shared a list of anecdotes from my point of view and the model inserted them where the narrative called for them.

The Coorg trip only had photos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 photos and 343 videos with an iPhone 12 Pro, which included geographical coordinates in the EXIF metadata.

On top of that, I exported my location timeline from Google Maps, my Uber trips, my bank transactions, and my Shazam history. I would ask Claude Code to start with the photos and then gradually give it access to the different data exports.

Here are some of the things it did across multiple runs:

It cross-referenced my bank transactions with location data to ascertain which restaurants I went to.

Some of the photos and videos showed me in attendance at a soccer match; however, it was unclear which teams were playing. The model looked up my bank transactions and found a Ticketmaster invoice with information about the teams and the name of the tournament.

It looked up my Uber trips to figure out travel times and exact pickup and drop-off locations.

It used my Shazam tracks to write about the kinds of songs playing at a place, like Cuban songs at a Cuban restaurant.

In a follow-up, I mentioned remembering an evening dinner with a guitarist playing in the background. It filtered my media to evening captures, found a frame in a video with the guitarist, uploaded it, and referenced the moment in the page.
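The cross-referencing above boils down to joining exports on time: match each bank transaction to the nearest location fix within some window. A minimal sketch under that assumption; the merchants, places, and timestamps are invented examples, not the author’s real exports:

```python
from datetime import datetime, timedelta

# Hypothetical rows from two exports: (timestamp, merchant) from a bank
# statement and (timestamp, place) from a location timeline.
transactions = [
    (datetime(2022, 4, 9, 13, 2), "TAQUERIA EL GUERO"),
    (datetime(2022, 4, 9, 20, 45), "TICKETMASTER MX"),
]
timeline = [
    (datetime(2022, 4, 9, 12, 55), "Roma Norte"),
    (datetime(2022, 4, 9, 20, 30), "Estadio Azteca"),
]

def nearest_place(when, timeline, window=timedelta(hours=1)):
    """Match a timestamp to the closest-in-time location fix, if any
    falls within the window; otherwise return None."""
    best = min(timeline, key=lambda p: abs(p[0] - when))
    return best[1] if abs(best[0] - when) <= window else None

matched = [(merchant, nearest_place(t, timeline)) for t, merchant in transactions]
print(matched)
```

The same nearest-in-time join works for Uber pickups, Shazam tracks, or any other timestamped export.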

The MediaWiki architecture worked well with these edits, since for every new data source it would make amendments like a real Wikipedia contributor would. I leaned heavily on features that already existed: talk pages to clarify gaps and consolidate research notes, categories to group pages by theme, revision history to track how a page evolved as new data came in. I didn’t have to build any of this; it was all just there.

What started as me helping the model fill in gaps from my memory gradually inverted. The model was now surfacing things I had completely forgotten, cross-referencing details across data sources in ways I never would have done manually.

So I started pointing Claude Code at other data exports. My Facebook, Instagram, and WhatsApp archives held around 100k messages and a couple thousand voice notes exchanged with close friends over a decade.

The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read as if they were written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.

This is when I realized I was no longer working on a family history project. What I had been building, page by page, was a personal encyclopedia. A structured, browsable, interconnected account of my life, compiled from the data I already had lying around.

I’ve been working on this as whoami.wiki. It uses MediaWiki as its foundation, which turns out to be a great fit because language models already understand Wikipedia conventions deeply from their training data. You bring your data exports, and agents draft the pages for you to review.

A page about your grandmother’s wedding works the same way as a page about a royal wedding. A page about your best friend works the same way as a page about a public figure.

Oh, and it’s genuinely fun! Putting together the encyclopedia felt like the early days of the Facebook timeline: browsing through finished pages, following links between people and events, and stumbling on a detail I had forgotten.

But more than the technology, it’s the stories that stayed with me. Writing about my grandmother’s life surfaced things I’d never known: her years as a single mother, the decisions she had to make, the resilience it took. She was a stronger woman than I ever realized. Going through my friendships, I found moments of endearment I had nearly forgotten, the days friends went the extra mile to be good to me. Seeing those moments laid out on a page made me pick up the phone and call a few of them. The encyclopedia didn’t just organize my data; it made me pay closer attention to the people in my life.

Today I’m releasing whoami.wiki as an open-source project. The encyclopedia is yours: it runs on your machine, your data stays with you, and any model can read it. The project is early and I’m still figuring a lot of it out, but if this sounds interesting, you can get started here and tell me what you think!

...

Read the original on whoami.wiki »


Why So Many Control Rooms Were Seafoam Green

Hello! This is a long, hopefully fun one! Thank you!

When I lived in Nashville, my girlfriends and I would take ourselves on “field trips” across the state. We once went on a tour to spot bald eagles in West Tennessee, and upon arrival, a woman with fluffy hair in the state park bathroom told us she had seen 113 bald eagles the day before. We ended up seeing (counts on one hand)…2.

In the summer of 2017, we went on another field trip, to the National Park Service’s Manhattan Project site in Oak Ridge, TN. In 1942, Oak Ridge was chosen as the site for a plutonium and uranium enrichment plant as part of the Manhattan Project, a top-secret WWII effort to develop the first atomic bomb. Once a small, rural farming community settled in a valley of East Tennessee, the secret settlement, titled “Site X”, grew from 3,000 people in 1942 to 75,000 by 1945 under the swift push to create a nuclear bomb. Alongside the population growth, enormously complex buildings went up.

A Note: The Manhattan Project created the nuclear bombs that caused extreme devastation in Japan and ended the war. There’s a lot of U.S. history that’s awful and indefensible. Today, though, I’d like to talk about the industrial design and color theory of that era.

Our first stop on the tour was the X-10 Graphite Reactor room and its control panel room. The X-10 Graphite Reactor, a 24-foot-square block of graphite, was the world’s second full-scale nuclear reactor. The plutonium produced from uranium there was shipped to Los Alamos, New Mexico, for research into the atomic bomb Fat Man.

What caught my eye as a designer, as with most industrial plants and control rooms of that time, besides the knobs, levers, and buttons, was the use of a very specific seafoam green, seen here on the reactor’s walls and in the control panel room.

Thus began my day-long search, traipsing through the internet for historical information about this specific shade of seafoam green.

Thankfully, this path led me to the work of color theorist Faber Birren.

In the fall of 1919, Faber Birren entered the Art Institute at the University of Chicago, only to drop out in the spring of 1921 to commit himself to self-education in color, as such a program didn’t exist. He spent his days interviewing psychologists and physicists and conducted his own color studies, which were considered unconventional at the time. He painted his bedroom walls red vermillion to test whether it would make him go mad.

In 1933, he moved to New York City and became a self-appointed color consultant, approaching major corporations to sell the idea that appropriate use of color could boost sales. He convinced a Chicago wholesale meat company that the company’s white walls made the meat unappealing. He studied the steaks on various colored backgrounds and determined that a blue-green background would make the beef appear redder. Sales went up, and soon a number of industries hired Faber to bring color theory into their work, including DuPont, the leading chemical and wartime contract company and the Manhattan Project’s building designer.

With the increase in wartime production in the US during WWII, Birren and DuPont created a master color safety code for the industrial plant industry, with the aim of reducing accidents and increasing efficiency within plants. These color codes were approved by the National Safety Council in 1944 and are now internationally recognized, having been mandatory practice since 1948. The color coding went as such:

* Fire Red: All fire protection equipment, emergency stop buttons, and flammable liquids

* Solar Yellow: Signifies caution and physical hazards such as falling

* Safety Green: Indicates safety features such as first-aid equipment, emergency exits, and eyewash stations

* Light Green: Used on walls to reduce visual fatigue

My industrial “seafoam” light green mystery was finally solved, thanks to this article from UChicago Magazine.

Keeping with the “control rooms” theme, I researched the second Manhattan Project plant, the Hanford Site, home to the B Reactor, the first full-scale plutonium production reactor in the world. To my surprise, this site looked like an ode to Birren’s light green and color codes, which makes sense, since his client DuPont was also responsible for the design and construction of Hanford.

In his 1963 book Color for Interiors: Historical and Modern, Birren writes about research undertaken to measure eye fatigue in the industrial workplace and the effects of interior color on human efficiency and well-being. Using the color chart above, he states that the proper use of color hues can reduce accidents, raise standards of machine maintenance, and improve labor morale.

“The importance of color in factories is first to control brightness in the general field of view for an efficient seeing condition. Interiors can then be conditioned for emotional pleasure and interest, using warm, cool, or luminous hues as working conditions suggest. Color should be functional and not merely decorative.” - Faber Birren

Now, looking at the interiors of the Manhattan Project control rooms and plants, the broad use of Light and Medium Green makes sense. One mistake and mass devastation could have occurred within these towns. Birren writes, “Note that most of the standards are soft in tone. This is deliberate and intended to establish a non-distracting environment. Green is a restful and natural-looking color for average factory interiors. Light Green with Medium Green is suggested.”

Let’s put these theories to work with this photo of the B Reactor room at the Hanford Site of the Manhattan Project. In his book, Birren directed the following color applications for small industrial areas:

* ✔️ Medium Gray is proposed for machinery, equipment, and racks

* ✔️ Beige walls may be applied to interiors deprived of natural light

As we can see, his color theory was followed to a T.

Other US Industrial Plants That Used These Color Methods

This color theory research opened a whole can of design worms for me, and I’m excited to dive into them more. For example, Germany developed its own seafoam green, specifically designed for bridges, called Cologne Bridge Green. That’s a post for another day.

And finally, if you enjoy this sort of design: I designed a font called “Parts List” that is meant to evoke the feeling of sitting in an oil-change waiting room, with the smell of burnt coffee. I created this font from old auto parts lists, and it’s a perfectly wobbly typeface that will give you that “Is it a typewriter or handwriting?” feeling. It’s now available on my website.

PS: I have an old friend whose dad still works at the uranium plant in Oak Ridge. I told him I was surprised that almost all of the facilities had been torn down, and he just looked me straight in the face and said, “Who said it’s actually gone?” Noted. ✌️

Thanks for being here!

...

Read the original on bethmathews.substack.com »


We Haven’t Seen the Worst of What Gambling and Prediction Markets Will Do to America

Here are three stories about the state of gambling in America.

In November 2025, two pitchers for the Cleveland Guardians, Emmanuel Clase and Luis Ortiz, were charged in a conspiracy for “rigging pitches.” Frankly, I had never heard of rigged pitches before, but the federal indictment describes a scheme so simple that it’s a miracle this sort of thing doesn’t happen all the time. Three years ago, a few corrupt bettors approached the pitchers with a tantalizing deal: (1) we’ll bet that certain pitches will be balls; (2) you throw those pitches into the dirt; (3) we’ll win the bets and give you some money.

The plan worked. Why wouldn’t it? There are hundreds of pitches thrown in a baseball game, and nobody cares about one bad pitch. The bets were so deviously clever because they offered enormous rewards for bettors and only incidental inconvenience for players and viewers. Before their plan was snuffed out, the fraudsters won $450,000 from pitches that not even the most ardent Cleveland baseball fan would remember the next day. Nobody watching America’s pastime could have guessed they were witnessing a six-figure fraud.

On the morning of February 28th, someone logged onto the prediction market website Polymarket and made an unusually large bet. This bet wasn’t placed on a baseball game. It wasn’t placed on any sport. It was a bet that the United States would bomb Iran on a specific day, despite extremely low odds of such a thing happening.

A few hours later, bombs landed in Iran. This one bet was part of a $553,000 payday for a user named “Magamyman.” And it was just one of dozens of suspicious, perfectly timed wagers, totaling millions of dollars, placed in the hours before a war began.

It is almost impossible to believe that, whoever Magamyman is, he didn’t have inside information from members of the administration. The term war profiteering typically refers to arms dealers who get rich from war. But we now live in a world not only where online bettors stand to profit from war, but also where key decision makers in government have the tantalizing option to make hundreds of thousands of dollars by synchronizing military engagements with their gambling positions.

On March 10, several days into the Iran war, the journalist Emanuel Fabian reported that a warhead launched from Iran had struck a site outside Jerusalem.

Meanwhile, on Polymarket, users had placed bets on the precise location of missile strikes on March 10. Fabian’s article was therefore poised to determine payouts of $14 million in betting. As The Atlantic’s Charlie Warzel reported, bettors encouraged him to rewrite his story to produce the outcome they’d bet on. Others threatened to make his life “miserable.”

A clever dystopian novelist might conceive of a future where poorly paid journalists for news wires are offered six-figure deals to report fictions that cash out bets from online prediction markets. But just how fanciful is that scenario when we have good reason to believe that journalists are already being pressured, bullied, and threatened to publish specific stories that align with multi-thousand-dollar bets about the future?

Put it all together: rigged pitches, rigged war bets, and attempts to rig wartime journalism. Without context, each story would sound like a wacky conspiracy theory. But these are not conspiracy theories. These are things that have happened. These are conspiracies, full stop.

“If you’re not paranoid, you’re not paying attention” has historically been one of those bumper stickers you find on the back of a car with so many other bumper stickers that you worry for the sanity of its occupants. But in this weird new reality where every event on the planet has a price, and behind every price is a shadowy counterparty, the jittery gambler’s paranoia (is what I’m watching happening because somebody more powerful than me bet on it?) is starting to seem, eerily, like a kind of perverse common sense.

What’s remarkable is not just the fact that online sportsbooks have taken over sports, or that betting markets have metastasized in politics and culture, but the speed with which both have taken place.

For most of the last century, the major sports leagues were vehemently against gambling, as the Atlantic staff writer McKay Coppins explained in his recent feature. In 1992, NFL commissioner Paul Tagliabue told Congress that “nothing has done more to despoil the games Americans play and watch than widespread gambling on them.” In 2012, NBA commissioner David Stern loudly threatened New Jersey Gov. Chris Christie for signing a bill to legalize sports betting in the Garden State, reportedly screaming, “we’re going to come after you with everything we’ve got.”

So much for that. Following the 2018 Supreme Court decision Murphy v. NCAA, sports gambling was unleashed into the world, and the leagues haven’t looked back. Last year, the NFL saw $30 billion gambled on football games, and the league itself made half a billion dollars in advertising, licensing, and data deals.

Nine years ago, Americans bet less than $5 billion on sports. Last year, that number rose to at least $160 billion. Big numbers mean nothing to me, so let me put that statistic another way: $5 billion is roughly the amount Americans spend annually at coin-operated laundromats, and $160 billion is nearly what Americans spent last year on domestic airline tickets. So, in a decade, the online sports gambling industry will have risen from the level of coin laundromats to rival the entire airline industry.

And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion. “These predictive markets are the logical endpoint of the online gambling boom,” Coppins told me on my podcast Plain English. “We have taught the entire American population how to gamble with sports. We’ve made it frictionless and easy and put it on everybody’s phone. Why not extend the logic and culture of gambling to other segments of American life?” He continued:

Why not let people gamble on who’s going to win the Oscar, when Taylor Swift’s wedding will be, how many people will be deported from the United States next year, when the Iranian regime will fall, whether a nuclear weapon will be detonated in the year 2026, or whether there will be a famine in Gaza? These are not things that I’m making up. These are all bets that you can make on these predictive markets.

Indeed, why not let people gamble on whether there will be a famine in Gaza? The market logic is cold and simple: more bets means more information, and more informational volume means more efficiency in the marketplace of all future happenings. But from another perspective (let’s call it baseline morality?), the transformation of a famine into a windfall event for prescient bettors seems so grotesque as to require no elaboration. One imagines a young man sending his 1099 documents to a tax accountant the following spring: “right, so here are my dividends, these are the cap gains, and, oh yeah, here’s my $9,000 payout for totally nailing when all those kids would die.”

It is a comforting myth that dystopias happen when obviously bad ideas go too far. Comforting, because it plays to our naive hope that the world can be divided into static categories of good versus evil, and that once we stigmatize all the bad people and ghettoize all the bad ideas, some utopia will spring into view. But I think dystopias more likely happen because seemingly good ideas go too far. “Pleasure is better than pain” is a sensible notion, and a society devoted to its implications created Brave New World. “Order is better than disorder” sounds alright to me, but a society devoted to the most grotesque vision of that principle takes us to 1984. Sports gambling is fun, and prediction markets can forecast future events. But extended without guardrails or limitations, those principles lead to a world where ubiquitous gambling leads to cheating, cheating leads to distrust, and distrust leads ultimately to cynicism or outright disengagement.

“The crisis of authority that has kind of already visited every other American institution in the last couple of decades has arrived at professional sports,” Coppins said. Two-thirds of Americans now believe that professional athletes sometimes change their performance to influence gambling outcomes. “Not to overstate it, but that’s a disaster,” he said. And not just for sports.

There are four reasons to worry about the effect of gambling on sports and culture.

The first is the risk to individual bettors. Every time we create 1,000 new gamblers, we create dozens of new addicts and a handful of new bankruptcies. As I’ve reported, there is evidence that about one in five men under 25 is on the spectrum of having a gambling problem, and calls to the National Problem Gambling Helpline have roughly tripled since sports gambling was broadly legalized in 2018. Research from UCLA and USC found that bankruptcies increased by 10 percent in states that legalized online sports betting between 2018 and 2023. People will sometimes ask me what business I have worrying about online gambling when people should be free to spend their money however they like. My response is that wise rules place guardrails around economic activity with a certain rate of personal harm. For alcohol, we have licensing requirements, minimum drinking ages, boundaries around hours of sale, and rules about public consumption. As alcohol consumption declines among young people, gambling is surging; Gen Z has replaced one (often fun) vice with a meaningful chance of addiction with another (often fun) vice with a meaningful chance of addiction. But whereas we have centuries of experience curtailing excessive drinking with rules and customs, we are currently in a free-for-all era of gambling.

The second risk is to individual players and practitioners. One reason sports commissioners might have wanted to keep gambling out of their business is that gambling turns some people into complete psychopaths, and that’s not a very nice experience for folks on the receiving end of gambling-afflicted psychopaths. In his feature, McKay Coppins reports on the experience of Caroline Garcia, a top-ranked tennis player, who said she received torrents of abusive messages from gamblers both for losing games and for winning games. “This has become a very common experience for athletes at the professional level, even at the college level too,” Coppins said. As the experience of journalist Emanuel Fabian shows, gambling can turn ordinary people into mini mob bosses who go around threatening players and practitioners they believe are costing them thousands of dollars.

The third risk is to the integrity of sports, or any other institution. At the end of 2025, in addition to its indictment of the Cleveland Guardians pitchers, the FBI announced 30 arrests involving gambling schemes in the NBA. This cavalcade of arrests has dramatically reduced trust in sports. Two-thirds of Americans now believe that professional athletes change their performance to influence gambling outcomes. It does not require extraordinary creativity to imagine how this principle could extend to other domains and institutions. If more people start to believe that things only happen in the world as a direct result of shadowy interests in vast betting markets, it’s going to be a permanent open season for conspiracy theories.

The ultimate risk is almost too dark to contemplate in much detail. As the logic and culture of casinos moves from sports to politics, the scandals that have visited baseball and basketball might soon arrive in politics. Is it really so unbelievable that a politician might tip off a friend, or assuage an enemy, by giving them inside information that would allow them to profit on betting markets? Is it really so incredible to believe that a government official would try to align policy with a betting position that stood to earn them, or an allied group, hundreds of thousands of dollars? That is what a “rigged pitch” in politics would look like. It’s not just wagering on a policy outcome that you suspect will happen. It’s changing policy outcomes based on what can be wagered.

Gambling is flourishing because it meets the needs of our moment: a low-trust world, where lonely young people are seeking high-risk opportunities to launch them into wealth and comfort. In such an environment, financialization might seem to be the last form of civic participation that feels honest to a large portion of the country. Voting is compromised, and polling is manipulated, and news is algorithmically curated. But a bet settles. A game ends. There is comfort in that. In an uncertain and illegible world, it doesn’t get much more certain and legible than this: You won, or you lost.

A 2023 Wall Street Journal poll found that Americans are pulling away from practically every value that once defined national life: patriotism, religion, community, family. Young people care less than their parents about marriage, children, or faith. But nature, abhorring a vacuum, is filling the moral void left by retreating institutions with the market. Money has become our final virtue.

I often find myself thinking about the philosopher Alasdair MacIntyre, who argued in the introduction of After Virtue that modernity had destroyed the shared moral language once supplied by traditions and religion, leaving us with only the language of individual preference. Virtue did not disappear, I think, so much as it died and was reincarnated as the market. It is now the market that tells us what things are worth, what events matter, whose predictions are correct, who is winning, who counts. Money has, in a strange way, become the last moral arbiter standing, the final universal language that a pluralistic, distrustful, post-institutional society can use to communicate with itself.

As this moral vocabulary scales across culture, it also corrodes culture. In sports, when you have money on a game, you’re not rooting for a team. You’re rooting for a proposition. The social function of fandom (shared identity, inherited loyalty, something larger than yourself) dissolves into individual risk. In politics, I fear the consequences will be worse. Prediction markets can be useful for those who want to know the future, but their utility recruits participants into a relationship with the news cycle that is adversarial, and even misanthropic. A young man betting on a terrorist attack or a famine is not acting as a mere concerned citizen whose participation improves the efficiency of global prediction markets. He’s just a dude, on his phone, alone in a room, choosing to root for death.

If that doesn’t bother you, I don’t know how to make it bother you. Based on economic and market-efficiency principles alone, this young man’s behavior is defensible. But there is morality outside of markets. There is more to life than the efficiency of information networks. But will we rediscover it any time soon? Don’t bet on it.

...

Read the original on www.derekthompson.org »

4 579 shares, 34 trendiness

EU Parliament Stops Mass Surveillance in Voting Thriller – Paving the Way for Genuine Child Protection!

The con­tro­ver­sial mass sur­veil­lance of pri­vate mes­sages in Europe is com­ing to an end. After the European Parliament had al­ready re­jected the in­dis­crim­i­nate and blan­ket Chat Control by US tech com­pa­nies on 13 March, con­ser­v­a­tive forces at­tempted a de­mo­c­ra­t­i­cally highly ques­tion­able ma­neu­ver yes­ter­day to force a re­peat vote to ex­tend the law any­way.

However, in a true voting thriller today, the Parliament finally pulled the plug on this surveillance mania: With a razor-thin majority of just a single vote, the Parliament first rejected the automated assessment of unknown private photos and chat texts as “suspicious” or “unsuspicious”. In the subsequent final vote, the amended remaining proposal clearly failed to reach a majority.

This means: As of 4 April, the EU dero­ga­tion will ex­pire for good. US cor­po­ra­tions like Meta, Google, and Microsoft must stop the in­dis­crim­i­nate scan­ning of the pri­vate chats of European cit­i­zens. The dig­i­tal pri­vacy of cor­re­spon­dence is re­stored!

This does not create a legal vacuum—quite the opposite. Ending indiscriminate mass scanning clears the path for modern, effective child protection. Fearmongering that investigators will be “flying blind” is unwarranted: Recently, only 36% of suspicious activity reports from US companies originated from the surveillance of private messages anyway. Social media and cloud storage services are becoming increasingly relevant for investigations. Targeted telecommunications surveillance based on concrete suspicion and a judicial warrant remains fully permissible, as does the routine scanning of public posts and hosted files. User reporting also remains fully intact.

Digital free­dom fighter and for­mer Member of the European Parliament Patrick Breyer (Pirate Party) com­mented on to­day’s his­toric vic­tory:

“This historic day brings tears of joy! The EU Parliament has buried Chat Control — a massive, hard-fought victory for the unprecedented resistance of civil society and citizens! The fact that a single vote tipped the scales against the extremely error-prone text and image search shows: Every single vote in Parliament and every call from concerned citizens counted!

We have stopped a bro­ken and il­le­gal sys­tem. Once our in­ves­ti­ga­tors are no longer drown­ing in a flood of false and long-known sus­pi­cion re­ports from the US, re­sources will fi­nally be freed up to hunt down or­ga­nized abuse rings in a tar­geted and covert man­ner. Trying to pro­tect chil­dren with mass sur­veil­lance is like des­per­ately try­ing to mop up the floor while leav­ing the faucet run­ning. We must fi­nally turn off the tap! This means gen­uine child pro­tec­tion through a par­a­digm shift: Providers must tech­ni­cally pre­vent cy­ber­groom­ing from the out­set through se­cure app de­sign. Illegal ma­te­r­ial on the in­ter­net must be proac­tively tracked down and deleted di­rectly at the source. That is what truly pro­tects chil­dren.

But be­ware, we can only cel­e­brate briefly to­day: They will try again. The ne­go­ti­a­tions for a per­ma­nent Chat Control reg­u­la­tion are con­tin­u­ing un­der high pres­sure, and soon the planned age ver­i­fi­ca­tion for mes­sen­gers threat­ens to end anony­mous com­mu­ni­ca­tion on the in­ter­net. The fight for dig­i­tal free­dom must go on!”

The Next Battle: The Return of Chat Control and Mandatory ID

Despite today’s victory, further procedural steps by EU governments cannot be completely ruled out. Most of all, the trilogue negotiations on a permanent child protection regulation (Chat Control 2.0) are continuing under severe time pressure. There, too, EU governments continue to insist on their demand for “voluntary” indiscriminate Chat Control.

Furthermore, the next mas­sive threat to dig­i­tal civil lib­er­ties is al­ready on the agenda: Next up in the on­go­ing tri­logue, law­mak­ers will ne­go­ti­ate whether mes­sen­ger and chat ser­vices, as well as app stores, will be legally obliged to im­ple­ment age ver­i­fi­ca­tion. This would re­quire users to pro­vide ID doc­u­ments or sub­mit to fa­cial scans, ef­fec­tively mak­ing anony­mous com­mu­ni­ca­tion im­pos­si­ble and se­verely en­dan­ger­ing vul­ner­a­ble groups such as whistle­blow­ers and per­se­cuted in­di­vid­u­als.

Background: What ex­actly ex­pires on 3 April

An EU in­terim reg­u­la­tion (2021/1232), set to ex­pire on 3 April, cur­rently per­mits US cor­po­ra­tions such as Meta to carry out in­dis­crim­i­nate mass scan­ning of pri­vate mes­sages on a vol­un­tary ba­sis. Three types of chat con­trol are au­tho­rised: scan­ning for al­ready known im­ages and videos (so-called hash scan­ning, which gen­er­ates over 90% of re­ports); au­to­mated as­sess­ment of pre­vi­ously un­known im­ages and videos; and au­to­mated analy­sis of text con­tent in pri­vate chats.

The AI-based analysis of unknown images and texts is extremely error-prone. But the indiscriminate mass scanning for known material is also highly controversial: beyond the unreliability of the algorithms documented by researchers, these scans rely on opaque foreign databases rather than European criminal law. The algorithms are blind to context and to the absence of criminal intent (e.g. consensual sexting between teenagers). As a result, vast numbers of private but criminally irrelevant chats are exposed.

The fact that today’s decision by the EU Parliament was also technically imperative is borne out by a newly published scientific study. Renowned IT security researchers analyzed the standard algorithm “PhotoDNA”, which is used by tech companies for Chat Control. Their damning verdict: The software is “unreliable”. The researchers proved that criminals can render illegal images invisible to the scanner through minimal alterations (e.g., adding a simple border), while harmless images can be easily manipulated so that innocent citizens are falsely reported to the police.

The Hard Facts: Why Chat Control Has Failed Spectacularly

The EU Commission’s 2025 eval­u­a­tion re­port on Chat Control reads like an ad­mis­sion of com­plete fail­ure:

* Data Giant Monopoly: Roughly 99% of all chat re­ports to po­lice in Europe come from a sin­gle US tech cor­po­ra­tion: Meta. US com­pa­nies acted as a pri­vate aux­il­iary po­lice force—with­out ef­fec­tive European over­sight.

* Massive Police Overload from Junk Data: The German Federal Criminal Police Office (BKA) re­ports that a stag­ger­ing 48% of the dis­closed chats are crim­i­nally ir­rel­e­vant. This flood of junk data ties up re­sources that are ur­gently needed for tar­geted in­ves­ti­ga­tions.

* Criminalization of Minors: According to crime sta­tis­tics, around 40% of in­ves­ti­ga­tions in Germany tar­get teenagers who thought­lessly share im­ages (e.g., con­sen­sual sex­ting).

* An Obsolete Model Due to Encryption: Because providers are in­creas­ingly tran­si­tion­ing to end-to-end en­cryp­tion for pri­vate mes­sages, the num­ber of chats re­ported to the po­lice has al­ready dropped by 50% since 2022.

* Failure in Child Protection: According to the Commission’s re­port, there is no mea­sur­able cor­re­la­tion be­tween the mass sur­veil­lance of pri­vate mes­sages and ac­tual con­vic­tions.

During the leg­isla­tive process, for­eign-funded lobby groups and au­thor­i­ties tried to pres­sure the Parliament through fear­mon­ger­ing. A com­par­i­son of their claims with re­al­ity:

Disinformation 1: “The European Parliament is to blame for the collapse of the trilogue negotiations.”

(Claimed by the lobby al­liance ECLAG and US tech com­pa­nies)

* Fact: It was the EU Council of Ministers that deliberately let the negotiations fail. Leaked Council cables reveal that EU member states showed no willingness to compromise, fearing that any concession could set a precedent for the permanent Chat Control 2.0 regulation. Parliament’s lead negotiator, Birgit Sippel, sharply criticized the Council: “With their lack of flexibility, Member States have deliberately accepted that the interim regulation will expire.”

Disinformation 2: “Without indiscriminate Chat Control, law enforcement will be flying blind.”

(Claimed by au­thor­i­ties in­clud­ing BKA President Holger Münch)

* Fact: Targeted surveillance remains allowed. The real problem for authorities is their own refusal to remove material from the internet. The Federation of German Criminal Investigators (BDK) warns that this mass surveillance produces “a flood of tips… often without any actual investigative lead.” Meanwhile, the BKA systematically refuses to proactively have abuse material removed from the internet, as investigative reporting by ARD has revealed.

Disinformation 4: “The demand comes primarily from victims.”

(Implied by the ECLAG cam­paign)

* Fact: Actual survivors are taking legal action against the surveillance. Survivor Alexander Hanff writes: “Taking away our right to privacy means further harming us.” To preserve safe spaces for victims, a survivor from Bavaria is currently suing Meta. Who truly benefits was exposed in an investigative report by Balkan Insight: The US organization Thorn, which sells scanning software, invests massively in EU lobbying, while ECLAG members are funded by tech corporations.

The European Parliament ad­vo­cates a gen­uine par­a­digm shift for fu­ture leg­is­la­tion, sup­ported by civil so­ci­ety, sur­vivor net­works, and IT se­cu­rity ex­perts:

* Strict default settings and protective mechanisms (Security by Design) to make cybergrooming technically harder from the outset.

* Proactive search by a new EU Child Protection Center and immediate takedown obligations for providers and law enforcement on the open internet and darknet — illegal material must be destroyed directly at the source. There must be an end to law enforcement agencies declaring themselves “not competent” for the removal of abuse material.

During the leg­isla­tive process, the mas­sive, ques­tion­able lob­by­ing ef­forts were ex­posed: The push for Chat Control is heav­ily dri­ven by for­eign-funded lobby groups and tech ven­dors. The US or­ga­ni­za­tion Thorn, which sells the ex­act type of scan­ning soft­ware in ques­tion, spends hun­dreds of thou­sands of eu­ros lob­by­ing in Brussels. The tech in­dus­try of­fi­cially lob­bied side-by-side with cer­tain or­ga­ni­za­tions for a law that does not pro­tect chil­dren, but rather se­cures their own prof­its and data ac­cess.

“Right up to the very end, the US tech industry and foreign- or government-funded lobby groups tried to panic Europe. But flooding our police with false positives and duplicates from mass surveillance doesn’t save a single child from abuse. Today’s definitive failure of Chat Control is a clear stop sign to this surveillance mania. Negotiators cannot ignore this verdict in the ongoing trilogue negotiations for a permanent regulation. Indiscriminate mass scanning of our private messages must finally give way to truly effective and targeted child protection that respects fundamental rights.”

...

Read the original on www.patrick-breyer.de »

5 552 shares, 24 trendiness

Tuta (@tuta.com)

You did it! 🥳

European Parliament just de­cided that Chat Control 1.0 must stop.

This means on April 6, 2026, Gmail, LinkedIn, Microsoft and other Big Techs must stop scan­ning your pri­vate mes­sages in the EU. #PrivacyWins 💪


...

Read the original on bsky.app »

6 548 shares, 34 trendiness

Moving from GitHub to Codeberg, for lazy people

I’ve just started to mi­grate some repos­i­to­ries from GitHub to Codeberg. I’ve wanted to do this for a long time but have stalled on it be­cause I per­ceived Codeberg as not be­ing ready and the mi­gra­tion process as be­ing a lot of (boring) work.

It turns out that is only partially true and wildly depends on your project. If you’re in a similar position as me, hopefully these notes serve as motivation and starting point. These solutions are not what I might stick with long-term, but are aimed at what I think is easiest to get started with when migrating from GitHub.

First, there’s the migration of issues, pull requests and releases along with their artifacts. This is actually the easiest part since Codeberg offers repository import from GitHub that just works, and all these features have a UI nearly identical to GitHub’s. The import preserves issue numbers, labels, and authorship. The user experience is very much a step above the extremely awkward hacks that people use to import from other issue trackers into GitHub.

If you’re us­ing GitHub Pages you can use code­berg.page. There’s a warn­ing about it not of­fer­ing any up­time SLO, but I haven’t no­ticed any down­time at all, and for now it’s fine. You push your HTML to a branch, very much like the old GitHub Pages. Update

2025-09-22: Alternatively you may try https://​grebedoc.dev or https://​www.sta­ti­chost.eu/

The by far nastiest part is CI. GitHub has done an excellent job luring people in with free macOS runners and infinite capacity for public repos. You will have to give up on both of those things. I recommend looking into cross-compilation for your programming language, and to self-host a runner for Forgejo Actions, to solve those problems respectively.

Why Forgejo Actions and not Woodpecker CI? Isn’t Woodpecker on Codeberg more stable? Yes, absolutely; in fact, the documentation for Forgejo Actions on Codeberg is out of date right now. But Forgejo Actions will just feel way more familiar coming from GitHub Actions. The UI and YAML syntax are almost identical, and the existing actions ecosystem mostly works as-is on Codeberg. For example, where my GitHub Actions workflow would say uses: dtolnay/rust-toolchain, my Forgejo Actions workflow would just change to uses: https://github.com/dtolnay/rust-toolchain.

If you ab­solutely need ma­cOS run­ners I’d rec­om­mend stick­ing with GitHub Actions on the GitHub repos­i­tory, mir­ror­ing all com­mits from Codeberg to GitHub and us­ing Forgejo Actions to poll the GitHub API and sync the CI sta­tus back to Codeberg. I haven’t tried this one yet, but I have tried some other CI providers of­fer­ing ma­cOS builds and I don’t think they’re eas­ier or cleaner to in­te­grate into Codeberg than GitHub Actions.

Finally, what to do with the old repo on GitHub? I’ve just up­dated the README and archived the repo.

You could tell Codeberg to push new commits to GitHub, but this still allows users to file PRs and comment on issues and commits. Some folks have dealt with this by disabling issues on the GitHub repo, but that is a really destructive action as it will 404 all issues, and pull requests cannot be disabled. Some repos like libvirt/libvirt have written a GitHub Action that automatically closes all pull requests.

...

Read the original on unterwaditzer.net »

7 547 shares, 26 trendiness

Shell Tricks That Actually Make Life Easier (And Save Your Sanity)

There is a dis­tinct, vis­ceral kind of pain in watch­ing an oth­er­wise bril­liant en­gi­neer hold down the Backspace key for six con­tin­u­ous sec­onds to fix a typo at the be­gin­ning of a line.

We’ve all been there. We learn ls, cd, and grep, and then we sort of… stop. The ter­mi­nal be­comes a place we live in-but we rarely bother to arrange the fur­ni­ture. We ac­cept that cer­tain tasks take forty key­strokes, com­pletely un­aware that the shell au­thors solved our ex­act frus­tra­tion some­time in 1989.

Here are some tricks that aren’t ex­actly se­cret, but aren’t al­ways taught ei­ther. To keep the peace in our ex­tended Unix fam­ily, I’ve split these into two camps: the uni­ver­sal tricks that work on al­most any POSIX-ish shell (like sh on FreeBSD or ksh on OpenBSD), and the qual­ity-of-life ad­di­tions spe­cific to in­ter­ac­tive shells like Bash or Zsh.

These tricks rely on stan­dard ter­mi­nal line dis­ci­plines, generic Bourne shell be­hav­iors, or POSIX fea­tures. If you SSH into an em­bed­ded router from 2009, a fresh OpenBSD box, or a min­i­mal Alpine con­tainer, these will still have your back.

Why shuf­fle char­ac­ter-by-char­ac­ter when you can tele­port? These are stan­dard Emacs-style line-edit­ing bind­ings (via Readline or sim­i­lar), en­abled by de­fault in most mod­ern shells.

CTRL + W: You’re typing /var/log/nginx/ but you ac­tu­ally meant /var/log/apache2/. You have two choices: hold down Backspace un­til your soul leaves your body, or hit CTRL + W to in­stantly delete the word be­fore the cur­sor. Once you get used to this, hold­ing Backspace feels like dig­ging a hole with a spoon.

CTRL + U and CTRL + K: You typed out a beau­ti­fully crafted, 80-char­ac­ter rsync com­mand, but sud­denly re­al­ize you need to check if the des­ti­na­tion di­rec­tory ac­tu­ally ex­ists first. You don’t want to delete it, but you don’t want to run it. Hit CTRL + U to cut every­thing from the cur­sor to the be­gin­ning of the line. Check your di­rec­tory, and then hit CTRL + Y to paste (“yank”) your mas­ter­piece right back into the prompt. (CTRL + K does the same thing, but cuts from the cur­sor to the end of the line.)

CTRL + A and CTRL + E: Jump in­stantly to the be­gin­ning (A) or end (E) of the line. Stop reach­ing for the Home and End keys; they are miles away from the home row any­way.

ALT + B and ALT + F: Move back­ward (B) or for­ward (F) one en­tire word at a time. It’s the ar­row key’s much faster, much cooler sib­ling. (Mac users: you usu­ally have to tweak your ter­mi­nal set­tings to use Option as Meta for this to work).

re­set (or stty sane): While strictly more of a ter­mi­nal re­cov­ery tip than an in­ter­ac­tive shell trick, it be­longs here. We’ve all done it: you meant to cat a text file, but you ac­ci­den­tally cat a com­piled bi­nary or a com­pressed tar­ball. Suddenly, your ter­mi­nal is spit­ting out an­cient runes and Wingdings, and your prompt is com­pletely il­leg­i­ble. Instead of clos­ing the ter­mi­nal win­dow in shame, type re­set (even if you can’t see the let­ters you’re typ­ing) and hit en­ter. Your ter­mi­nal will heal it­self.

CTRL + C: Cancel the cur­rent com­mand im­me­di­ately. Your emer­gency exit when a com­mand hangs, or you re­al­ize you’re tail­ing the wrong log file.

CTRL + D: Sends an EOF (End of File) sig­nal. If you’re typ­ing in­put to a com­mand that ex­pects it, this closes the stream. But if the com­mand line is empty, it logs you out of the shell com­pletely-be care­ful where you press it.

CTRL + L: Your ter­mi­nal is clut­tered with stack traces, com­piler spaghetti, and pure dig­i­tal noise. Running the clear com­mand works, but what if you’re al­ready halfway through typ­ing a new com­mand? CTRL + L wipes the slate clean, throw­ing your cur­rent prompt right up to the top with­out in­ter­rupt­ing your train of thought.

cd -: The clas­sic chan­nel-flip­per. You’re deep down in /usr/local/etc/postfix and you need to check some­thing in /var/log. You type cd /var/log, look at the logs, and now you want to go back. Instead of typ­ing that long path again, type cd -. It switches you to your pre­vi­ous di­rec­tory. Run it again, and you’re back in logs. Perfect for tog­gling back and forth.

pushd and popd: If cd - is a tog­gle switch, pushd is a stack. Need to jug­gle mul­ti­ple di­rec­to­ries? pushd /etc changes to /etc but saves your pre­vi­ous di­rec­tory to a hid­den stack. When you’re done, type popd to pop it off the stack and re­turn ex­actly where you left off.
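A quick sketch of the stack in action (the directories here are arbitrary examples):

```shell
#!/usr/bin/env bash
cd /tmp
pushd /etc > /dev/null   # jump to /etc; /tmp is saved on the directory stack
pwd                      # now in /etc
popd > /dev/null         # pop the stack: straight back to /tmp
pwd
```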

> file.txt: This empties a file completely without deleting and recreating it. Why does this matter? It preserves file permissions, ownership, and doesn’t interrupt processes that already have the file open. It’s much cleaner than echo "" > file.txt (which actually leaves a newline character) or rm file && touch file.
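A minimal demonstration (the file name is made up):

```shell
#!/bin/sh
printf 'old contents\n' > demo.log
chmod 640 demo.log   # distinctive permissions, to prove they survive
> demo.log           # truncate in place: same inode, same owner, same mode
ls -l demo.log       # size is now 0, permissions still -rw-r-----
```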

$_: In most shells, $_ expands to the last argument of the previous command, which is especially useful interactively or in simple scripts when you need to operate on the same long path twice:

mkdir -p ~/projects/new-thing && cd $_

No more re-typing paths or declaring temporary variables to enter a directory you created a second ago.

If you are writ­ing shell scripts, put these at the top im­me­di­ately af­ter your she­bang. It will save you from de­ploy­ing chaos to pro­duc­tion.

* set -e: Exit on er­ror. Very use­ful, but no­to­ri­ously weird with edge cases (especially in­side con­di­tion­als like if state­ments, while loops, and pipelines). Don’t rely on it blindly as it can cre­ate false con­fi­dence. (Pro-tip: consider set -euo pipefail for a more ro­bust safety net, but learn its caveats first.)

* set -u: Treats ref­er­enc­ing an un­set vari­able as an er­ror. This pro­tects you from cat­a­strophic dis­as­ters like rm -rf /usr/local/${MY_TYPO_VAR}/* ac­ci­den­tally ex­pand­ing into rm -rf /usr/local/*.
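To see set -u earn its keep, here is a tiny sketch (MY_TYPO_VAR is a stand-in name; the inner script is run in a subshell so the failure is contained):

```shell
#!/bin/sh
# With set -u, referencing the unset MY_TYPO_VAR aborts the snippet
# before anything dangerous can expand to /usr/local/*.
sh -c 'set -u; echo "cleaning /usr/local/${MY_TYPO_VAR}/"' || echo "caught: script aborted"
```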

If you’re on a Linux box or us­ing a mod­ern in­ter­ac­tive shell, these are the tools that make the CLI feel less like a rusty bi­cy­cle and more like some­thing that ac­tu­ally re­sponds when you steer.

CTRL + R: Reverse in­cre­men­tal search. Stop press­ing the up ar­row forty times to find that one awk com­mand you used last Tuesday. Press CTRL + R, start typ­ing a key­word from the com­mand, and it mag­i­cally pulls it from your his­tory. Press CTRL + R again to cy­cle back­wards through matches.

!!: This expands to the entirety of your previous command. Its most famous use case is the “Permission denied” walk of shame. You confidently type systemctl restart nginx, hit enter, and the system laughs at your lack of privileges. Instead of retyping it, run:

sudo !!

It’s your way of telling the shell, “Do what I said, but this time with authority.”

CTRL + X, then CTRL + E: You start typ­ing a quick one-liner. Then you add a pipe. Then an awk state­ment. Soon, you’re edit­ing a four-line mon­ster in­side your prompt and nav­i­ga­tion is get­ting dif­fi­cult. Hit CTRL + X fol­lowed by CTRL + E (in Bash; in Zsh, this needs con­fig­ur­ing). This drops your cur­rent com­mand into your de­fault text ed­i­tor (like Vim or Nano). You can edit it with all the power of a proper ed­i­tor, save, and exit. The shell then ex­e­cutes the com­mand in­stantly.

fc: The highly portable, tra­di­tional sib­ling to CTRL+X CTRL+E. Running fc opens your pre­vi­ous com­mand in your $EDITOR. It works across most shells and is a fan­tas­tic hid­den gem for fix­ing com­plex, multi-line com­mands that went wrong.

ESC + . (or ALT + .): Inserts the last ar­gu­ment of the pre­vi­ous com­mand right at your cur­sor. Press it re­peat­edly to cy­cle fur­ther back through your his­tory, drop­ping the ex­act file­name or pa­ra­me­ter you need right into your cur­rent com­mand.

!$: The non-in­ter­ac­tive sib­ling of ESC + .. Unlike ESC + . (which in­serts the text live at your cur­sor for you to re­view or edit), !$ ex­pands blindly at the ex­act mo­ment you hit en­ter.

(Pro-Tip: For script­ing or stan­dard sh, use the $_ vari­able men­tioned ear­lier in­stead!)

Brace ex­pan­sion is pure magic for avoid­ing repet­i­tive typ­ing, es­pe­cially when do­ing quick back­ups or re­names.

The Backup Expansion: Need to edit a critical config file and want to make a quick backup first? cp config.txt{,.bak} expands to cp config.txt config.txt.bak. The same trick renames files in one stroke:

mv filename.{txt,md}

This expands to mv filename.txt filename.md. Fast, elegant, and makes you look like a wizard.

Need mul­ti­ple di­rec­to­ries? mkdir -p pro­ject/{​src,tests,docs} cre­ates all three at once.
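For instance (this works in Bash and Zsh; the project layout is just an example):

```shell
mkdir -p project/{src,tests,docs}   # brace expansion: three directories in one go
ls project
```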

<(command): Treats the output of a command as if it were a file. Say you want to diff the sorted versions of two files. Traditionally, you’d sort them into temporary files, diff those, and clean up. Process substitution skips the middleman:

diff <(sort file1.txt) <(sort file2.txt)

** (Globstar): find is a great com­mand, but some­times it feels like overkill. If you run shopt -s glob­star in Bash (it’s en­abled by de­fault in Zsh), ** matches files re­cur­sively in all sub­di­rec­to­ries. Need to find all JavaScript files in your cur­rent di­rec­tory and every­thing be­neath it?
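A sketch (assuming Bash 4+; the file names are made up, and the setup lines just build a demo tree):

```shell
#!/usr/bin/env bash
shopt -s globstar            # enable ** in Bash (it's on by default in Zsh)
mkdir -p src/util            # demo tree
touch app.js src/util/helper.js
ls **/*.js                   # matches app.js and src/util/helper.js
```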

CTRL + Z, then bg, then disown: You started a massive, hour-long database import task, but you forgot to run it in tmux or screen. It’s tying up your terminal, and if your SSH connection drops, the process dies. Panic sets in. Instead, hit CTRL + Z to suspend the process.

Type bg to let it re­sume run­ning in the back­ground. Your prompt is free!

Type dis­own to de­tach it from your shell en­tirely. You can safely close your lap­top, grab a cof­fee, and the process will sur­vive.

com­mand |& tee file.log: Standard pipes (|) only catch stan­dard out­put (std­out). If a script throws an er­ror (stderr), it skips the pipe and bleeds di­rectly onto your screen, miss­ing the log file. |& pipes both std­out and stderr (it’s a help­ful short­hand for 2>&1 |).

Throw in tee, and you get to watch the out­put on your screen while si­mul­ta­ne­ously sav­ing it to a log file. It’s the equiv­a­lent of watch­ing live TV while record­ing it to your DVR.
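A small sketch (the noisy function just simulates a command that writes to both streams; |& needs Bash 4+):

```shell
#!/usr/bin/env bash
noisy() {                           # stand-in for a chatty build command
  echo "building..."                # goes to stdout
  echo "warning: low disk" >&2      # goes to stderr
}
noisy |& tee build.log              # watch live AND capture both streams in build.log
```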

The shell is a tool­box, not an ob­sta­cle course. You don’t need to mem­o­rize all of these to­day. Pick just one trick, force it into your daily habits for a week, and then pick an­other. Stop let­ting the ter­mi­nal push you around, and start re­ar­rang­ing the fur­ni­ture. It’s your house now.

...

Read the original on blog.hofstede.it »

8 321 shares, 23 trendiness

My minute-by-minute response to the LiteLLM malware attack

I’m the en­gi­neer who got PyPI to quar­an­tine litellm. Here’s the full record­ing of how I found it.

Developers not trained in se­cu­rity re­search can now sound the alarm at a much faster rate than pre­vi­ously. AI tool­ing has sped up not just the cre­ation of mal­ware but also the de­tec­tion.

This is the Claude Code con­ver­sa­tion tran­script from dis­cov­er­ing and re­spond­ing to the litellm 1.82.8 sup­ply chain at­tack on March 24, 2026. The ses­sion be­gan as a rou­tine in­ves­ti­ga­tion into a frozen lap­top and es­ca­lated into a full mal­ware analy­sis and pub­lic dis­clo­sure, all within a sin­gle con­ver­sa­tion. See our dis­clo­sure post for the full writeup.

You no longer need to know the specifics of macOS shutdown logs, how to parse the cache systems of various package managers, the specific docker commands to pull a fresh container with the malware downloaded, or even whose email address to contact. You just need to be calmly walked through the human aspects of the process, and leave the AI to handle the rest.

Should fron­tier labs be train­ing their mod­els to be more aware of these at­tacks? In this case it took some healthy skep­ti­cism to get Claude to look for mal­ice, given how un­likely be­ing pa­tient zero for an un­doc­u­mented at­tack is.

Shout out to claude-code-tran­scripts for help dis­play­ing this.

All times are UTC. Redactions marked as […] pro­tect in­ter­nal in­fra­struc­ture de­tails.

...

Read the original on futuresearch.ai »

9 302 shares, 15 trendiness

Swift 6.3 Released

Swift is de­signed to be the lan­guage you reach for at every layer of the soft­ware stack. Whether you’re build­ing em­bed­ded firmware, in­ter­net-scale ser­vices, or full-fea­tured mo­bile apps, Swift de­liv­ers strong safety guar­an­tees, per­for­mance con­trol when you need it, and ex­pres­sive lan­guage fea­tures and APIs.

Swift 6.3 makes these ben­e­fits more ac­ces­si­ble across the stack. This re­lease ex­pands Swift into new do­mains and im­proves de­vel­oper er­gonom­ics across the board, fea­tur­ing:

* Improvements for us­ing Swift in em­bed­ded en­vi­ron­ments

* An of­fi­cial Swift SDK for Android

Read on for an overview of the changes and next steps to get started.

Swift 6.3 in­tro­duces the @c at­tribute, which lets you ex­pose Swift func­tions and enums to C code in your pro­ject. Annotating a func­tion or enum with @c prompts Swift to in­clude a cor­re­spond­ing de­c­la­ra­tion in the gen­er­ated C header that you can in­clude in your C/C++ files:

You can pro­vide a cus­tom name to use for the gen­er­ated C de­c­la­ra­tion:

@c also works to­gether with @implementation. This lets you pro­vide a Swift im­ple­men­ta­tion for a func­tion de­clared in a C header:

When us­ing @c to­gether with @implementation, Swift will val­i­date that the Swift func­tion matches a pre-ex­ist­ing de­c­la­ra­tion in a C header, rather than in­clud­ing a C de­c­la­ra­tion in the gen­er­ated header.

Swift 6.3 in­tro­duces mod­ule se­lec­tors to spec­ify which im­ported mod­ule Swift should look in for an API used in your code. If you im­port more than one mod­ule that pro­vides API with the same name, mod­ule se­lec­tors let you dis­am­biguate which API to use:

Swift 6.3 also en­ables us­ing the Swift mod­ule name to ac­cess con­cur­rency and String pro­cess­ing li­brary APIs:

Swift 6.3 introduces new attributes that give library authors finer-grained control over compiler optimizations for clients of their APIs:

* Function specialization: Provide pre-specialized implementations of a generic API for common concrete types using @specialize.

* Inlining: Guarantee inlining — a compiler optimization that expands the body of a function at the call-site — for direct calls to a function with @inline(always). Use this attribute only when you’ve determined that the benefits of inlining outweigh any increase in code size.

* Function implementation visibility: Expose the implementation of a function in an ABI-stable library to clients with @export(implementation). This allows the function to participate in more compiler optimizations.
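A library author might apply the first two of these roughly as follows; the @specialize argument spelling is an assumption, not taken from the release notes:

```swift
// Sketch only — the argument spelling for @specialize may differ in the
// shipping compiler.

// Ship pre-specialized copies of a generic function for common element types.
@specialize(where T == Int)
@specialize(where T == Double)
func total<T: Numeric>(_ values: [T]) -> T {
    values.reduce(0, +)
}

// Guarantee inlining at direct call sites; use sparingly, since it trades
// code size for call overhead.
@inline(always)
func square(_ x: Int) -> Int { x * x }
```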

For a full list of language evolution proposals in Swift 6.3, see the Swift Evolution dashboard.

Swift 6.3 includes a preview of Swift Build integrated into Swift Package Manager. This preview brings a unified build engine across all supported platforms for a more consistent cross-platform development experience. To learn more, check out Preview the Swift Build System Integration. We encourage you to try it in your own packages and report any issues you encounter.

Swift 6.3 also brings the following Swift Package Manager improvements:

* Prebuilt Swift Syntax for shared macro libraries: Factor out shared macro implementation code into a library with support for swift-syntax prebuilt binaries in libraries that are only used by macros.

* Flexible inherited documentation: Control whether inherited documentation is included in command plugins that generate symbol graphs.

* Discoverable package traits: Discover the traits supported by a package using the new swift package show-traits command.

For more information on changes to Swift Package Manager, see the SwiftPM 6.3 Release Notes.

Swift Testing has a number of improvements, including warning issues, test cancellation, and image attachments.

* Warning issues: Specify the severity of a test issue using the new severity parameter to Issue.record. You can record an issue as a warning using Issue.record(“Something suspicious happened”, severity: .warning). This is reflected in the test’s results, but doesn’t mark the test as a failure.

* Test cancellation: Cancel a test (and its task hierarchy) after it starts using try Test.cancel(). This is helpful for skipping individual arguments of a parameterized test, or responding to conditions during a test that indicate it shouldn’t proceed.

* Image attachments: Attach common image types during a test on Apple and Windows platforms. This is exposed via several new cross-import overlay modules with UI frameworks like UIKit.
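The warning and cancellation APIs above might combine in a test like this; the test body and thresholds are invented for illustration, and this requires the Swift Testing framework:

```swift
import Testing

@Test func responseTime() async throws {
    let milliseconds = 120  // hypothetical measurement

    // Record a non-failing warning; it shows up in the test's results,
    // but the test itself doesn't fail.
    if milliseconds > 100 {
        Issue.record("Something suspicious happened", severity: .warning)
    }

    // Cancel this test (and its task hierarchy) entirely if running it
    // no longer makes sense.
    if milliseconds > 10_000 {
        try Test.cancel()
    }

    #expect(milliseconds < 1_000)
}
```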

The Swift Testing evolution proposals included in Swift 6.3 are ST-0012, ST-0013, ST-0014, ST-0015, ST-0016, ST-0017, and ST-0020.

Swift 6.3 adds three new experimental capabilities to DocC:

* Markdown output: Generate Markdown versions of your documentation pages alongside the standard rendered JSON covering symbols, articles, and tutorials. Try it out by passing --enable-experimental-markdown-output to docc convert.

* Per-page static HTML content: Embed a lightweight HTML summary of each page — including title, description, availability, declarations, and discussion — directly into the index.html file within a tag. This improves discoverability by search engines and accessibility for screen readers without requiring JavaScript. Try it out by passing --transform-for-static-hosting --experimental-transform-for-static-hosting-with-content to docc convert.

* Code block annotations: Unlock new formatting annotations for code blocks, including nocopy for disabling copy-to-clipboard, highlight to highlight specific lines by number, showLineNumbers to display line numbers, and wrap to wrap long lines by column width. Specify these options in a comma-separated list after the language name on the opening fence line:

DocC validates line indices and warns about unrecognized options. Try out the new code block annotations with --enable-experimental-code-block-annotations.
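An opening fence using these options might look like the following; the exact argument syntax shown here for highlight and wrap is a guess, since the original example was lost in extraction:

````markdown
```swift, nocopy, highlight=[2], showLineNumbers, wrap=80
let greeting = "Hello, DocC!"
print(greeting)
```
````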

Embedded Swift has a wide range of improvements in Swift 6.3, from enhanced C interoperability and better debugging support to meaningful steps toward a complete linkage model. For a detailed look at what’s new in embedded Swift, see Embedded Swift Improvements coming in Swift 6.3.

Swift 6.3 includes the first official release of the Swift SDK for Android. With this SDK, you can start developing native Android programs in Swift, update your Swift packages to support building for Android, and use Swift Java and Swift Java JNI Core to integrate Swift code into existing Android applications written in Kotlin/Java. This is a significant milestone that opens new opportunities for cross-platform development in Swift.

To learn more and try out Swift for Android development in your own projects, see Getting Started with the Swift SDK for Android.

Swift 6.3 reflects the contributions of many people across the Swift community — through code, proposals, forum discussions, and feedback from real-world experience. A special thank you to the Android Workgroup, whose months of effort — building on many years of grassroots community work — brought the Swift SDK for Android from nightly previews to an official release in Swift 6.3.

If you’d like to get involved in what comes next, the Swift Forums are a great place to start.

Try out Swift 6.3 today! You can find instructions for installing a Swift 6.3 toolchain on the Install Swift page.

...

Read the original on swift.org »

10 298 shares, 18 trendiness

Landmark L.A. jury verdict finds Instagram, YouTube were designed to addict kids

After a grueling seven weeks of court proceedings and more than 40 hours of tense deliberations across nine days in one of the country’s most closely watched civil trials, jurors handed down a landmark decision in Los Angeles County Superior Court on Wednesday, finding Instagram and YouTube responsible for the suffering of a Chico, Calif., woman who charged the platforms were built to addict young users.

Kaley G. M., the 20-year-old plaintiff, who testified in February, arrived in court just before 10 a.m. She remained stoic as the verdict, an award of $3 million and a decision warranting additional punitive damages, was read out. A companion fought back tears, her chin quivering. Several observers wept silently despite Judge Carolyn B. Kuhl’s repeated warning not to respond.

“We need to have no reaction to the jury’s verdict — no crying out, no reactions, no disturbance,” Kuhl warned. “If there is, we will have to have you removed from the courtroom, and we sure don’t want to have to do that.”

Less than two hours after it delivered its initial verdict, the jury returned to award $2.1 million in punitive damages against Meta and $900,000 against Google, bringing the total judgment against the companies to $6 million combined.

Attorneys for Snapchat and TikTok also appeared in court Wednesday morning to hear the decision. The two platforms settled with Kaley out of court for undisclosed sums before the trial.

“We respectfully disagree with the verdict and are evaluating our legal options,” a spokesperson for Instagram’s parent company, Meta, said.

The verdict arrived less than 24 hours after a New Mexico jury found Meta liable for $375 million in damages related to state Atty. Gen. Raúl Torrez’s claim it turned Instagram into a “breeding ground” for child predators — a decision the platform has vowed to appeal.

The Los Angeles jury took much longer to deliberate. On Friday, jurors preempted their pizza lunch break to ask Kuhl whether all of them should weigh in on damages, or only those who’d agreed on liability. On Monday, they told Kuhl they were struggling to agree about one of the defendants.

Kuhl told the jury to keep trying.

Kaley said she first got hooked on YouTube and Instagram in grade school. Jurors were charged with determining whether the companies acted negligently in designing their products and failed to warn her of the dangers.

Their verdict will echo through thousands of other pending lawsuits, reshaping the legal landscape for some of the world’s most powerful companies. Experts say the payout will likely set the bar for future awards.

It comes on the heels of a Delaware court decision clearing Meta’s insurers of responsibility for damages incurred from several thousand lawsuits regarding the “harm its platforms allegedly cause children” — a ruling that could leave it and other tech titans on the hook for untold future millions.

Until this trial, which began in late January, no suit seeking to hold tech titans responsible for harms to children had ever reached a jury. Many more are now set to follow.

Kaley’s test case was chosen from among scores of suits currently consolidated in California state court. Hundreds more are moving together through the federal system, where the first trial is set for June in San Francisco.

Collectively, the suits seek to prove that harm flowed not from user content but from the design and operation of the platforms themselves.

That’s a critical legal distinction, experts say. Social media companies have so far been protected by a powerful 1996 law called Section 230, which has shielded the apps from responsibility for what happens to children who use them.

Lawyers for Meta and Google argued Kaley’s struggles were the result of her fractious home life and fallout from the COVID pandemic, not social media.

“I don’t think it should have ever gotten to a jury trial,” said Erwin Chemerinsky, dean of the UC Berkeley School of Law and an expert on the 1st Amendment, which also protects the platforms. “All media tries to keep people on [their platform] and coming back.”

Others say social media’s algorithmic ability to capture, cultivate and control attention makes it fundamentally different from teen-friendly romantasy novels, Marvel movies or first-person shooter games.

“These are truly hard and heartbreaking cases,” said Eric J. Segall, a professor at Georgia State College of Law. “They represent a clash between free speech values and the real harms caused by protecting those companies that engage in free speech amplification for profit.”

“Letting jurors sort all of this out without more guidance is tempting but also risky,” he said.

As deliberations that began March 13 wore on, jurors signaled similar skepticism, asking to see internal Meta documents and to review testimony from a defense expert “in regards to her professional integrity; being the only doctor stating social media was not a contributing factor to KGM’s mental health.”

They appeared to agree on Meta’s culpability by Friday, but labored through Tuesday to hash out a decision for Google, delivering their verdict just after 10 a.m. Wednesday.

“Today, a jury saw the truth and held Meta and Google accountable for designing products that addict and harm children,” said Lexi Hazam, court-appointed co-lead plaintiffs’ counsel in the related federal action. “This verdict sends an unmistakable message that no company is above accountability.”

The outcome will likely transform the already heated debate over social media addiction as a concept, what role apps may play in engineering it, and whether individuals like Kaley can prove they’re afflicted.

The platforms’ attorneys sought to cast doubt on the ailment — emphasizing that there is no formal diagnosis for social media addiction — while also arguing that Kaley had never been treated for it.

“Substitute the words ‘YouTube’ for the word methamphetamine,” attorney Luis Li urged the jury during closing arguments Thursday. “Ask yourselves, with your lifetime of experience, whether anybody suffering from addiction could say, ‘Yeah, I just kind of lost interest.’”

“She was sitting there for hours without being on her phone,” said Meta attorney Paul W. Schmidt.

YouTube’s team also sought to distance the video-sharing app from Instagram and other social media platforms, saying its functions are fundamentally different.

Kaley’s team called it a “gateway” to her social media addiction.

“YouTube wasn’t a gateway to anything,” Li said. “YouTube was a toy that a child liked and then put down.”

Jurors disagreed, ultimately holding the platform liable, though they split the liability 70-30, weighting it heavily toward Meta.

Plaintiffs’ attorney Mark Lanier leaned on his down-home Texas folksiness throughout the trial, telling the jury what was on his heart and scribbling with grease pencil on his demonstrative aids. In his direct addresses to the jury, he used a set of wooden baby blocks, stacks of paper, even a hammer and a crate of eggs.

During the punitive phase of the trial late Wednesday morning, he brought out a glass jar filled with 415 peanut M&Ms to represent the $415 billion of stockholders’ equity at which Google’s parent company, Alphabet, was valued in December.

“What are you going to fine them for this?” he probed. “Are you going to fine them a billion?” He plucked a green M&M from the top of the pile. “Two billion?” He pulled out another. “You know a pack of M&Ms has 18 M&Ms in it? You fine them a billion, and they’re not going to notice.”

“The last thing in the world they want you to do is talk about how many M&Ms they’ve got,” the lawyer said, urging jurors to talk to Meta “in Meta money.”

“The last thing in the world they want you to do is focus on what it takes to hold them accountable for what they’ve done,” Lanier said.

Conversely, the tech teams relied on slick digital presentations to review evidence and illustrate their arguments.

“Focus on those facts that are at issue in this case,” Schmidt urged the jury during closings. “Not lawyer arguments, not props like a glass of water or a jar of M&Ms, but actual proof in evidence.”

During the punitive phase of the trial, he sought to emphasize that there wasn’t “an intention to do harm” to children, and that Meta had worked diligently to make its products safer.

The case was the first to get Meta CEO Mark Zuckerberg on the witness stand, where he defended Instagram’s safety record and lamented the difficulty of keeping youngsters off the app.

It also made public tens of thousands of pages of internal documents — documents Lanier argued showed the companies intentionally targeted children and engineered their products to keep them on the platforms longer.

“These are internal documents that you’re uniquely seeing because you’re the jury that got to sit on this case,” Lanier told the jury during closing arguments on Thursday. “It’s given you exposure that the world hasn’t had.”

Those previously undisclosed materials likely proved critical to the jury’s ultimate verdict, experts said.

“Internal emails here were key — they painted a picture of indifference at Meta,” said Joseph McNally, former acting U.S. attorney for the Central District of California and an expert in technology-related harm.

The tech titans have already vowed to appeal both the California and New Mexico verdicts, all but ensuring the issue is ultimately decided by the Supreme Court, experts said.

...

Read the original on www.latimes.com »
