10 interesting stories served every morning and every evening.
Last year, I visited my grandmother’s house for the first time after the pandemic and came across a cupboard full of loose old photos. I counted 1,351 of them spanning all the way from my grandparents in their early 20s, my mom as a baby, to me in middle school, just around the time when we got our first smartphone and all photos since then were backed up online.
Everything was all over the place, so I spent some time going through them individually and organizing them into groups. Some of the initial groups were based on the physical attributes of the photographs, like similar aspect ratios or film stock. For example, there was a group of black-and-white 32mm square pictures taken around the time my grandfather was in his mid-20s.
Once I was done grouping all of them, I could see flashes of stories in my head, but they were ephemeral and fragile. For instance, there was a group of photos that looked like they were taken during my grandparents’ wedding, but I didn’t know the chronological order in which they were taken because EXIF metadata didn’t exist back then.
So I sat down with my grandmother and asked her to reorder the photos and tell me everything she could remember about her wedding. Her face lit up as she narrated the backstory behind the occasion, going from photo to photo, resurfacing details that had been dormant for decades. I wrote everything down and recorded the names of the people in some of the photos, some of whom I recognized as younger versions of my uncles and aunts.
After the “interview”, I had multiple pages of notes connecting the photos to events that happened 50 years ago. Since the account was historical, as an inside joke I wanted to see if I could clean it up and present it as a page on Wikipedia so I could print it and give it to her. So I cloned MediaWiki, spun up a local instance, and began my editorial work. I used the 2011 Royal Wedding as a reference and drafted a page starting with the classic infobox and the lead paragraph.
I split up the rest of the content into sections and filled them with everything I could verify: dates, names, places, who sat where. I scanned all the photos and spent some time figuring out what to place where. For every photo placement, there was a follow-up to include a descriptive caption too.
Whenever I mentioned a person, I linked them to an empty stub page. After I found out I could also link to the real Wikipedia, I was able to link to real pages that provided wider context for venues, rituals, and the political climate of the time, like a legal amendment that was relevant to the wedding ceremony.
In two evenings, I was able to document a full backstory for the photos in a neat article. These two evenings also made me realize just how powerful encyclopedia software is for recording and preserving media and knowledge that would’ve otherwise been lost over time.
This was so much fun that I spent the following months writing pages to account for all the photos that needed to be stitched together.
I got help from r/genealogy about how to approach recording oral history and I was given resources to better conduct interviews, shoutout to u/stemmatis! I would get on calls with my grandmother and people in the family, ask them a couple of questions, and then write. It was also around this time that I began using audio transcription and language models to make the editorial process easier.
Over time, I managed to write a lot of pages connecting people to different life events. The encyclopedia format made it easy to connect dots I would have never found on my own, like discovering that one of the singers at my grandparents’ wedding was the same nurse who helped deliver me.
After finding all the stories behind the physical photos, I started to work on digital photos and videos that I had stored on Google Photos. The wonderful thing about digital photos is that they come with EXIF metadata that can reveal extra information like date, time, and sometimes geographical coordinates.
This time, without any interviews, I wanted to see if I could use a language model to create a page based on just browsing through the photos. As my first experiment, I created a folder with 625 photos of a family trip to Coorg back in 2012.
I pointed Claude Code at the directory and asked it to draft a wiki page by browsing through the images. I hinted at using ImageMagick to create contact sheets so it would help with browsing through multiple photos at once.
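For the curious, a contact sheet like that can be generated with ImageMagick’s montage tool; the flags and filenames below are just an illustrative sketch of the idea, not the exact command the agent ran:

    # Tile every JPEG in the folder into one browsable sheet,
    # six thumbnails per row with a small gap between them.
    montage *.jpg -tile 6x -geometry 200x200+2+2 contact_sheet.jpg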
A few minutes and a couple of tokens later, it had created a compelling draft with a detailed account of everything we did during the trip, broken down by time of day. The model had no location data to work with, just timestamps and visual content, but it was able to identify the places from the photos alone, including ones I had since forgotten. It picked up details on the modes of transportation we used to get between places just from what it could see.
After I clarified who some of the people in the pictures were, it went on to identify them automatically in the captions. I now had a detailed outline, but the page still only contained what the available data could support, so to fill in the gaps I shared a list of anecdotes from my point of view and the model inserted them where the narrative called for them.
The Coorg trip only had photos to work with. My trip to Mexico City in 2022 had a lot more. I had taken 291 photos and 343 videos with an iPhone 12 Pro that included geographical coordinates as part of the EXIF metadata.
On top of that, I exported my location timeline from Google Maps, my Uber trips, my bank transactions, and Shazam history. I would ask Claude Code to start with the photos and then gradually give it access to the different data exports.
Here are some of the things it did across multiple runs:
* It cross-referenced my bank transactions with location data to ascertain which restaurants I went to.
* Some of the photos and videos showed me in attendance at a soccer match, but it wasn’t clear which teams were playing. The model looked up my bank transactions and found a Ticketmaster invoice with information about the teams and the name of the tournament.
* It looked up my Uber trips to figure out travel times and the exact pickup and drop-off locations.
* It used my Shazam tracks to write about the kinds of songs that were playing at a place, like Cuban songs at a Cuban restaurant.
* In a follow-up, I mentioned remembering an evening dinner with a guitarist playing in the background. It filtered my media to evening captures, found a frame in a video with the guitarist, uploaded it, and referenced the moment in the page.
The MediaWiki architecture handled the edits well: for every new data source, the model would make amendments the way a real Wikipedia contributor would. I leaned heavily on features that already existed: talk pages to clarify gaps and consolidate research notes, categories to group pages by theme, revision history to track how a page evolved as new data came in. I didn’t have to build any of this; it was all just there.
What started as me helping the model fill in gaps from my memory gradually inverted. The model was now surfacing things I had completely forgotten, cross-referencing details across data sources in ways I never would have done manually.
So I started pointing Claude Code at other data exports. My Facebook, Instagram, and WhatsApp archives held around 100k messages and a couple thousand voice notes exchanged with close friends over a decade.
The model traced the arc of our friendships through the messages, pulled out the life episodes we had talked each other through, and wove them into multiple pages that read like they were written by someone who knew us both. When I shared the pages with my friends, they wanted to read every single one.
This is when I realized I was no longer working on a family history project. What I had been building, page by page, was a personal encyclopedia. A structured, browsable, interconnected account of my life compiled from the data I already had lying around.
I’ve been working on this as whoami.wiki. It uses MediaWiki as its foundation, which turns out to be a great fit because language models already understand Wikipedia conventions deeply from their training data. You bring your data exports, and agents draft the pages for you to review.
A page about your grandmother’s wedding works the same way as a page about a royal wedding. A page about your best friend works the same way as a page about a public figure.
Oh and it’s genuinely fun! Putting together the encyclopedia felt like the early days of Facebook timeline, browsing through finished pages, following links between people and events, and stumbling on a detail I forgot.
But more than the technology, it’s the stories that stayed with me. Writing about my grandmother’s life surfaced things I’d never known, her years as a single mother, the decisions she had to make, the resilience it took. She was a stronger woman than I ever realized. Going through my friendships, I found moments of endearment that I had nearly forgotten, the days friends went the extra mile to be good to me. Seeing those moments laid out on a page made me pick up the phone and call a few of them. The encyclopedia didn’t just organize my data, it made me pay closer attention to the people in my life.
Today I’m releasing whoami.wiki as an open source project. The encyclopedia is yours, it runs on your machine, your data stays with you, and any model can read it. The project is early and I’m still figuring a lot of it out, but if this sounds interesting, you can get started here and tell me what you think!
...
Read the original on whoami.wiki »
Hello! This is a long, hopefully fun one!
When I lived in Nashville, my girlfriends and I would take ourselves on “field trips” across the state. We once went on a tour to spot bald eagles in West Tennessee, and upon arrival, a woman with fluffy hair in the state park bathroom told us she had seen 113 bald eagles the day before. We ended up seeing (counts on one hand)…2.
In the summer of 2017, we went on another field trip to the National Park Service’s Manhattan Project site in Oak Ridge, TN. In 1942, Oak Ridge, TN, was chosen as the site for a plutonium and uranium enrichment plant as part of the Manhattan Project, a top-secret WWII effort to develop the first atomic bomb. Once a small and rural farming community settled in the valley of East Tennessee, the swift push to create a nuclear bomb grew the secret settlement, dubbed “Site X,” from 3,000 people in 1942 to 75,000 by 1945. Alongside the population growth, enormously complex buildings were built.
A Note: The Manhattan Project created the nuclear bomb that caused extreme devastation in Japan and ended the war. There’s a lot of U.S. history that’s awful and indefensible. Today, though, I’d like to talk about the industrial design and color theory from that era.
Our first stop on the tour was the X-10 Graphite Reactor room and its control panel room. The X-10 Graphite Reactor, a 24-foot-square block of graphite, was the world’s second full-scale nuclear reactor. The plutonium produced from uranium there was shipped to Los Alamos, New Mexico, for research into the atomic bomb Fat Man.
What caught my eye as a designer, as with most industrial plants and control rooms of that time, besides the knobs, levers, and buttons, was the use of a very specific seafoam green, seen here on the reactor’s walls and in the control panel room.
Thus began my day-long search, traipsing through the internet for historical information about this specific shade of seafoam green.
Thankfully, this path led me to the work of color theorist Faber Birren.
In the fall of 1919, Faber Birren entered the Art Institute at the University of Chicago, only to drop out in the spring of 1921 to commit himself to self-education in color, as such a program didn’t exist. He spent his days interviewing psychologists and physicists and conducted his own color studies, which were considered unconventional at the time. He painted his bedroom walls red vermillion to test if it would make him go mad.
In 1933, he moved to New York City and became a self-appointed color consultant, approaching major corporations to sell the idea that appropriate use of color could boost sales. He convinced a Chicago wholesale meat company that the company’s white walls made the meat unappealing. He studied the steaks on various colored backgrounds and determined that a blue/green background would make the beef appear redder. Sales went up, and soon a number of industries hired Faber to bring color theory into their work, including DuPont, the leading chemical and wartime contractor and the designer of the Manhattan Project’s buildings.
With the increase in wartime production in the US during WWII, Birren and DuPont created a master color safety code for the industrial plant industry, with the aim of reducing accidents and increasing efficiency within plants. These color codes were approved by the National Safety Council in 1944 and are now internationally recognized, having been mandatory practice since 1948. The color coding went as follows:
* Fire Red: All fire protection, emergency stop buttons, and flammable liquids should be red
* Solar Yellow: Signifies caution and physical hazards such as falling
* Safety Green: Indicates safety features such as first-aid equipment, emergency exits, and eyewash stations.
* Light Green: Used on walls to reduce visual fatigue
My industrial “seafoam” light green mystery has finally been solved thanks to this article from UChicago Magazine.
Keeping with the theme of “control rooms”, I researched the second Manhattan Project plant, the Hanford Site, home to the B Reactor, the first full-scale plutonium production reactor in the world. To my surprise, this site looked like an ode to Birren’s light green and color codes, which makes sense, since his client, DuPont, was also responsible for the design and construction of Hanford.
In Birren’s 1963 book Color for Interiors: Historical and Modern, he writes about research undertaken to measure eye fatigue in the industrial workplace and the effects of interior color on human efficiency and well-being. Using the color chart above, he states that the proper use of color hues can reduce accidents, raise standards of machine maintenance, and improve labor morale.
“The importance of color in factories is first to control brightness in the general field of view for an efficient seeing condition. Interiors can then be conditioned for emotional pleasure and interest, using warm, cool, or luminous hues as working conditions suggest. Color should be functional and not merely decorative.” - Faber Birren
Now, looking at the interiors of the Manhattan Project control rooms and plants, the broad use of Light and Medium Green makes sense. One mistake and mass devastation could have occurred within these towns. Birren writes, “Note that most of the standards are soft in tone. This is deliberate and intended to establish a non-distracting environment. Green is a restful and natural-looking color for average factory interiors. Light Green with Medium Green is suggested.”
Let’s put these theories to work with this photo of the B-Reactor room found at the Hanford Site of the Manhattan Project. In Birren’s book, he directed the following color applications for small industrial areas:
* ✔️ Medium Gray is proposed for machinery, equipment, and racks
* ✔️ Beige walls may be applied to interiors deprived of natural light
As we can see, his color theory was followed to a T.
Other US Industrial Plants that Used these Color Methods
This color theory research just opened a whole can of design worms for me, and I’m excited to dive into them more. For example, Germany developed its own seafoam green, specifically designed for bridges, called Cologne Bridge Green. That’s a post for another day.
And finally, if you enjoy this sort of design, I designed a font called “Parts List” that is meant to evoke the feeling of sitting in an oil change waiting room, with the smell of burnt coffee. I created this font out of old auto parts lists, and it’s a perfectly wobbly typeface that will give you that ‘Is it a typewriter or handwriting?’ feeling. It’s now available on my website.
PS: I have an old friend whose dad still works at the uranium plant in Oak Ridge. I told him that I was surprised that almost all of the facilities had been torn down, and he just looked me straight in the face and said, “Who said it’s actually gone?” Noted. ✌️
Thanks for being here!
...
Read the original on bethmathews.substack.com »
Here are three stories about the state of gambling in America.
In November 2025, two pitchers for the Cleveland Guardians, Emmanuel Clase and Luis Ortiz, were charged in a conspiracy for “rigging pitches.” Frankly, I had never heard of rigged pitches before, but the federal indictment describes a scheme so simple that it’s a miracle that this sort of thing doesn’t happen all the time. Three years ago, a few corrupt bettors approached the pitchers with a tantalizing deal: (1) We’ll bet that certain pitches will be balls; (2) you throw those pitches into the dirt; (3) we’ll win the bets and give you some money.
The plan worked. Why wouldn’t it? There are hundreds of pitches thrown in a baseball game, and nobody cares about one bad pitch. The bets were so deviously clever because they offered enormous rewards for bettors and only incidental inconvenience for players and viewers. Before their plan was snuffed out, the fraudsters won $450,000 from pitches that not even the most ardent Cleveland baseball fan would ever remember the next day. Nobody watching America’s pastime could have guessed that they were witnessing a six-figure fraud.
On the morning of February 28th, someone logged onto the prediction market website Polymarket and made an unusually large bet. This bet wasn’t placed on a baseball game. It wasn’t placed on any sport. This was a bet that the United States would bomb Iran on a specific day, despite extremely low odds of such a thing happening.
A few hours later, bombs landed in Iran. This one bet was part of a $553,000 payday for a user named “Magamyman.” And it was just one of dozens of suspicious, perfectly-timed wagers, totaling millions of dollars, placed in the hours before a war began.
It is almost impossible to believe that, whoever Magamyman is, he didn’t have inside information from members of the administration. The term war profiteering typically refers to arms dealers who get rich from war. But we now live in a world not only where online bettors stand to profit from war, but also where key decision makers in government have the tantalizing option to make hundreds of thousands of dollars by synchronizing military engagements with their gambling positions.
On March 10, several days into the Iran War, the journalist Emanuel Fabian reported that a warhead launched from Iran struck a site outside Jerusalem.
Meanwhile on Polymarket, users had placed bets on the precise location of missile strikes on March 10. Fabian’s article was therefore poised to determine payouts of $14 million in betting. As The Atlantic’s Charlie Warzel reported, bettors encouraged him to rewrite his story to produce the outcome that they’d bet on. Others threatened to make his life “miserable.”
A clever dystopian novelist might conceive of a future where poorly paid journalists for news wires are offered six-figure deals to report fictions that cash out bets from online prediction markets. But just how fanciful is that scenario when we have good reason to believe that journalists are already being pressured, bullied, and threatened to publish specific stories that align with multi-thousand-dollar bets about the future?
Put it all together: rigged pitches, rigged war bets, and attempts to rig wartime journalism. Without context, each story would sound like a wacky conspiracy theory. But these are not conspiracy theories. These are things that have happened. These are conspiracies—full stop.
“If you’re not paranoid, you’re not paying attention” has historically been one of those bumper stickers you find on the back of a car with so many other bumper stickers that you worry for the sanity of its occupants. But in this weird new reality where every event on the planet has a price, and behind every price is a shadowy counterparty, the jittery gambler’s paranoia—is what I’m watching happening because somebody more powerful than me bet on it?—is starting to seem, eerily, like a kind of perverse common sense.
What’s remarkable is not just the fact that online sports books have taken over sports, or that betting markets have metastasized in politics and culture, but the speed with which both have taken place.
For most of the last century, the major sports leagues were vehemently against gambling, as the Atlantic staff writer McKay Coppins explained in his recent feature. In 1992, NFL commissioner Paul Tagliabue told Congress that “nothing has done more to despoil the games Americans play and watch than widespread gambling on them.” In 2012, NBA commissioner David Stern loudly threatened New Jersey Gov. Chris Christie for signing a bill to legalize sports betting in the Garden State, reportedly screaming, “we’re going to come after you with everything we’ve got.”
So much for that. Following the 2018 Supreme Court decision Murphy vs. NCAA, sports gambling was unleashed into the world, and the leagues haven’t looked back. Last year, the NFL saw $30 billion gambled on football games, and the league itself made half a billion dollars in advertising, licensing, and data deals.
Nine years ago, Americans bet less than $5 billion on sports. Last year, that number rose to at least $160 billion. Big numbers mean nothing to me, so let me put that statistic another way: $5 billion is roughly the amount Americans spend annually at coin-operated laundromats and $160 billion is nearly what Americans spent last year on domestic airline tickets. So, in a decade, the online sports gambling industry will have risen from the level of coin laundromats to rival the entire airline industry.
And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion. “These predictive markets are the logical endpoint of the online gambling boom,” Coppins told me on my podcast Plain English. “We have taught the entire American population how to gamble with sports. We’ve made it frictionless and easy and put it on everybody’s phone. Why not extend the logic and culture of gambling to other segments of American life?” He continued:
Why not let people gamble on who’s going to win the Oscar, when Taylor Swift’s wedding will be, how many people will be deported from the United States next year, when the Iranian regime will fall, whether a nuclear weapon will be detonated in the year 2026, or whether there will be a famine in Gaza? These are not things that I’m making up. These are all bets that you can make on these predictive markets.
Indeed, why not let people gamble on whether there will be a famine in Gaza? The market logic is cold and simple: More bets means more information, and more informational volume is more efficiency in the marketplace of all future happenings. But from another perspective—let’s call it, baseline morality?—the transformation of a famine into a windfall event for prescient bettors seems so grotesque as to require no elaboration. One imagines a young man sending his 1099 documents to a tax accountant the following spring: “right, so here are my dividends, these are the cap gains, and, oh yeah, here’s my $9,000 payout for totally nailing when all those kids would die.”
It is a comforting myth that dystopias happen when obviously bad ideas go too far. Comforting, because it plays to our naive hope that the world can be divided into static categories of good versus evil and that once we stigmatize all the bad people and ghettoize all the bad ideas, some utopia will spring into view. But I think dystopias more likely happen because seemingly good ideas go too far. “Pleasure is better than pain” is a sensible notion, and a society devoted to its implications created Brave New World. “Order is better than disorder” sounds alright to me, but a society devoted to the most grotesque vision of that principle takes us to 1984. Sports gambling is fun, and prediction markets can forecast future events. But extended without guardrails or limitations, those principles lead to a world where ubiquitous gambling leads to cheating, cheating leads to distrust, and distrust leads ultimately to cynicism or outright disengagement.
“The crisis of authority that has kind of already visited every other American institution in the last couple of decades has arrived at professional sports,” Coppins said. Two-thirds of Americans now believe that professional athletes sometimes change their performance to influence gambling outcomes. “Not to overstate it, but that’s a disaster,” he said. And not just for sports.
There are four reasons to worry about the effect of gambling in sports and culture.
The first is the risk to individual bettors. Every time we create 1,000 new gamblers, we create dozens of new addicts and a handful of new bankruptcies. As I’ve reported, there is evidence that about one in five men under 25 is on the spectrum of having a gambling problem, and calls to the National Problem Gambling Helpline have roughly tripled since sports gambling was broadly legalized in 2018. Research from UCLA and USC found that bankruptcies increased by 10 percent in states that legalized online sports betting between 2018 and 2023. People will sometimes ask me what business I have worrying about online gambling when people should be free to spend their money however they like. My response is that wise rules place guardrails around economic activity with a certain rate of personal harm. For alcohol, we have licensing requirements, minimum drinking ages, boundaries around hours of sale, and rules about public consumption. As alcohol consumption is declining among young people, gambling is surging; Gen Z has replaced one (often fun) vice with a meaningful chance of addiction with another (often fun) vice with a meaningful chance of addiction. But whereas we have centuries of experience curtailing excessive drinking with rules and customs, we are currently in a free-for-all era of gambling.
The second risk is to individual players and practitioners. One reason why sports commissioners might have wanted to keep gambling out of their business is that gambling turns some people into complete psychopaths, and that’s not a very nice experience for folks on the receiving end of gambling-afflicted psychopaths. In his feature, McKay Coppins reports on the experience of Caroline Garcia, a top-ranked tennis player, who said she received torrents of abusive messages from gamblers both for losing games and for winning games. “This has become a very common experience for athletes at the professional level, even at the college level too,” Coppins said. As the experience of journalist Emanuel Fabian shows, gambling can turn ordinary people into mini mob bosses, who go around threatening players and practitioners who they believe are costing them thousands of dollars.
The third risk is to the integrity of sports—or any other institution. At the end of 2025, in addition to its indictment of the Cleveland Guardians pitchers, the FBI announced 30 arrests involving gambling schemes in the NBA. This cavalcade of arrests has dramatically reduced trust in sports. Two-thirds of Americans now believe that professional athletes change their performance to influence gambling outcomes. It does not require extraordinary creativity to imagine how this principle could extend to other domains and institutions. If more people start to believe that things only happen in the world as a direct result of shadowy interests in vast betting markets, it’s going to be a permanent open season for conspiracy theories.
The ultimate risk is almost too dark to contemplate in much detail. As the logic and culture of casinos moves from sports to politics, the scandals that have visited baseball and basketball might soon arrive in politics. Is it really so unbelievable that a politician might tip off a friend, or assuage an enemy, by giving them inside information that would allow them to profit on betting markets? Is it really so incredible to believe that a government official would try to align policy with a betting position that stood to earn them, or an allied group, hundreds of thousands of dollars? That is what a “rigged pitch” in politics would look like. It’s not just wagering on a policy outcome that you suspect will happen. It’s changing policy outcomes based on what can be wagered.
Gambling is flourishing because it meets the needs of our moment: a low-trust world, where lonely young people are seeking high-risk opportunities to launch them into wealth and comfort. In such an environment, financialization might seem to be the last form of civic participation that feels honest to a large portion of the country. Voting is compromised, and polling is manipulated, and news is algorithmically curated. But a bet settles. A game ends. There is comfort in that. In an uncertain and illegible world, it doesn’t get much more certain and legible than this: You won, or you lost.
A 2023 Wall Street Journal poll found that Americans are pulling away from practically every value that once defined national life—patriotism, religion, community, family. Young people care less than their parents about marriage, children, or faith. But nature, abhorring a vacuum, is filling the moral void left by retreating institutions with the market. Money has become our final virtue.
I often find myself thinking about the philosopher Alasdair MacIntyre, who argued in the introduction of After Virtue that modernity had destroyed the shared moral language once supplied by traditions and religion, leaving us with only the language of individual preference. Virtue did not disappear, I think, so much as it died and was reincarnated as the market. It is now the market that tells us what things are worth, what events matter, whose predictions are correct, who is winning, who counts. Money has, in a strange way, become the last moral arbiter standing—the final universal language that a pluralistic, distrustful, post-institutional society can use to communicate with itself.
As this moral vocabulary scales across culture, it also corrodes culture. In sports, when you have money on a game, you’re not rooting for a team. You’re rooting for a proposition. The social function of fandom—shared identity, inherited loyalty, something larger than yourself—dissolves into individual risk. In politics, I fear the consequences will be worse. Prediction markets can be useful for those who want to know the future, but their utility recruits participants into a relationship with the news cycle that is adversarial, and even misanthropic. A young man betting on a terrorist attack or a famine is not acting as a mere concerned citizen whose participation improves the efficiency of global prediction markets. He’s just a dude, on his phone, alone in a room, choosing to root for death.
If that doesn’t bother you, I don’t know how to make it bother you. Based on economic and market efficiency principles alone, this young man’s behavior is defensible. But there is morality outside of markets. There is more to life than the efficiency of information networks. But will we rediscover it, any time soon? Don’t bet on it.
...
Read the original on www.derekthompson.org »
The controversial mass surveillance of private messages in Europe is coming to an end. After the European Parliament had already rejected the indiscriminate and blanket Chat Control by US tech companies on 13 March, conservative forces attempted a democratically highly questionable maneuver yesterday to force a repeat vote to extend the law anyway.
However, in a true voting thriller today, the Parliament finally pulled the plug on this surveillance mania: With a razor-thin majority of just a single vote, the Parliament first rejected the automated assessment of unknown private photos and chat texts as “suspicious” or “unsuspicious”. In the subsequent final vote, the amended remaining proposal clearly failed to reach a majority.
This means: As of 4 April, the EU derogation will expire for good. US corporations like Meta, Google, and Microsoft must stop the indiscriminate scanning of the private chats of European citizens. The digital privacy of correspondence is restored!
This does not create a legal vacuum—quite the opposite. Ending indiscriminate mass scanning clears the path for modern, effective child protection. Fearmongering that investigators will be “flying blind” is unwarranted: Recently, only 36% of suspicious activity reports from US companies originated from the surveillance of private messages anyway. Social media and cloud storage services are becoming increasingly relevant for investigations. Targeted telecommunications surveillance based on concrete suspicion and a judicial warrant remains fully permissible, as does the routine scanning of public posts and hosted files. User reporting also remains fully intact.
Digital freedom fighter and former Member of the European Parliament Patrick Breyer (Pirate Party) commented on today’s historic victory:
“This historic day brings tears of joy! The EU Parliament has buried Chat Control — a massive, hard-fought victory for the unprecedented resistance of civil society and citizens! The fact that a single vote tipped the scales against the extremely error-prone text and image search shows: Every single vote in Parliament and every call from concerned citizens counted!
We have stopped a broken and illegal system. Once our investigators are no longer drowning in a flood of false and long-known suspicion reports from the US, resources will finally be freed up to hunt down organized abuse rings in a targeted and covert manner. Trying to protect children with mass surveillance is like desperately trying to mop up the floor while leaving the faucet running. We must finally turn off the tap! This means genuine child protection through a paradigm shift: Providers must technically prevent cybergrooming from the outset through secure app design. Illegal material on the internet must be proactively tracked down and deleted directly at the source. That is what truly protects children.
But beware, we can only celebrate briefly today: They will try again. The negotiations for a permanent Chat Control regulation are continuing under high pressure, and soon the planned age verification for messengers threatens to end anonymous communication on the internet. The fight for digital freedom must go on!”
The Next Battle: The Return of Chat Control and Mandatory ID
Despite today’s victory, further procedural steps by EU governments cannot be completely ruled out. Most of all, the trilogue negotiations on a permanent child protection regulation (Chat Control 2.0) are continuing under severe time pressure. There, too, EU governments continue to insist on their demand for “voluntary” indiscriminate Chat Control.
Furthermore, the next massive threat to digital civil liberties is already on the agenda: Next up in the ongoing trilogue, lawmakers will negotiate whether messenger and chat services, as well as app stores, will be legally obliged to implement age verification. This would require users to provide ID documents or submit to facial scans, effectively making anonymous communication impossible and severely endangering vulnerable groups such as whistleblowers and persecuted individuals.
Background: What exactly expires on 3 April
An EU interim regulation (2021/1232), set to expire on 3 April, currently permits US corporations such as Meta to carry out indiscriminate mass scanning of private messages on a voluntary basis. Three types of chat control are authorised: scanning for already known images and videos (so-called hash scanning, which generates over 90% of reports); automated assessment of previously unknown images and videos; and automated analysis of text content in private chats.
The AI-based analysis of unknown images and texts is extremely error-prone. But the indiscriminate mass scanning for known material is highly controversial too: beyond the unreliability of the algorithms documented by researchers, these scans rely on opaque foreign databases rather than European criminal law. The algorithms are blind to context and to the absence of criminal intent (e.g. consensual sexting between teenagers). As a result, vast numbers of private but criminally irrelevant chats are exposed.
A newly published scientific study shows that today’s decision by the EU Parliament was also technically imperative. Renowned IT security researchers analyzed the standard algorithm “PhotoDNA”, which is used by tech companies for Chat Control. Their damning verdict: the software is “unreliable”. The researchers proved that criminals can render illegal images invisible to the scanner through minimal alterations (e.g., adding a simple border), while harmless images can be easily manipulated so that innocent citizens are falsely reported to the police.
The Hard Facts: Why Chat Control Has Failed Spectacularly
The EU Commission’s 2025 evaluation report on Chat Control reads like an admission of complete failure:
* Data Giant Monopoly: Roughly 99% of all chat reports to police in Europe come from a single US tech corporation: Meta. US companies acted as a private auxiliary police force—without effective European oversight.
* Massive Police Overload from Junk Data: The German Federal Criminal Police Office (BKA) reports that a staggering 48% of the disclosed chats are criminally irrelevant. This flood of junk data ties up resources that are urgently needed for targeted investigations.
* Criminalization of Minors: According to crime statistics, around 40% of investigations in Germany target teenagers who thoughtlessly share images (e.g., consensual sexting).
* An Obsolete Model Due to Encryption: Because providers are increasingly transitioning to end-to-end encryption for private messages, the number of chats reported to the police has already dropped by 50% since 2022.
* Failure in Child Protection: According to the Commission’s report, there is no measurable correlation between the mass surveillance of private messages and actual convictions.
During the legislative process, foreign-funded lobby groups and authorities tried to pressure the Parliament through fearmongering. A comparison of their claims with reality:
Disinformation 1: “The European Parliament is to blame for the collapse of the trilogue negotiations.”
(Claimed by the lobby alliance ECLAG and US tech companies)
* Fact: It was the EU Council of Ministers that deliberately let the negotiations fail. Leaked Council cables reveal that EU member states showed no willingness to compromise, fearing that any concession could set a precedent for the permanent Chat Control 2.0 regulation. Parliament’s lead negotiator, Birgit Sippel, sharply criticized the Council: “With their lack of flexibility, Member States have deliberately accepted that the interim regulation will expire.”
Disinformation 2: “Without indiscriminate Chat Control, law enforcement will be flying blind.”
(Claimed by authorities including BKA President Holger Münch)
* Fact: Targeted surveillance remains allowed. The real problem for authorities is their own refusal to remove material from the internet. The Federation of German Criminal Investigators (BDK) warns that this mass surveillance produces “a flood of tips… often without any actual investigative lead.” Meanwhile, the BKA systematically refuses to proactively have abuse material removed from the internet, as investigative reporting by ARD has revealed.
Disinformation 4: “The demand comes primarily from victims.”
(Implied by the ECLAG campaign)
* Fact: Actual survivors are taking legal action against the surveillance. Survivor Alexander Hanff writes: “Taking away our right to privacy means further harming us.” To preserve safe spaces for victims, a survivor from Bavaria is currently suing Meta. Who truly benefits was exposed in an investigative report by Balkan Insight: The US organization Thorn, which sells scanning software, invests massively in EU lobbying, while ECLAG members are funded by tech corporations.
The European Parliament advocates a genuine paradigm shift for future legislation, supported by civil society, survivor networks, and IT security experts:
* Strict default settings and protective mechanisms (Security by Design) to make cybergrooming technically harder from the outset.
* Proactive search by a new EU Child Protection Center and immediate takedown obligations for providers and law enforcement on the open internet and darknet — illegal material must be destroyed directly at the source. There must be an end to law enforcement agencies declaring themselves “not competent” for the removal of abuse material.
During the legislative process, the massive, questionable lobbying efforts were exposed: The push for Chat Control is heavily driven by foreign-funded lobby groups and tech vendors. The US organization Thorn, which sells the exact type of scanning software in question, spends hundreds of thousands of euros lobbying in Brussels. The tech industry officially lobbied side-by-side with certain organizations for a law that does not protect children, but rather secures their own profits and data access.
“Right up to the very end, the US tech industry and foreign- or government-funded lobby groups tried to panic Europe. But flooding our police with false positives and duplicates from mass surveillance doesn’t save a single child from abuse. Today’s definitive failure of Chat Control is a clear stop sign to this surveillance mania. Negotiators cannot ignore this verdict in the ongoing trilogue negotiations for a permanent regulation. Indiscriminate mass scanning of our private messages must finally give way to truly effective and targeted child protection that respects fundamental rights.”
...
Read the original on www.patrick-breyer.de »
You did it! 🥳
European Parliament just decided that Chat Control 1.0 must stop.
This means on April 6, 2026, Gmail, LinkedIn, Microsoft and other Big Techs must stop scanning your private messages in the EU. #PrivacyWins 💪
[contains quote post or other embedded content]
...
Read the original on bsky.app »
I’ve just started to migrate some repositories from GitHub to Codeberg. I’ve wanted to do this for a long time but have stalled on it because I perceived Codeberg as not being ready and the migration process as being a lot of (boring) work.
It turns out that is only partially true and wildly depends on your project. If you’re in a similar position to me, hopefully these notes serve as motivation and a starting point. These solutions are not necessarily what I’ll stick with long-term; they’re aimed at what I think is easiest to get started with when migrating from GitHub.
First, there’s the migration of issues, pull requests, and releases along with their artifacts. This is actually the easiest part since Codeberg offers repository import from GitHub that just works, and all these features have a UI nearly identical to GitHub’s. The import preserves issue numbers, labels, and authorship. The user experience is very much a step above the extremely awkward hacks that people use to import from other issue trackers into GitHub.
If you’re using GitHub Pages you can use codeberg.page. There’s a warning about it not offering any uptime SLO, but I haven’t noticed any downtime at all, and for now it’s fine. You push your HTML to a branch, very much like the old GitHub Pages. Update 2025-09-22: Alternatively you may try https://grebedoc.dev or https://www.statichost.eu/
The by far nastiest part is CI. GitHub has done an excellent job luring people in with free macOS runners and infinite capacity for public repos. You will have to give up on both of those things. I recommend looking into cross-compilation for your programming language, and to self-host a runner for Forgejo Actions, to solve those problems respectively.
Why Forgejo Actions and not Woodpecker CI? Isn’t Woodpecker on Codeberg more stable? Yes, absolutely; in fact, the documentation for Forgejo Actions on Codeberg is out of date right now, but Forgejo Actions will just feel way more familiar coming from GitHub Actions. The UI and YAML syntax is almost identical, and the existing actions ecosystem mostly works as-is on Codeberg. For example, where my GitHub Actions workflow would say uses: dtolnay/rust-toolchain, my Forgejo Actions workflow would just change to uses: https://github.com/dtolnay/rust-toolchain.
If you absolutely need macOS runners I’d recommend sticking with GitHub Actions on the GitHub repository, mirroring all commits from Codeberg to GitHub and using Forgejo Actions to poll the GitHub API and sync the CI status back to Codeberg. I haven’t tried this one yet, but I have tried some other CI providers offering macOS builds and I don’t think they’re easier or cleaner to integrate into Codeberg than GitHub Actions.
Finally, what to do with the old repo on GitHub? I’ve just updated the README and archived the repo.
You could tell Codeberg to push new commits to GitHub, but this allows users to still file PRs and comment on issues and commits. Some folks have dealt with this by disabling issues on the GitHub repo, but that is a really destructive action as it will 404 all issues, and pull requests cannot be disabled. Some repos like libvirt/libvirt have written a GitHub Action that automatically closes all pull requests.
...
Read the original on unterwaditzer.net »
There is a distinct, visceral kind of pain in watching an otherwise brilliant engineer hold down the Backspace key for six continuous seconds to fix a typo at the beginning of a line.
We’ve all been there. We learn ls, cd, and grep, and then we sort of… stop. The terminal becomes a place we live in, but we rarely bother to arrange the furniture. We accept that certain tasks take forty keystrokes, completely unaware that the shell authors solved our exact frustration sometime in 1989.
Here are some tricks that aren’t exactly secret, but aren’t always taught either. To keep the peace in our extended Unix family, I’ve split these into two camps: the universal tricks that work on almost any POSIX-ish shell (like sh on FreeBSD or ksh on OpenBSD), and the quality-of-life additions specific to interactive shells like Bash or Zsh.
These tricks rely on standard terminal line disciplines, generic Bourne shell behaviors, or POSIX features. If you SSH into an embedded router from 2009, a fresh OpenBSD box, or a minimal Alpine container, these will still have your back.
Why shuffle character-by-character when you can teleport? These are standard Emacs-style line-editing bindings (via Readline or similar), enabled by default in most modern shells.
CTRL + W: You’re typing /var/log/nginx/ but you actually meant /var/log/apache2/. You have two choices: hold down Backspace until your soul leaves your body, or hit CTRL + W to instantly delete the word before the cursor. Once you get used to this, holding Backspace feels like digging a hole with a spoon.
CTRL + U and CTRL + K: You typed out a beautifully crafted, 80-character rsync command, but suddenly realize you need to check if the destination directory actually exists first. You don’t want to delete it, but you don’t want to run it. Hit CTRL + U to cut everything from the cursor to the beginning of the line. Check your directory, and then hit CTRL + Y to paste (“yank”) your masterpiece right back into the prompt. (CTRL + K does the same thing, but cuts from the cursor to the end of the line.)
CTRL + A and CTRL + E: Jump instantly to the beginning (A) or end (E) of the line. Stop reaching for the Home and End keys; they are miles away from the home row anyway.
ALT + B and ALT + F: Move backward (B) or forward (F) one entire word at a time. It’s the arrow key’s much faster, much cooler sibling. (Mac users: you usually have to tweak your terminal settings to use Option as Meta for this to work).
reset (or stty sane): While strictly more of a terminal recovery tip than an interactive shell trick, it belongs here. We’ve all done it: you meant to cat a text file, but you accidentally cat a compiled binary or a compressed tarball. Suddenly, your terminal is spitting out ancient runes and Wingdings, and your prompt is completely illegible. Instead of closing the terminal window in shame, type reset (even if you can’t see the letters you’re typing) and hit enter. Your terminal will heal itself.
CTRL + C: Cancel the current command immediately. Your emergency exit when a command hangs, or you realize you’re tailing the wrong log file.
CTRL + D: Sends an EOF (End of File) signal. If you’re typing input to a command that expects it, this closes the stream. But if the command line is empty, it logs you out of the shell completely-be careful where you press it.
CTRL + L: Your terminal is cluttered with stack traces, compiler spaghetti, and pure digital noise. Running the clear command works, but what if you’re already halfway through typing a new command? CTRL + L wipes the slate clean, throwing your current prompt right up to the top without interrupting your train of thought.
cd -: The classic channel-flipper. You’re deep down in /usr/local/etc/postfix and you need to check something in /var/log. You type cd /var/log, look at the logs, and now you want to go back. Instead of typing that long path again, type cd -. It switches you to your previous directory. Run it again, and you’re back in logs. Perfect for toggling back and forth.
pushd and popd: If cd - is a toggle switch, pushd is a stack. Need to juggle multiple directories? pushd /etc changes to /etc but saves your previous directory to a hidden stack. When you’re done, type popd to pop it off the stack and return exactly where you left off.
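A quick illustration, starting from (say) your home directory:

    pushd /etc         # jump to /etc; the directory you came from is saved on the stack
    pushd /var/log     # jump again; the stack now remembers /etc and your starting point
    popd               # back to /etc
    popd               # back to where you started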
> file.txt: This empties a file completely without deleting and recreating it. Why does this matter? It preserves file permissions, ownership, and doesn’t interrupt processes that already have the file open. It’s much cleaner than echo “” > file.txt (which actually leaves a newline character) or rm file && touch file.
$_: In most shells, $_ expands to the last argument of the previous command, especially useful interactively or in simple scripts when you need to operate on the same long path twice:
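    mkdir -p ~/projects/demo/src    # the path here is just an example
    cd $_                           # $_ expands to the last argument above: ~/projects/demo/src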
No more re-typing paths or declaring temporary variables to enter a directory you created a second ago.
If you are writing shell scripts, put these at the top immediately after your shebang (a combined sketch follows the list below). It will save you from deploying chaos to production.
* set -e: Exit on error. Very useful, but notoriously weird with edge cases (especially inside conditionals like if statements, while loops, and pipelines). Don’t rely on it blindly as it can create false confidence. (Pro-tip: consider set -euo pipefail for a more robust safety net, but learn its caveats first.)
* set -u: Treats referencing an unset variable as an error. This protects you from catastrophic disasters like rm -rf /usr/local/${MY_TYPO_VAR}/* accidentally expanding into rm -rf /usr/local/*.
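Put together, a defensive preamble for a Bash script might look something like this; it’s a sketch to adapt, and BUILD_DIR is just a placeholder to show what -u catches:

    #!/usr/bin/env bash
    # -e: abort on errors, -u: abort on unset variables,
    # -o pipefail: a pipeline fails if any command in it fails.
    set -euo pipefail

    # With -u, referencing an unset variable like ${BUILD_DIR}
    # stops the script here instead of expanding to an empty string.
    echo "Cleaning ${BUILD_DIR}/tmp"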
If you’re on a Linux box or using a modern interactive shell, these are the tools that make the CLI feel less like a rusty bicycle and more like something that actually responds when you steer.
CTRL + R: Reverse incremental search. Stop pressing the up arrow forty times to find that one awk command you used last Tuesday. Press CTRL + R, start typing a keyword from the command, and it magically pulls it from your history. Press CTRL + R again to cycle backwards through matches.
!!: This expands to the entirety of your previous command. Its most famous use case is the “Permission denied” walk of shame. You confidently type systemctl restart nginx, hit enter, and the system laughs at your lack of privileges. Instead of retyping it, run sudo !!, which expands to sudo systemctl restart nginx.
It’s your way of telling the shell, “Do what I said, but this time with authority.”
CTRL + X, then CTRL + E: You start typing a quick one-liner. Then you add a pipe. Then an awk statement. Soon, you’re editing a four-line monster inside your prompt and navigation is getting difficult. Hit CTRL + X followed by CTRL + E (in Bash; in Zsh, this needs configuring). This drops your current command into your default text editor (like Vim or Nano). You can edit it with all the power of a proper editor, save, and exit. The shell then executes the command instantly.
fc: The highly portable, traditional sibling to CTRL+X CTRL+E. Running fc opens your previous command in your $EDITOR. It works across most shells and is a fantastic hidden gem for fixing complex, multi-line commands that went wrong.
ESC + . (or ALT + .): Inserts the last argument of the previous command right at your cursor. Press it repeatedly to cycle further back through your history, dropping the exact filename or parameter you need right into your current command.
!$: The non-interactive sibling of ESC + .. Unlike ESC + . (which inserts the text live at your cursor for you to review or edit), !$ expands blindly at the exact moment you hit enter.
(Pro-Tip: For scripting or standard sh, use the $_ variable mentioned earlier instead!)
Brace expansion is pure magic for avoiding repetitive typing, especially when doing quick backups or renames.
The Backup Expansion: Need to edit a critical config file and want to make a quick backup first?
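Assuming the config file is called nginx.conf (the name is just a stand-in), it’s one set of braces:

    cp nginx.conf{,.bak}    # expands to: cp nginx.conf nginx.conf.bak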
The same idea handles renames: mv filename.{txt,md} expands to mv filename.txt filename.md. Fast, elegant, and makes you look like a wizard.
Need multiple directories? mkdir -p project/{src,tests,docs} creates all three at once.
<(command): Treats the output of a command as if it were a file. Say you want to diff the sorted versions of two files. Traditionally, you’d sort them into temporary files, diff those, and clean up. Process substitution skips the middleman:
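    diff <(sort file1.txt) <(sort file2.txt)    # the file names are placeholders; no temp files needed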
** (Globstar): find is a great command, but sometimes it feels like overkill. If you run shopt -s globstar in Bash (it’s enabled by default in Zsh), ** matches files recursively in all subdirectories. Need to find all JavaScript files in your current directory and everything beneath it?
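With globstar enabled, a single glob does it:

    ls **/*.js    # matches .js files in the current directory and every subdirectory below it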
CTRL + Z, then bg, then disown: You started a massive, hour-long database import task, but you forgot to run it in tmux or screen. It’s tying up your terminal, and if your SSH connection drops, the process dies. Panic sets in.
Don’t. Hit CTRL + Z to suspend the job, then type bg to let it resume running in the background. Your prompt is free!
Type disown to detach it from your shell entirely. You can safely close your laptop, grab a coffee, and the process will survive.
command |& tee file.log: Standard pipes (|) only catch standard output (stdout). If a script throws an error (stderr), it skips the pipe and bleeds directly onto your screen, missing the log file. |& pipes both stdout and stderr (it’s a helpful shorthand for 2>&1 |).
Throw in tee, and you get to watch the output on your screen while simultaneously saving it to a log file. It’s the equivalent of watching live TV while recording it to your DVR.
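For example (the script name here is just a placeholder):

    ./long_build.sh |& tee build.log    # watch stdout and stderr live while saving both to build.log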
The shell is a toolbox, not an obstacle course. You don’t need to memorize all of these today. Pick just one trick, force it into your daily habits for a week, and then pick another. Stop letting the terminal push you around, and start rearranging the furniture. It’s your house now.
...
Read the original on blog.hofstede.it »
I’m the engineer who got PyPI to quarantine litellm. Here’s the full recording of how I found it.
Developers not trained in security research can now sound the alarm at a much faster rate than previously. AI tooling has sped up not just the creation of malware but also the detection.
This is the Claude Code conversation transcript from discovering and responding to the litellm 1.82.8 supply chain attack on March 24, 2026. The session began as a routine investigation into a frozen laptop and escalated into a full malware analysis and public disclosure, all within a single conversation. See our disclosure post for the full writeup.
You no longer need to know the specifics of macOS shutdown logs, how to parse the cache systems of various package managers, the specific Docker commands to pull a fresh container with the malware downloaded, or even whose email address to contact. You just need to be calmly walked through the human aspects of the process, and leave the AI to handle the rest.
Should frontier labs be training their models to be more aware of these attacks? In this case it took some healthy skepticism to get Claude to look for malice, given how unlikely being patient zero for an undocumented attack is.
Shout out to claude-code-transcripts for help displaying this.
All times are UTC. Redactions marked as […] protect internal infrastructure details.
...
Read the original on futuresearch.ai »
Swift is designed to be the language you reach for at every layer of the software stack. Whether you’re building embedded firmware, internet-scale services, or full-featured mobile apps, Swift delivers strong safety guarantees, performance control when you need it, and expressive language features and APIs.
Swift 6.3 makes these benefits more accessible across the stack. This release expands Swift into new domains and improves developer ergonomics across the board, featuring:
* Improvements for using Swift in embedded environments
* An official Swift SDK for Android
Read on for an overview of the changes and next steps to get started.
Swift 6.3 introduces the @c attribute, which lets you expose Swift functions and enums to C code in your project. Annotating a function or enum with @c prompts Swift to include a corresponding declaration in the generated C header that you can include in your C/C++ files:
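As a minimal sketch of the basic form (the function itself is illustrative, not taken from the release notes):

@c
public func add(_ a: CInt, _ b: CInt) -> CInt {
    a + b
}

// The generated header would then contain a matching C declaration, roughly:
// int add(int a, int b);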
You can provide a custom name to use for the generated C declaration.
@c also works together with @implementation. This lets you provide a Swift implementation for a function declared in a C header.
When using @c together with @implementation, Swift will validate that the Swift function matches a pre-existing declaration in a C header, rather than including a C declaration in the generated header.
Swift 6.3 introduces module selectors to specify which imported module Swift should look in for an API used in your code. If you import more than one module that provides API with the same name, module selectors let you disambiguate which API to use:
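A sketch of the disambiguation, assuming two hypothetical modules that both declare a Logger type (the ModuleName::name spelling follows the module selectors proposal):

import FastLogging      // hypothetical module that defines Logger
import SimpleLogging    // another hypothetical module that also defines Logger

let log = FastLogging::Logger()   // the module selector picks the FastLogging version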
Swift 6.3 also enables using the Swift module name to access concurrency and String processing library APIs.
Swift 6.3 introduces new attributes that give library authors finer-grained control over compiler optimizations for clients of their APIs:
* Function specialization: Provide pre-specialized implementations of a generic API for common concrete types using @specialize.
* Inlining: Guarantee inlining — a compiler optimization that expands the body of a function at the call-site — for direct calls to a function with @inline(always). Use this attribute only when you’ve determined that the benefits of inlining outweigh any increase in code size (see the sketch after this list).
* Function implementation visibility: Expose the implementation of a function in an ABI-stable library to clients with @export(implementation). This allows the function to participate in more compiler optimizations.
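As a minimal sketch of guaranteed inlining (the function itself is illustrative):

@inline(always)
func isEven(_ value: Int) -> Bool {
    value % 2 == 0
}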
For a full list of language evolution proposals in Swift 6.3, see the Swift Evolution dashboard.
Swift 6.3 includes a preview of Swift Build integrated into Swift Package Manager. This preview brings a unified build engine across all supported platforms for a more consistent cross-platform development experience. To learn more, check out Preview the Swift Build System Integration. We encourage you to try it in your own packages and report any issues you encounter.
Swift 6.3 also brings the following Swift Package Manager improvements:
* Prebuilt Swift Syntax for shared macro libraries: Factor out shared macro implementation code into a library with support for swift-syntax prebuilt binaries in libraries that are only used by macros.
* Flexible inherited documentation: Control whether inherited documentation is included in command plugins that generate symbol graphs.
* Discoverable package traits: Discover the traits supported by a package using the new swift package show-traits command.
For more information on changes to Swift Package Manager, see the SwiftPM 6.3 Release Notes.
Swift Testing has a number of improvements, including warning issues, test cancellation, and image attachments.
* Warning issues: Specify the severity of a test issue using the new severity parameter to Issue.record. You can record an issue as a warning using Issue.record(“Something suspicious happened”, severity: .warning). This is reflected in the test’s results, but doesn’t mark the test as a failure.
* Test cancellation: Cancel a test (and its task hierarchy) after it starts using try Test.cancel(). This is helpful for skipping individual arguments of a parameterized test, or responding to conditions during a test that indicate it shouldn’t proceed. (Both warning issues and cancellation are sketched after this list.)
* Image attachments: Attach common image types during a test on Apple and Windows platforms. This is exposed via several new cross-import overlay modules with UI frameworks like UIKit.
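A sketch combining warning issues and cancellation (the preconditions here are hypothetical):

import Testing

@Test func syncsWithServer() async throws {
    let cacheIsStale = true        // hypothetical precondition
    let serverReachable = false    // hypothetical precondition

    if cacheIsStale {
        // Recorded in the results, but does not mark the test as a failure
        Issue.record("Something suspicious happened", severity: .warning)
    }
    if !serverReachable {
        try Test.cancel()          // cancels this test and its task hierarchy
    }
}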
The Swift Testing evolution proposals included in Swift 6.3 are ST-0012, ST-0013, ST-0014, ST-0015, ST-0016, ST-0017, and ST-0020.
Swift 6.3 adds three new experimental capabilities to DocC:
* Markdown output: Generate Markdown versions of your documentation pages alongside the standard rendered JSON covering symbols, articles, and tutorials. Try it out by passing --enable-experimental-markdown-output to docc convert.
* Per-page static HTML content: Embed a lightweight HTML summary of each page — including title, description, availability, declarations, and discussion — directly into the index.html file within a tag. This improves discoverability by search engines and accessibility for screen readers without requiring JavaScript. Try it out by passing --transform-for-static-hosting --experimental-transform-for-static-hosting-with-content to docc convert.
* Code block annotations: Unlock new formatting annotations for code blocks, including nocopy for disabling copy-to-clipboard, highlight to highlight specific lines by number, showLineNumbers to display line numbers, and wrap to wrap long lines by column width. Specify these options in a comma-separated list after the language name on the opening fence line.
DocC validates line indices and warns about unrecognized options. Try out the new code block annotations with --enable-experimental-code-block-annotations.
Embedded Swift has a wide range of improvements in Swift 6.3, from enhanced C interoperability and better debugging support to meaningful steps toward a complete linkage model. For a detailed look at what’s new in embedded Swift, see Embedded Swift Improvements coming in Swift 6.3.
Swift 6.3 includes the first official release of the Swift SDK for Android. With this SDK, you can start developing native Android programs in Swift, update your Swift packages to support building for Android, and use Swift Java and Swift Java JNI Core to integrate Swift code into existing Android applications written in Kotlin/Java. This is a significant milestone that opens new opportunities for cross-platform development in Swift.
To learn more and try out Swift for Android development in your own projects, see Getting Started with the Swift SDK for Android.
Swift 6.3 reflects the contributions of many people across the Swift community — through code, proposals, forum discussions, and feedback from real-world experience. A special thank you to the Android Workgroup, whose months of effort — building on many years of grassroots community work — brought the Swift SDK for Android from nightly previews to an official release in Swift 6.3.
If you’d like to get involved in what comes next, the Swift Forums are a great place to start.
Try out Swift 6.3 today! You can find instructions for installing a Swift 6.3 toolchain on the Install Swift page.
...
Read the original on swift.org »
After a grueling seven weeks of court proceedings and more than 40 hours of tense deliberations across nine days in one of the country’s most closely watched civil trials, jurors handed down a landmark decision in Los Angeles County Superior Court on Wednesday, finding Instagram and YouTube responsible for the suffering of a Chico, Calif., woman who charged the platforms were built to addict young users.
Kaley G. M., the 20-year-old plaintiff, who testified in February, arrived in court just before 10 a.m. She remained stoic as the verdict, an award of $3 million, and a decision warranting additional punitive damages were read out. A companion fought back tears, her chin quivering. Several observers wept silently despite Judge Carolyn B. Kuhl’s repeated warning not to respond.
“We need to have no reaction to the jury’s verdict — no crying out, no reactions, no disturbance,” Kuhl warned. “If there is we will have to have you removed from the courtroom, and we sure don’t want to have to do that.”
Less than two hours after it delivered its initial verdict, the jury returned to award $2.1 million in punitive damages against Meta and $900,000 against Google, bringing the total judgment against the companies to $6 million combined.
Attorneys for Snapchat and TikTok also appeared in court Wednesday morning to hear the decision. The two platforms settled with Kaley out of court for undisclosed sums before the trial.
“We respectfully disagree with the verdict and are evaluating our legal options,” a spokesperson for Instagram’s parent company, Meta, said.
The verdict arrived less than 24 hours after a New Mexico jury found Meta liable for $375 million in damages related to state Atty. Gen. Raúl Torrez’s claim it turned Instagram into a “breeding ground” for child predators — a decision the platform has vowed to appeal.
The Los Angeles jury took much longer to deliberate. On Friday, jurors preempted their pizza lunch break to ask Kuhl whether all of them should weigh in on damages, or only those who’d agreed on liability. On Monday, they told Kuhl they were struggling to agree about one of the defendants.
Kuhl told the jury to keep trying.
Kaley said she first got hooked on YouTube and Instagram in grade school. Jurors were charged with determining whether the companies acted negligently in designing their products and failed to warn her of the dangers.
Their verdict will echo through thousands of other pending lawsuits, reshaping the legal landscape for some of the world’s most powerful companies. Experts say the payout will likely set the bar for future awards.
It comes on the heels of a Delaware court decision clearing Meta’s insurers of responsibility for damages incurred from “several thousand lawsuits regarding the harm its platforms allegedly cause children” — a ruling that could leave it and other tech titans on the hook for untold future millions.
Until this trial, which began in late January, no suit seeking to hold tech titans responsible for harms to children had ever reached a jury. Many more are now set to follow.
Kaley’s test case was chosen from among scores of suits currently consolidated in California state court. Hundreds more are moving together through the federal system, where the first trial is set for June in San Francisco.
Collectively, the suits seek to prove that harm flowed not from user content but from the design and operation of the platforms themselves.
That’s a critical legal distinction, experts say. Social media companies have so far been protected by a powerful 1996 law called Section 230, which has shielded the apps from responsibility for what happens to children who use them.
Lawyers for Meta and Google argued Kaley’s struggles were the result of her fractious home life and fallout from the COVID pandemic, not social media.
“I don’t think it should have ever gotten to a jury trial,” said Erwin Chemerinsky, dean of the UC Berkeley School of Law and an expert on the 1st Amendment, which also protects the platforms. “All media tries to keep people on [their platform] and coming back.”
Others say social media’s algorithmic ability to capture, cultivate and control attention makes it fundamentally different from teen-friendly romantasy novels, Marvel movies or first-person shooter games.
“These are truly hard and heartbreaking cases,” said Eric J. Segall, a professor at Georgia State College of Law. “They represent a clash between free speech values and the real harms caused by protecting those companies that engage in free speech amplification for profit.”
“Letting jurors sort all of this out without more guidance is tempting but also risky,” he said.
As deliberations that began March 13 wore on, jurors signaled similar skepticism, asking to see internal Meta documents, and reviewing testimony from a defense expert “in regards to her professional integrity; being the only doctor stating social media was not a contributing factor to KGM’s mental health.”
They appeared to agree on Meta’s culpability by Friday, but labored through Tuesday to hash out a decision for Google, delivering their verdict just after 10 a.m. Wednesday.
“Today, a jury saw the truth and held Meta and Google accountable for designing products that addict and harm children,” said Lexi Hazam, court-appointed co-lead plaintiffs’ counsel in the related federal action. “This verdict sends an unmistakable message that no company is above accountability.”
The outcome will likely transform the already heated debate over social media addiction as a concept, what role apps may play in engineering it, and whether individuals like Kaley can prove they’re afflicted.
The platforms’ attorneys sought to cast doubt on the ailment — emphasizing that there is no formal diagnosis for social media addiction — while also arguing that Kaley had never been treated for it.
“Substitute the words ‘YouTube’ for the word methamphetamine,” attorney Luis Li urged the jury during closing arguments Thursday. “Ask yourselves with your lifetime of experience whether anybody suffering from addiction could say, ‘Yeah, I just kind of lost interest.’”
“She was sitting there for hours without being on her phone,” said Meta attorney Paul W. Schmidt.
YouTube’s team also sought to distance the video-sharing app from Instagram and other social media platforms, saying its functions are fundamentally different.
Kaley’s team called it “a gateway” to her social media addiction.
“YouTube wasn’t a gateway to anything,” Li said. “YouTube was a toy that a child liked and then put down.”
Jurors disagreed, ultimately holding the platform liable, though they split the liability 70-30, weighted heavily toward Meta.
Plaintiffs’ attorney Mark Lanier leaned on his down-home Texas folksiness throughout the trial, telling the jury what was on his heart and scribbling with grease pencil on his demonstrative aids. In his direct addresses to the jury, he used a set of wooden baby blocks, stacks of paper, even a hammer and a crate of eggs.
During the punitive phase of the trial late Wednesday morning, he brought out a glass jar filled with 415 peanut M&Ms to represent the $415 billion of stockholders’ equity Google’s parent company, Alphabet, was valued at in December.
“What are you going to fine them for this?” he probed. “Are you going to fine them a billion?” He plucked a green M&M from the top of the pile. “Two billion?” He pulled out another. “You know a pack of M&Ms has 18 M&Ms in it? You fine them a billion, and they’re not going to notice.”
“The last thing in the world they want you to do is talk about how many M&Ms they’ve got,” the lawyer said, urging jurors to “talk to Meta in Meta money.”
“The last thing in the world they want you to do is focus on what it takes to hold them accountable for what they’ve done,” Lanier said.
Conversely, the tech teams relied on slick digital presentations to review evidence and illustrate their arguments.
“Focus on those facts that are at issue in this case,” Schmidt urged the jury during closings. “Not lawyer arguments, not props like a glass of water or a jar of M&Ms, but actual proof in evidence.”
During the punitive phase of the trial, he sought to emphasize that “there wasn’t an intention to do harm” to children, and that the company had worked diligently to make its products safer.
The case was the first to get Meta CEO Mark Zuckerberg on the witness stand, where he defended Instagram’s safety record and lamented the difficulty of keeping youngsters off the app.
It also made public tens of thousands of pages of internal documents — documents Lanier argued showed the companies intentionally targeted children, and engineered their products to keep them on the platforms longer.
“These are internal documents that you’re uniquely seeing because you’re the jury that got to sit on this case,” Lanier told the jury during closing arguments on Thursday. “It’s given you exposure that the world hasn’t had.”
Those previously undisclosed materials likely proved critical to the jury’s ultimate verdict, experts said.
“Internal emails here were key — they painted a picture of indifference at Meta,” said Joseph McNally, former Acting U.S. Attorney for the Central District of California and an expert in “technology-related harm.”
The tech titans have already vowed to appeal both the California and New Mexico verdicts, all but ensuring the issue is ultimately decided by the Supreme Court, experts said.
...
Read the original on www.latimes.com »