10 interesting stories served every morning and every evening.
I never would have read Careless People, Sarah Wynn-Williams’s tell-all memoir about her years running global policy for Facebook, but then Meta’s lawyer tried to get the book suppressed and secured an injunction to prevent her from promoting it:
So I’ve got something to thank Meta’s lawyers for, because it’s a great book! Not only is Wynn-Williams a skilled and lively writer who spills some of Facebook’s most shameful secrets, but she’s also a kick-ass narrator (I listened to the audiobook, which she voices):
I went into Careless People with strong expectations about the kind of disgusting behavior it would chronicle. I have several friends who took senior jobs at Facebook, thinking they could make a difference (three of them actually appear in Wynn-Williams’s memoir), and I’ve got a good sense of what a nightmare of a company it is.
But Wynn-Williams was a lot closer to three of the key personalities in Facebook’s upper echelon than anyone in my orbit: Mark Zuckerberg, Sheryl Sandberg, and Joel Kaplan, who was elevated to Meta’s chief global affairs officer after the Trump II election. I already harbor an atavistic loathing of these three based on their public statements and conduct, but the events Wynn-Williams reveals from their private lives make them out to be beyond despicable. There’s Zuck, whose underlings let him win at board games like Settlers of Catan because he’s a manbaby who can’t lose (and who accuses Wynn-Williams of cheating when she fails to throw a game of Ticket to Ride while they’re flying in his private jet). There’s Sandberg, who demands the right to buy a kidney for her child from someone in Mexico, should that child ever need a kidney.
Then there’s Kaplan, who is such an extraordinarily stupid and awful oaf that it’s hard to pick out just one example, but I’ll try. At one point, Wynn-Williams gets Zuck a chance to address the UN General Assembly. As is his wont, Zuck refuses to be briefed before he takes the dais (he’s repeatedly described as unwilling to consider any briefing note longer than a single text message). When he gets to the mic, he spontaneously promises that Facebook will provide internet access to refugees all over the world. Various teams at Facebook then race around, trying to figure out whether this is something the company is actually doing, and once they realize Zuck was just bullshitting, set about trying to figure out how to do it. They get some way down this path when Kaplan intervenes to insist that giving away free internet to refugees is a bad idea, and that instead, they should sell internet access to refugees. Facebookers dutifully throw themselves into this absurd project, until Kaplan fires off an email stating that he’s just realized that refugees don’t have any money. The project dies.
The path that brought Wynn-Williams into the company of these careless people is a weird — and rather charming — one. As a young woman, Wynn-Williams was a minor functionary in the New Zealand diplomatic corps, and during her foreign service, she grew obsessed with the global political and social potential of Facebook. She threw herself into the project of getting hired to work on Facebook’s global team, working on strategy for liaising with governments around the world. The biggest impediment to landing this job is that it doesn’t exist: sure, FB was lobbying the US government, but it was monumentally uninterested in the rest of the world in general, and the governments of the world in particular.
But Wynn-Williams persists, pestering potentially relevant execs with requests, working friends-of-friends (Facebook itself is extraordinarily useful for this), and refusing to give up. Then comes the Christchurch earthquake. Wynn-Williams is in the US, about to board a flight, when her sister, a news presenter, calls her while trapped inside a collapsed building (the sister hadn’t been able to get a call through to anyone in NZ). Wynn-Williams spends the flight wondering if her sister is dead or alive, and only learns that her sister is OK through a post on Facebook.
The role Facebook played in the Christchurch quake transforms Wynn-Williams’s passion for Facebook into something like religious zealotry. She throws herself into the project of landing the job, and she does, and after some funny culture-clashes arising from her Kiwi heritage and her public service background, she settles in at Facebook.
Her early years there are sometimes comical, sometimes scary, and are characteristic of a company that is growing quickly and unevenly. She’s dispatched to Myanmar amidst a nationwide block of Facebook ordered by the ruling military junta, and at one point it seems like she’s about to get kidnapped and imprisoned by goons from the communications ministry. She arranges for a state visit by NZ Prime Minister John Key, who wants a photo-op with Zuckerberg, who — oblivious to the prime minister standing right there in front of him — berates Wynn-Williams for demanding that he meet with some jackass politician (they do the photo-op anyway).
One thing is clear: Facebook doesn’t really care about countries other than America. Though Wynn-Williams chalks this up to plain old provincial chauvinism (which FB’s top echelon possess in copious quantities), there’s something else at work. The USA is the only country in the world that a) is rich, b) is populous, and c) has no meaningful privacy protections. If you make money selling access to dossiers on rich people to advertisers, America is the most important market in the world.
But then Facebook conquers America. Not only does FB saturate the US market, it uses its free cash-flow and high share price to acquire potential rivals, like Whatsapp and Instagram, ensuring that American users who leave Facebook (the service) remain trapped by Facebook (the company).
At this point, Facebook — Zuckerberg — turns towards the rest of the world. Suddenly, acquiring non-US users becomes a matter of urgency, and overnight Wynn-Williams is transformed from the sole weirdo talking about global markets to the key asset in pursuit of the company’s top priority.
Wynn-Williams’s explanation for this shift lies in Zuckerberg’s personality, his need to constantly dominate (which is also why his subordinates have learned to let him win at board games). This is doubtless true: not only has this aspect of Zuckerberg’s personality been on display in public for decades, but Wynn-Williams was also able to observe it first-hand, behind closed doors.
But I think that in addition to this personality defect, there’s a material pressure for Facebook to grow that Wynn-Williams doesn’t mention. Companies that grow get extremely high price-to-earnings (P:E) ratios, meaning that investors are willing to spend many dollars on shares for every dollar the company earns. Two similar companies with similar earnings can have vastly different valuations (the combined value of all the company’s outstanding shares), depending on whether one of them is still growing.
High P:E ratios reflect a bet on the part of investors that the company will continue to grow, and those bets only become more extravagant the more the company grows. This is a huge advantage to companies with “growth stocks.” If your shares constantly increase in value, they are highly liquid — that is, you can always find someone who’s willing to buy your shares from you for cash, which means that you can treat shares like cash. But growth stocks are better than cash, because money grows slowly, if at all (especially in periods of extremely low interest rates, like the past 15+ years). Growth stocks, on the other hand, grow.
Best of all, companies with growth stocks have no trouble finding more stock when they need it. They just type zeroes into a spreadsheet and more shares appear. Contrast this with money. Facebook may take in a lot of money, but the money only arrives when someone else spends it. Facebook’s access to money is limited by exogenous factors — your willingness to send your money to Facebook. Facebook’s access to shares is only limited by endogenous factors — the company’s own willingness to issue new stock.
That means that when Facebook needs to buy something, there’s a very good chance that the seller will accept Facebook’s stock in lieu of US dollars. Whether Facebook is hiring a new employee or buying a company, it can outbid rivals who only have dollars to spend, because that bidder has to ask someone else for more dollars, whereas Facebook can make its own stock on demand. This is a massive competitive advantage.
But it is also a massive business risk. As Stein’s Law has it, “anything that can’t go on forever eventually stops.” Facebook can’t grow forever by signing up new users. Eventually, everyone who might conceivably have a Facebook account will get one. When that happens, Facebook will need to find some other way to make money. They could enshittify — that is, shift value from the company’s users and customers to itself. They could invent something new (like metaverse, or AI). But if they can’t make those things work, then the company’s growth will have ended, and it will instantaneously become grossly overvalued. Its P:E ratio will have to shift from the high value enjoyed by growth stocks to the low value endured by “mature” companies.
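To see how brutal that re-rating can be, here is a toy calculation with made-up numbers (illustrative only, not Meta’s actual financials):

# Two views of the same company: priced as a growth stock vs. as a "mature" one.
annual_earnings = 40e9   # hypothetical: $40B in yearly profit

growth_pe = 30           # multiple investors pay while they believe growth continues
mature_pe = 12           # multiple typical of a company whose growth has ended

growth_valuation = annual_earnings * growth_pe   # $1.2 trillion
mature_valuation = annual_earnings * mature_pe   # $480 billion

# Same company, same profits: the only thing that changed is the market's
# belief about future growth.
lost = growth_valuation - mature_valuation
print(f"Re-rating erases ${lost / 1e9:,.0f}B, "
      f"{1 - mature_valuation / growth_valuation:.0%} of the valuation")

The earnings stay exactly the same; only the multiple moves, and more than half the valuation evaporates.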
When that happens, anyone who is slow to sell will lose a ton of money. So investors in growth stocks tend to keep one fist poised over the “sell” button and sleep with one eye open, watching for any hint that growth is slowing. It’s not just that growth gives FB the power to outcompete rivals — it’s also the case that growth makes the company vulnerable to massive, sudden devaluations. What’s more, if these devaluations are persistent and/or frequent enough, the key FB employees who accepted stock in lieu of cash for some or all of their compensation will either demand lots more cash, or jump ship for a growing rival. These are the very same people that Facebook needs to pull itself out of its nosedives. For a growth stock, even small reductions in growth metrics (or worse, declines) can trigger cascades of compounding, mutually reinforcing collapse.
This is what happened in early 2022, when Meta posted slightly lower-than-anticipated US growth numbers, and the market all pounded on the “sell” button at once, lopping $250,000,000,000 off the company’s valuation in 24 hours. At the time, it was the worst-ever single-day loss for any company in human history:
Facebook’s conquest of the US market triggered an emphasis on foreign customers, but not just because Zuck is obsessed with conquest. For Facebook, a decline in US growth posed an existential risk, the possibility of mass stock selloffs and with them, the end of the years in which Facebook could acquire key corporate rivals and executives with “money” it could print on the premises, on demand.
So Facebook cast its eye upon the world, and Wynn-Williams’s long insistence that the company should be paying attention to the political situation abroad suddenly starts landing with her bosses. But those bosses — Zuck, Sandberg, Kaplan and others — are “careless.” Zuck screws up opportunity after opportunity because he refuses to be briefed, forgets what little information he’s been given, and blows key meetings because he refuses to get out of bed before noon. Sandberg’s visits to Davos are undermined by her relentless need to promote herself, her “Lean In” brand, and her petty gamesmanship. Kaplan is the living embodiment of Green Day’s “American Idiot” and can barely fathom that foreigners exist.
Wynn-Williams’s adventures during this period are very well told, and are, by turns, harrowing and hilarious. Time and again, Facebook’s top brass snatch defeat from the jaws of victory, squandering incredible opportunities that Wynn-Williams secures for them because of their pettiness, short-sightedness, and arrogance (that is, their carelessness).
But Wynn-Williams’s disillusionment with Facebook isn’t rooted in these frustrations. Rather, she is both personally and professionally aghast at the company’s disgusting, callous and cruel behavior. She describes how her boss, Joel Kaplan, relentlessly sexually harasses her, and everyone in a position to make this stop tells her to shut up and take it. When Wynn-Williams gives birth to her second child, she hemorrhages, almost dies, and ends up in a coma. Afterwards, Kaplan gives her a negative performance review because she was “unresponsive” to his emails and texts while she was dying in an ICU. This is a significant escalation of the earlier behavior she describes, like pestering her with personal questions about breastfeeding, video-calling her from bed, and so on (Kaplan is Sandberg’s ex-boyfriend, and Wynn-Williams describes another creepy event where Sandberg pressures her to sleep next to her in the bedroom on one of Facebook’s jets, something Wynn-Williams says she routinely does with the young women who report to her).
Meanwhile, Zuck is relentlessly pursuing Facebook’s largest conceivable growth market: China. The only problem: China doesn’t want Facebook. Zuck repeatedly tries to engineer meetings with Xi Jinping so he can plead his case in person. Xi is monumentally hostile to this idea. Zuck learns Mandarin. He studies Xi’s book, conspicuously displays a copy of it on his desk. Eventually, he manages to sit next to Xi at a dinner where he begs Xi to name his next child. Xi turns him down.
After years of persistent nagging, lobbying, and groveling, Facebook’s China execs start to make progress with a state apparatchik who dangles the possibility of Facebook entering China. Facebook promises this factotum the world — all the surveillance and censorship the Chinese state wants and more. Then, Facebook’s contact in China is jailed for corruption, and they have to start over.
At this point, Kaplan has punished Wynn-Williams — she blames it on her attempts to get others to force him to stop his sexual harassment — and cut her responsibilities in half. He tries to maneuver her into taking over the China operation, something he knows she absolutely disapproves of and has refused to work on — but she refuses again. Instead, she is put in charge of hiring the new chief of China operations, giving her access to a voluminous paper-trail detailing the company’s dealings with the Chinese government.
According to Wynn-Williams, Facebook actually built an extensive censorship and surveillance system for the Chinese state — spies, cops and military — to use against Chinese Facebook users, and FB users globally. They promise to set up caches of global FB content in China that the Chinese state can use to monitor all Facebook activity, everywhere, with the implication that they’ll be able to spy on private communications, and censor content for non-Chinese users.
Despite all of this, Facebook is never given access to China. However, the Chinese state is able to use the tools Facebook built for it to attack independence movements, the free press and dissident uprisings in Hong Kong and Taiwan.
Meanwhile, in Myanmar, a genocide is brewing. NGOs and human rights activists keep reaching out to Facebook to get them to pay attention to the widespread use of the platform to whip up hatred against the country’s Muslim minority group, the Rohingya. Despite having expended tremendous amounts of energy to roll out “Free Basics” in Myanmar (a program whereby Facebook bribes carriers to exclude its own services from data caps), with the result that in Myanmar, “the internet” is synonymous with “Facebook,” the company has not expended any effort to manage its Burmese presence. The entire moderation staff consists of one (later two) Burmese speakers who are based in Dublin and do not work local hours (later, these two are revealed as likely stooges for the Myanmar military junta, who are behind the genocide plans).
The company has also failed to invest in Burmese language support for its systems — posts written in Burmese script are not stored as Unicode, meaning that none of the company’s automated moderation systems can parse it. The company is so hostile to pleas to upgrade these systems that Wynn-Williams and some colleagues create secret, private Facebook groups where they can track the failures of the company and the rising tide of lethal violence in the country (this isn’t the only secret dissident Facebook group that Wynn-Williams joins — she’s also part of a group of women who have been sexually harassed by colleagues and bosses).
The genocide that follows is horrific beyond measure. And, as with the Trump election, the company’s initial posture is that they couldn’t possibly have played a significant role in a real-world event that shocked and horrified its rank-and-file employees.
The company, in other words, is “careless.” Warned of imminent harms to its users, to democracy, to its own employees, the top executives simply do not care. They ignore the warnings and the consequences, or pay lip service to them. They don’t care.
Take Kaplan: after figuring out that the company can’t curry favor with the world’s governments by selling drone-delivered wifi to refugees (the drones don’t fly and the refugees are broke), he hits on another strategy. He remakes “government relations” as a sales office, selling political ads to politicians who are seeking to win over voters, or, in the case of autocracies, disenfranchised hostage-citizens. This is hugely successful, both as a system for securing government cooperation and as a way to transform Facebook’s global policy shop from a cost-center to a profit-center.
But of course, it has a price. Kaplan’s best customers are dictators and would-be dictators, fomenters of hatred and genocide, authoritarians seeking opportunities to purge their opponents through exile and/or murder.
Wynn-Williams makes a very good case that Facebook is run by awful people who are also very careless — in the sense of being reckless, incurious, indifferent.
But there’s another meaning to “careless” that lurks just below the surface of this excellent memoir: “careless” in the sense of “arrogant” — in the sense of not caring about the consequences of their actions.
To me, this was the most important — but least-developed — lesson of Careless People. When Wynn-Williams lands at Facebook, she finds herself surrounded by oafs and sociopaths, cartoonishly selfish and shitty people, who, nevertheless, have built a service that she loves and values, along with hundreds of millions of other people.
She’s not wrong to be excited about Facebook, or its potential. The company may be run by careless people, but they are still prudent, behaving as though the consequences of screwing up matter. They are “careless” in the sense of “being reckless,” but they care, in the sense of having a healthy fear (and thus respect) for what might happen if they fully yield to their reckless impulses.
Wynn-Williams’s firsthand account of the next decade is not a story of these people becoming more reckless; rather, it’s a story in which the possibility of consequences for that recklessness recedes, and with it, so does their care over those consequences.
Facebook buys its competitors, freeing it from market consequences for its bad acts. By buying the places where disaffected Facebook users are seeking refuge — Instagram and Whatsapp — Facebook is able to insulate itself from the discipline of competition — the fear that doing things that are adverse to its users will cause them to flee.
Facebook captures its regulators, freeing it from regulatory consequences for its bad acts. By playing a central role in the electoral campaigns of Obama and then other politicians around the world, Facebook transforms its watchdogs into supplicants who are more apt to beg it for favors than hold it to account.
Facebook tames its employees, freeing it from labor consequences for its bad acts. As engineering supply catches up with demand, Facebook’s leadership come to realize that they don’t have to worry about workforce uprisings, whether incited by impunity for sexually abusive bosses, or by the company’s complicity in genocide and autocratic oppression.
First, Facebook becomes too big to fail.
Then, Facebook becomes too big to jail.
Finally, Facebook becomes too big to care.
This is the “carelessness” that ultimately changes Facebook for the worse, that turns it into the hellscape that Wynn-Williams is eventually fired from after she speaks out once too often. Facebook bosses aren’t just “careless” because they refuse to read a briefing note that’s longer than a tweet. They’re “careless” in the sense that they arrive at a juncture where they don’t have to care whom they harm, whom they enrage, whom they ruin.
There’s a telling anecdote near the end of Careless People. Back in 2017, leaks revealed that Facebook’s sales-reps were promising advertisers the ability to market to teens who felt depressed and “worthless”:
Wynn-Williams is — rightly — aghast about this, and even more aghast when she sees the company’s official response, in which they disclaim any knowledge that this capability was being developed and fire a random, low-level scapegoat. Wynn-Williams knows they’re lying. She knows that this is a standard offering, one that the company routinely boasts about to advertisers.
But she doesn’t mention the other lies that Facebook tells in this moment: for one thing, the company offers advertisers the power to target more teens than actually exist. The company proclaims the efficacy of its “sentiment analysis” tool that knows how to tell if teens are feeling depressed or “worthless,” even though these tools are notoriously inaccurate, hardly better than a coin-toss, a kind of digital phrenology.
Facebook, in other words, isn’t just lying to the public about what it offers to advertisers — it’s lying to advertisers, too. Contra those who say, “if you’re not paying for the product, you’re the product,” Facebook treats anyone it can get away with abusing as “the product” (just like every other tech monopolist):
Wynn-Williams documents so many instances in which Facebook’s top executives lie — to the courts, to Congress, to the UN, to the press. Facebook lies when it is beneficial to do so — but only when it can get away with it. By the time Facebook was lying to advertisers about its depressed teen targeting tools, it was already colluding with Google to rig the ad market with an illegal tool called “Jedi Blue”:
Facebook’s story is the story of a company that set out to become too big to care, and achieved that goal. The company’s abuses track precisely with its market dominance. It enshittified things for users once it had the users locked in. It screwed advertisers once it captured their market. It did the media-industry-destroying “pivot to video” fraud once it captured the media:
The important thing about Facebook’s carelessness is that it wasn’t the result of the many grave personality defects in Facebook’s top executives — it was the result of policy choices. Government decisions not to enforce antitrust law, to allow privacy law to wither on the vine, to expand IP law to give Facebook a weapon to shut down interoperable rivals — these all created the enshittogenic environment that allowed the careless people who run Facebook to stop caring.
The corollary: if we change the policy environment, we can make these careless people — and their successors, who run other businesses we rely upon — care. They may never care about us, but we can make them care about what we might do to them if they give in to their carelessness.
Meta is in global regulatory crosshairs, facing antitrust action in the USA:
And muscular enforcement pledges in the EU:
The law cannot make a man love me, but it can stop him from lynching me, and I think that’s pretty important.
What Happens When Private Equity Owns Your Kid’s Day Care https://jacobin.com/2025/04/private-equity-day-care-childcare/
#15yrsago India’s copyright bill gets it right https://web.archive.org/web/20100425031519/https://www.michaelgeist.ca/content/view/4974/196/
#15yrsago Hitler’s pissed off about fair use https://www.youtube.com/watch?v=kBO5dh9qrIQ
#5yrsago Unmasking the registrants of the “reopen” websites https://pluralistic.net/2020/04/22/filternet/#krebs
#1yrago Paying for it doesn’t make it a market https://pluralistic.net/2024/04/22/kargo-kult-kaptialism/#dont-buy-it
* Can we use the Internet for Democracy?
https://www.youtube.com/watch?v=Zh_HON6iql8
* Enshittification: Why Everything Suddenly Got Worse and What to Do About It, Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
* Unauthorized Bread: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
* Enshittification, Why Everything Suddenly Got Worse and What to Do About It (the graphic novel), Firstsecond, 2026
* Enshittification: a nonfiction book about platform decay for Farrar, Straus, Giroux. Status: second pass edit underway (readaloud)
* Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS FEB 2025
This work — excluding any serialized fiction — is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
“When life gives you SARS, you make sarsaparilla” -Joey “Accordion Guy” DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (“BOGUS AGREEMENTS”) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
...
Read the original on pluralistic.net »
Dealing with open source software, I regularly encounter many kinds of licenses — MIT, Apache, BSD, GPL being the most prominent — and I’ve taken time out to read them. Of the many, the GNU General Public License (GPL) stands out the most. It reads like a letter to the reader rather than legalese, and feels quite in tune with the spirit of open source and software freedom.
Although GPLv3 is the most current version, I commonly encounter software that makes use of GPLv2. I got curious about the last line in its license notice:
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
Why does this license notice have a physical address, and not a URL? After all, even though the full license doesn’t often get included with software, it’s a simple matter to do a search and find the text of the GPLv2. Do people write to this address, and what happens if you do?
I turned to the Open Source Stack Exchange and got a very helpful answer. It’s because the GPLv2 was published in 1991, and most people were not online. Most people would have acquired software through physical media (such as tape or floppies) rather than a download.
Considering the storage constraints back then, it wouldn’t be surprising if developers only included the license notice, and not the entire license. It makes sense that the most common form of communication would have been through post.
The GPLv3, published in 2007, does contain a URL in the license notice since Internet usage was more widespread at the time.
I decided to write to the address to see what would happen. To do that, I would need some stamps and envelopes (I found one at my workplace) to send the request, and a self-addressed envelope with an international reply coupon to cover the cost of the reply.
I was disappointed to find out that the UK’s Royal Mail discontinued international reply coupons in 2011. The only alternative that I could think of was to buy some US stamps.
The easiest place to look for US stamps was on Ebay. I didn’t realize that I was stepping briefly into the world of philately; most stamp listings on Ebay were covered in phrases and terminology such as very fine grade, MNH (Mint Never Hinged), FDC (First Day Cover), NDC (No Die Cut), NDN (Nondenominated), and so on. It’s pretty easy to glean that these are properties that collectors would be looking for.
I ordered what seemed to be a ‘global’ stamp, for the smallest but safest amount that I could (about £3.86). The listing mentioned that it was ‘uncertified’, which was mildly unnerving: did that mean it was an invalid stamp? I decided to chance it, and quickly exited that world.
After a few weeks of waiting, I eventually received the ‘African Daisy global forever vert pair’ stamp, which was round! I should have noticed that the seller sent me the item using stamps at a much lower denomination than those I had ordered. Oh well.
With the self-addressed envelope ready, I wrote the request and addressed it to the GPLv2 address. Luckily I did have some UK stamps available to send the letter with.
Writing the address on the envelope was awkward, as I hadn’t used a pen in several years; it took a few attempts and some wasted envelopes, and printing the address would have taken less time. But it was ready, so I posted it in my nearest Royal Mail box.
I had posted the letter in June 2022, and about five weeks later, I received a reply. The round stamps looked sufficiently stamped upon with wavy lines, known as cancellation marks, which are yet another thing that philatelists like to collect!
Anyway the letter inside contained the full license text on 5 sheets of double-sided paper.
The first thing that came to my attention was that the paper the text was printed on wasn’t A4; it was smaller, and not a size I was familiar with. I measured it and found that it’s US Letter size paper, at about 21.5cm x 27.9cm. I completely forgot that the US, Canada, and a few other countries don’t follow the standard international paper sizes, even though I had written about it earlier.
There was a problem that I noticed right away, though: this text was from the GPL v3, not the GPL v2. In my original request I had never mentioned the GPL version I was asking about.
The original license notice makes no mention of the GPL version either. Should the fact that the license notice contained a physical address have been enough metadata, or at least a clue, that I was actually requesting the GPLv2 license? Or should I have mentioned that I was seeking the GPLv2 license?
I could pursue this by writing again and requesting the right thing, but it would take too much effort to follow up on, and I’m overall satisfied with what I received. As a postal introvert, I will now need a long period of rest to recoup.
...
Read the original on code.mendhak.com »
What, exactly, does a social network do? Is it a website that connects people with one another online, a digital gathering place where we can consume content posted by our friends? That’s certainly what it was in its heyday, in the two-thousands. Facebook was where you might find out that your friend was dating someone new, or that someone had thrown a party without inviting you. In the course of the past decade, though, social media has come to resemble something more like regular media. It’s where we find promotional videos created by celebrities, pundits shouting responses to the news, aggregated clips from pop culture, a rising tide of A.I.-generated slop, and other content designed to be broadcast to the largest number of viewers possible. The people we follow and the messages they post increasingly feel like needles in a digital haystack. Social media has become less social.
Facebook’s founder, Mark Zuckerberg, admitted as much during more than ten hours of testimony, over three days last week, in the opening phase of the Federal Trade Commission’s antitrust trial against Facebook’s parent company, Meta. The company, Zuckerberg said, has lately been involved in “the general idea of entertainment and learning about the world and discovering what’s going on.” This under-recognized shift away from interpersonal communication has been measured by the company itself. During the defense’s opening statement, Meta displayed a chart showing that the “percent of time spent viewing content posted by ‘friends’ ” has declined in the past two years, from twenty-two per cent to seventeen per cent on Facebook, and from eleven per cent to seven per cent on Instagram.
The F.T.C. is arguing that Meta maintained an illegal monopoly in the “personal social networking services” industry, in part by buying up Facebook’s competitors, such as Instagram, which the company acquired in 2012, and the messaging platform WhatsApp, which it acquired in 2014. But the F.T.C.’s definition of the social-media industry is hazy, and the antitrust case was already dismissed once, in 2021, partly because the “personal social networking services” market was too loosely defined. Meta’s counter-argument is, in a sense, that social media per se doesn’t exist now in the way that it did in the twenty-tens, and that what the company’s platforms are now known for—the digital consumption of all kinds of content—has become so widespread that no single company or platform can be said to monopolize it. In one of its slides at trial, Meta exhibited a graphic of a boxing ring showing the logos of Instagram, Facebook, and the various companies that Meta argues are competitors, including TikTok, YouTube, and Apple’s iMessage, though the F.T.C. doesn’t define any of those three as such. The company also used smartphone screenshots from the various apps to demonstrate how they’ve gravitated toward common formats: short video clips look similar on both Instagram and TikTok; messages look essentially the same in Instagram DMs as on Apple’s iMessage. Even as such similarities serve as helpful evidence for Meta’s defense, they also demonstrate how stultifying the entire online ecosystem has become. While in 2012 Facebook may have seemed singular and inescapable, now it looks like part of a crowded marketplace of apps competing to serve the same purpose.
The F.T.C.’s case, which originated during Donald Trump’s first term, entails reëvaluating business deals that it approved more than a decade ago, when the industry looked dramatically different. This makes the commission’s case less than airtight. Benedict Evans, an influential technology analyst, called the F.T.C.’s market definition of social networks “gerrymandering.” He told me, “By the F.T.C.’s definition, TikTok doesn’t compete with Facebook at all. Does that mean it would be O.K. for Facebook to buy TikTok?” Antitrust lawyers must prove that allegedly monopolistic practices cause consumer harm. In another antitrust case currently unfolding against Google, a court found that the company maintained a monopoly over parts of the online-advertising market by integrating its various automated advertising technologies, illegally privileging itself and harming its publishing customers by “reducing their revenue.” In the case of Meta, though, there is no price differential to point to—Meta’s platforms all allow users to access them for free—so the question of harm is less clear-cut.
The F.T.C. is arguing, instead, that Meta’s purported monopoly has led to a lack of innovation and to reduced consumer choice. But that, too, is difficult to prove in the case of Meta’s WhatsApp and Instagram acquisitions, because both sales occurred early in those companies’ life spans. In 2014, when WhatsApp was acquired, it had around half a billion users; now it has more than two billion. As Evans put it, the F.T.C. is arguing that “if Meta hadn’t bought WhatsApp, it would have become this voracious competitor.” He continued, “What we all actually know from following the history is that the founders of WhatsApp didn’t want to do any of the things that Meta did to fuel its runaway expansion.” One of WhatsApp’s founders once compared the service’s goals to those of Craigslist, Zuckerberg recalled during his testimony. Meta, by contrast, aggressively pursued growth, loading WhatsApp with features such as social groups and video calls. The F.T.C. notes that market competition can result in “improved features, functionalities, integrity measures, and user experiences”; it’s hard to mount a persuasive argument that an independent WhatsApp would necessarily have provided more of those things than a Zuckerberg-owned one. (Many social networks fail; Path and Google+ were two other threats that Zuckerberg perceived, but neither grew into a viable competitor. He did at one point attempt to buy Snapchat, and though that company survived, it failed to become a major rival.)
One of the most surprising moments in Zuckerberg’s testimony came when the F.T.C. presented him with a memo that he sent to company executives, in 2018, suggesting that it might be better to spin Instagram into its own entity by choice. Zuckerberg wrote that Instagram was potentially undermining Facebook’s success, and that businesses that are independent often perform better than they would within a parent conglomerate. “Over time we may face antitrust regulation requiring us to spin off our other apps anyway,” he noted, with some prescience. Seven years ago, before the advent of TikTok and the diversification of content across digital platforms, that kind of split might have resulted in more varied products for users, more quickly—or it might not have. Either way, the social-media landscape today is arguably in the midst of a dramatic overhaul. TikTok may ultimately be banned; generative A.I. may supplant the existing model of an open, user-generated internet. On April 15th, the Verge broke the news that OpenAI is developing a social network of its own, to compete with the likes of Instagram and X. The F.T.C. may be chasing an old problem just as newer, bigger ones appear on the horizon.
This week, the European Union fined Apple and Meta for anticompetitive practices, but the penalties—five hundred million euros and two hundred million euros, respectively—are relatively modest. If the U.S. case prevails, the F.T.C. will have to decide whether to force a wholesale breakup of Meta or seek less dramatic “remedies.” One factor in this calculus might be the wishes of President Trump. In recent months, Zuckerberg has visited the White House repeatedly, and he’s ingratiated himself with the Administration with moves, at Meta, against D.E.I. and fact-checking. So far, despite a growing closeness with Silicon Valley, Trump has nevertheless continued to back the suit against Meta. As in the Administration’s ongoing trade war, Trump appreciates a pronounced threat as a tool to force a deal. ByteDance, the owner of TikTok, has all but capitulated to a mandated sale of a majority of the company. With regard to Trump, at least, Zuckerberg might be expected to capitulate one way or another. ♦
...
Read the original on www.newyorker.com »
On loyalty to your employer
Note: This post originally appeared in HackerNoon in 2018. I’m republishing it here in order to preserve and share the original piece.
I’ve just returned to London having spent the past two weeks back home in Cork where I spent an awful lot of time with my father, a man who set up his first ever email account less than a year ago and has spent the past 30 years working for the same employer. My Dad is the antithesis of the tech industry in every sense. Considering the average ‘career’ with each employer in the tech industry is a touch under three years, the idea of spending 30 years working for the same employer is mind boggling. Despite this enormous disparity, I’m constantly witness to colleagues in the tech industry posting on LinkedIn about how great their employer is and why everyone should drop everything and come and work with them, only for them to announce a few short years later that they are moving on “to bigger and better things”.

I’m going to be the first to hold my hands up and admit to being extremely guilty of doing exactly that on a regular basis in the past. I work in recruitment. Employers pay me a lot of money to wax lyrical about how great they are. They pay me to convince you that the grass is not only greener, but their grass is more flexible and inclusive too. So how do I reconcile my apathy towards every employer claiming to be the best, and my ability to do a good job?

My criteria for vetting an employer worth working with are very straightforward. Anything beyond these four criteria is a bonus (and extremely subjective), but the four criteria below are my absolute zero-compromise criteria.

Do you pay reasonable salaries?

Fortunately, due to my line of work, asking for specifics around salaries is par for the course and not something an employer can easily lie about. To put it simply, if your salaries aren’t at least competitive then we’re wasting each other’s time. Pay fairly or pay well and we’re off to a good start.

Do you treat your people well?

Glassdoor is your friend. If there are a slew of negative comments, look for consistencies. Were they all posted around the same time? Are there consistent themes? Raise these points and ask for the employer’s perspective. A quality employer will be honest and highlight what steps they took to address those issues. Not every company has a helpful Glassdoor profile (a lot of startups have yet to be reviewed), so take to social media and look up current and former employees to see if there are any red flags.

Are you financially secure?

This is startup 101, folks. Do your due diligence. Companies House, Crunchbase, etc. are a good start. Enquire about their runway (how long they can survive if their current income and expenses stay constant). If they aren’t willing to be open and honest about their finances, walk away immediately.

Are you open to trying new things?

This criterion is quite specific to the work I do and may not be universally applicable. If you’re asking me to team up with you to improve your ability to hire people, then you categorically need to be open and willing to try new things. No amount of money will be enough to convince me to join your company and follow your same old tired recipe just because it worked well a couple of times in the past.

If you hit all of the above criteria, then I can do the thing that enables me to convince great people to work for your company. I can be absolutely transparent and honest with people.
So you’ve landed a great job, the office is incredible, the people seem super friendly, the money is good, the work is challenging and life seems pretty great. Post pictures of your desk littered with company branded swag. Enjoy yourself but don’t delude yourself.
You are a transaction. Sure, your employer gives you the impression they care about you but as soon as you start costing the company money or pose a risk to the company’s image or breach any other element of your 300 page contract, then I can absolutely assure you that they will drop you in a heartbeat. You don’t even need to do anything wrong to be at risk. If the company is struggling financially, due to no fault of yours, you and all your colleagues are at risk. Suddenly the corporate line of “we’re all family here” sounds a bit ridiculous.
Your employer pays you to spend more time with them than you spend with your family and/or loved ones. Your employer is one of the biggest influencers on your mental well-being. Your employer can and will replace you in a heartbeat if absolutely necessary.
Let me be explicitly clear, your employer isn’t your family and they are not your friend. They pay you to do a job and in return your only responsibility is to do that job well.
Do not sacrifice your relationship with family and friends to appease your employer.
Do not sacrifice your mental wellbeing to appease your employer.
Do not sacrifice your dignity, values, and ethics to appease your employer.
Do not buy into the bullshit hype of “hustle” to appease your employer.

Get your head down and work hard. If your employer compensates you well, puts effort into ensuring you are healthy in every sense and invests in your personal and/or professional growth then by all means, tell the world how happy you are.

Focus on your own growth. Focus on helping the humans you work with. Focus on being efficient with your time and efforts so that you can spend even more time and effort on the things and people that truly matter.

I’ll leave you with the words of my father on the eve of his 30-year work anniversary:
When I’m on my deathbed, I won’t look back at my life and wish I had worked harder. I’ll look back and wish I spent more time with the people I loved.
...
Read the original on www.talentstuff.com »
Today, we’re releasing Instant SQL, a new way to write SQL that updates your result set as you type to expedite query building and debugging — all with zero-latency, no run button required. Instant SQL is now available in MotherDuck and the DuckDB Local UI.
We built Instant SQL for a simple reason: writing SQL is still too tedious and slow. Not because of the language itself, but because the way we interact with databases hasn’t evolved much since SQL was created. Writing SQL isn’t just about syntax: it’s about making sense of your data, knowing what to ask, and figuring out how to get there. That process is iterative, and it’s hard.
“Instant SQL will save me the misery of having to try and wrangle SQL in my BI tool where iteration speed can be very slow. This lets me get the data right earlier in the process, with faster feedback than waiting for a chart to render or clearing an analytics cache.” — Mike McClannahan, CTO, DashFuel
Despite how much database engines have improved, with things like columnar storage, vectorized execution, and the creation of blazing-fast engines like DuckDB, which can scan billions of rows in seconds, the experience of building a query hasn’t kept up. We still write queries in a text editor, hit a run button, and wait to see what happens.
At MotherDuck, we’ve been tackling this problem from multiple angles. Last year, we released the Column Explorer, which gives you fast distributions and summary statistics for all the columns in your tables and result sets. We also released FixIt, an unreasonably effective AI fixer for SQL. MotherDuck users love these tools because they speed up data exploration and query iteration.
Instant SQL isn’t just an incremental improvement to SQL tooling: it’s a fundamentally new way to interact with your queries, one where you can see your changes instantly, debug naturally, and actually trust the code that your AI assistant suggests. No more waiting. No more context switching. Just flow.
Let’s take a closer look at how it works.
Everyone knows what it feels like to start a new query from scratch. Draft, run, wait, fix, run again—an exhausting cycle that repeats hundreds of times a day. Instant SQL gives you result set previews that update as you type. You’re no longer running queries—you’re exploring your data in real-time, maintaining an analytical flow state where your best thinking happens.

Whether your query is a simple transformation or a complex aggregation, Instant SQL will let you preview your results in real-time.
CTEs are easy to write, but difficult to debug. How many times a day do you comment out code to figure out what’s going on in a CTE? With Instant SQL, you can now click around and instantly visualize any CTE in seconds, rather than spend hours debugging. Even better, changes you make to a CTE are immediately reflected in all dependent select nodes, giving you real-time feedback on how your modifications cascade through the query.
We’ve all been there; you write a complex column formula for an important business metric, and when you run the query, you get a result set full of NULLs. You then have to painstakingly dismantle it piece-by-piece to determine if the issue is your logic or the underlying data. Instant SQL lets you break apart your column expressions in your result table to pinpoint exactly what’s happening. Every edit you make to the query is instantly reflected in how data flows through the expression tree. This makes debugging anything from complex numeric formulas to regular expressions feel effortless.
Preview anything DuckDB can query - not just tables

Instant SQL works for more than just DuckDB tables; it works for massive tables in MotherDuck, parquet files in S3, Postgres tables, SQLite, MySQL, Iceberg, Delta — you name it. If DuckDB can query it, you can see a preview of it. This is, hands down, the best way to quickly explore and model external data.
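As a small illustration of that “if DuckDB can query it” point, this is roughly what pointing DuckDB at a remote Parquet file looks like from Python (the URL below is a placeholder, not a real dataset):

import duckdb

# httpfs lets read_parquet() scan files over HTTP/S3; recent DuckDB builds
# can autoload it, but installing and loading explicitly is harmless.
duckdb.execute("INSTALL httpfs")
duckdb.execute("LOAD httpfs")

# Hypothetical file location -- substitute any Parquet file you can reach.
url = "https://example.com/data/events.parquet"

# Peek at a handful of rows; a relation like this is the kind of thing a
# live preview can be built on top of.
print(duckdb.sql(f"SELECT * FROM read_parquet('{url}') LIMIT 5"))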
Fast-forward to a useful query before running it

Instant SQL gives you the freedom to test and refine your query logic without the wait. You can quickly experiment with different approaches in real-time. When you’re satisfied with what you see in the preview, you can then run the query for your final, materialized results. This approach cuts hours off your SQL workflow, transforming the tedious cycle of write-run-wait into a fluid process of exploration and discovery.
All of these workflow improvements are great for humans, but they’re even better when you throw AI features into the mix. Today, we’re also releasing a new inline prompt editing feature for MotherDuck users. You can now select a bit of text, hit cmd+k (or ctrl+k for Windows and Linux users), write an instruction in plain language, and get an AI suggestion. Instant SQL makes this inline edit feature work magically. When you get a suggestion, you immediately see the suggestion applied to the result set. No more flipping a coin and accepting a suggestion that might ruin your hard work.

Why hasn’t anyone done this before?

As soon as we had a viable prototype of Instant SQL, we began to ask ourselves: why hasn’t anyone done something like this before? It seems obvious in hindsight. It turns out that you need a unique set of requirements to make Instant SQL work.

A way to drastically reduce the latency in running a query

Even if you made your database return results in milliseconds, it won’t be much help if you’re sending your queries to us-east-1. DuckDB’s local-first design, along with principled performance optimizations and friendly SQL, made it possible to use your computer to parse queries, cache dependencies, and rewrite & run them. Combined with MotherDuck’s dual execution architecture, you can effortlessly preview and query massive amounts of data with low latency.

Making Instant SQL requires more than just a performant architecture. Even if DuckDB is fast, real-world ad hoc queries may still take longer than 100ms to return a result. And of course, DuckDB can also query remote data sources. We need a way to locally cache samples of certain table references and rewrite our queries to point to those.

A few years ago, DuckDB hid a piece of magic in the JSON extension: a way to get an abstract syntax tree (or AST) from any SELECT statement via a SQL scalar function. This means any toolmaker can build parser-powered features using this important part of DuckDB’s database internals - no need to write your own SQL parser from scratch.

Of course, showing previews as you type requires more than just knowing where you are in the query. We’ve implemented several sophisticated local caching strategies to ensure results appear instantly. Think of it as a system that anticipates what you might want to see and prepares it ahead of time. The details of these caching techniques are interesting enough to deserve their own blog post. But suffice it to say, once the cache is warm, the results materialize before you can even lift your fingers from the keyboard.

Without this perfect storm of technical capabilities — a fast local SQL engine, parser accessibility, precise cursor-to-AST mapping, and intelligent caching — Instant SQL simply couldn’t exist.

A way to preview any SELECT node in a query

Getting the AST is a big step forward, but we still need a way to take your cursor position in the editor and map it to a path through this AST. Otherwise, we can’t know which part of the query you’re interested in previewing. So we built some simple tools that pair DuckDB’s parser with its tokenizer to enrich the parse tree, which we then use to pinpoint the start and end of all nodes, clauses, and select statements. This cursor-to-AST mapping enables us to show you a preview of exactly the SELECT statement you’re working on, no matter where it appears in a complex query.
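To make the parser part concrete, here is a minimal sketch (not MotherDuck’s actual implementation) of pulling a SELECT statement’s AST out of DuckDB from Python, assuming a recent DuckDB build where the JSON extension and its json_serialize_sql function are available:

import json
import duckdb

# The statement whose parse tree we want to inspect.
query = "SELECT customer_id, sum(amount) AS total FROM orders GROUP BY customer_id"

# json_serialize_sql comes from DuckDB's JSON extension and returns the AST
# of a SELECT statement as JSON. The query is embedded as a SQL string
# literal, with single quotes escaped.
literal = query.replace("'", "''")
ast_json = duckdb.execute(f"SELECT json_serialize_sql('{literal}')").fetchone()[0]

# Walk or pretty-print the tree; a tool can use this structure to find CTEs,
# SELECT nodes and table references, and to rewrite them (for example, to
# point at cached samples) before running a preview.
ast = json.loads(ast_json)
print(json.dumps(ast, indent=2)[:500])

The cursor-to-AST mapping and caching layers described above sit on top of output like this; they aren’t shown here.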
...
Read the original on motherduck.com »
Influential users and recommendation algorithm design quietly shape what people see, what gains attention, and what gets silenced.
When an account with 219 million followers interacts with a smaller one — not by blocking or arguing, but simply by muting — the consequences are immediate. The smaller account’s visibility drops from 150,000 views to 20,000 overnight. No notice. No rule broken. Dimmed into irrelevance.
It’s a form of shadowbanning — not imposed by moderators, but activated by the algorithm in response to a high-weight engagement signal.
On the flip side, the same signal that suppresses can also elevate. When a high-reach account interacts — sometimes with nothing more than a vague comment or a repost — the algorithm reads it as endorsement. Content is boosted, visibility spikes, and narratives take flight.
Even low-effort, repetitive interactions—likes, generic replies—can act like a controlled dose of AstroBoost™ - just enough to simulate momentum and trigger amplification.
Social proof used to reflect crowd wisdom. Now it reflects algorithmic endorsement — triggered not by consensus, but by proximity to influence. A single interaction can distort scale, making selected content appear widely supported.
The result? Artificial popularity. Boosted narratives. Organic ideas buried by engineered reach. The crowd didn’t pick it—the algorithm did, based on who touched it.
Nothing needs to be removed or blocked. Often, content is simply deprioritized—pushed lower in the feed, placed outside key visibility zones, or displaced by fresher signals. The mechanisms are subtle, the outcomes consistent: lower reach, reduced visibility, diminished presence.
Meanwhile, amplification flows downstream. A single high-weight interaction can trigger a cascade—surfacing aligned content, prompting engagement across similar accounts, and reinforcing the same narrative through repetition.
What people see feels organic. In reality, they’re engaging with what’s already been filtered, ranked, and surfaced.
There’s no need to simulate support when the platform itself is the amplifier. Traditional astroturfing relied on fake accounts and bots. Today, manufactured consensus is powered by real users—but selected ones. Elite accounts trigger the machine. Everyone else gets pulled into the ripple.
This isn’t about faking the crowd — it’s about guiding it. Real users, real engagement, selectively amplified to create the illusion of widespread agreement. Consensus is just what survived the feed.
Perception shaped at scale doesn’t just change what people see—it changes how they vote, what they buy, what they protest, and what they ignore. It doesn’t just distort attention. It steers outcomes.
What’s not shown might as well not exist.
And if you think this only happens on one social network, you’re already caught in the wrong attention loop.
The most effective influence doesn’t announce itself. It doesn’t censor loudly, or boost aggressively. It shapes perception quietly — one algorithmic nudge at a time.
The ones who try to control everything too openly, too quickly, get caught. It’s not the blunt force authoritarians who endure. It’s the subtle ones. The ones who let people believe they chose freely — while feeding them only curated choices.
At rook2root.co we expose the tactics no one talks about. Not to preach, but to illuminate. Subscribe to get new articles delivered by mail.
...
Read the original on rook2root.co »
In our Universe, quantum transitions are the governing rule behind every nuclear, atomic, and molecular phenomenon. Unlike the planets in our Solar System, which could stably orbit the Sun at any distance if they possessed the right speed, the protons, neutrons, and electrons that make up all the conventional matter we know of can only bind together in a specific set of configurations. These possibilities, although numerous, are finite in number, as the quantum rules that govern electromagnetism and the nuclear forces restrict how atomic nuclei and the electrons that orbit them can arrange themselves.
In all the Universe, the most common atom of all is hydrogen, with just one proton and one electron. Wherever new stars form, hydrogen atoms become ionized, becoming neutral again if those free electrons can find their way back to a free proton. Although the electrons will typically cascade down the allowed energy levels into the ground state, that normally produces only a specific set of infrared, visible, and ultraviolet light. But more importantly, a special transition occurs in hydrogen that produces light of about the size of your hand: 21 centimeters (about 8¼”) in wavelength. Even as a physicist, you’d be well justified to call this the “magic length” of our Universe, as it just might someday unlock the darkest secrets hiding out in the deepest cosmic recesses from which starlight will never escape.
When it comes to the light in the Universe, wavelength is the one property that you can count on to reveal how that light was created. Even though light comes to us in the form of photons — individual quanta that, collectively, make up the phenomenon we know as light — there are two very different classes of quantum process that create the light that surrounds us: continuous ones and discrete ones.
A continuous process is something like the light emitted by the photosphere of the Sun. It’s a dark object that’s been heated up to a certain temperature, and it radiates light of all different, continuous wavelengths as dictated by that temperature: what physicists know as blackbody radiation. More accurately, because the different layers of the photosphere are at different temperatures, the solar spectrum acts like a series of blackbodies all summed together: an amalgam of continuous processes.
A discrete process, however, doesn’t allow for the emission of light of a continuous set of wavelengths, but rather only at extremely specific, or discrete (and quantized), wavelengths. A good example of that is the light absorbed by the neutral atoms present within the extreme outer layers of the Sun. As the blackbody radiation from the lower layers of the photosphere strikes those neutral atoms sitting at the surface, a few of those photons will have just the right wavelengths to be absorbed by the electrons within the neutral atoms they encounter. When we break sunlight up into its individual wavelengths, the various absorption lines present against the backdrop of continuous, blackbody radiation reveal both of these processes to us.
Each individual atom has its properties primarily defined by its nucleus, made up of protons (which determine its charge) and neutrons (which, combined with protons, determine its mass). Atoms also have electrons, which orbit the nucleus at a distance determined by their charge-to-mass ratio, and each electron can only occupy a specific set of energy levels. In isolation, each atom will come to exist in the ground state: where the electrons cascade down until they occupy the lowest allowable energy levels, limited only by the quantum rules that determine the various properties that electrons are and aren’t allowed to possess.
Electrons can occupy the ground state — the 1s orbital — of an atom until it’s full, which can hold two electrons. The next energy level up consists of spherical (the 2s) and perpendicular (the 2p) orbitals, which can hold two and six electrons, respectively, for a total of eight. The third energy level can hold 18 electrons: 3s (with two), 3p (with six), and 3d (with ten), and the pattern continues on upward. In general, the “upward” transitions occur when a photon of a particular wavelength gets absorbed, while the “downward” transitions can occur spontaneously, and result in the emission of photons of the exact same wavelengths as are present within the atom’s absorption spectrum.
That’s the basic structure of an atom, sometimes referred to as “coarse structure.” When you transition from the third energy level to the second energy level in a hydrogen atom, for example, you produce a photon that’s red in color, with a wavelength of precisely 656.3 nanometers: right in the visible light range of human eyes.
But there are very, very slight differences between the exact, precise wavelength of a photon that gets emitted if you transition from:
* the third energy level down to either the 2s or the 2p orbital,
* an energy level where the spin angular momentum and the orbital angular momentum are aligned versus one where they’re anti-aligned,
* or one where the nuclear spin and the electron spin are aligned versus anti-aligned.
There are rules as to what’s allowed versus what’s forbidden in quantum mechanics as well, such as the fact that you can transition an electron from a d-orbital to either an s-orbital or a p-orbital, and from an s-orbital to a p-orbital, but not from an s-orbital to another s-orbital.
The slight differences in energy that arise between transitions of different types of orbital within the same energy level are known as an atom’s fine-structure, arising from the interaction between the spin of each particle within an atom and the orbital angular momentum of the electrons around the nucleus. It causes a shift in wavelength of less than 0.1%: small compared to the atom’s coarse structure, but still measurable and significant.
However, owing to the weird phenomena that occur within quantum mechanics, even “forbidden” transitions can sometimes occur. These transitions occur due to the phenomenon of quantum tunneling, where a quantum state can spontaneously transition to another, lower-energy quantum state. Sure, you might not be able to transition from an s-orbital to another s-orbital directly, but if you can:
* transition from an s-orbital to a p-orbital and then back to an s-orbital,
* transition from an s-orbital to a d-orbital and then back to an s-orbital,
* or, more generally, transition from an s-orbital to any other allowable state and then back to an s-orbital,
then that transition can occur. The only thing weird about quantum tunneling is that you don’t have to have a “real” transition occur to the intermediate state. Real transitions require energy, and even with insufficient energies, the intermediate state can be bypassed under the rules of quantum physics. This occurs when transitions happen virtually (as opposed to real transitions), so that you only see the final state emerge from the initial state: something that would be forbidden without the invocation of quantum tunneling.
This allows us to go beyond mere “coarse structure” and “fine structure,” allowing us to probe what’s known as hyperfine structure. Hyperfine structure appears where the spin of the atomic nucleus and one of the electrons that orbit it begin in an “aligned” state, where the spins are both in the same direction even though the electron is in the lowest-energy, ground (1s) state, and then transitions to an anti-aligned state, where the spins are reversed.
The most famous of these transitions occurs in the simplest type of atom of all: hydrogen. With just one proton and one electron, every time you form a neutral hydrogen atom and the electron cascades down to the ground (lowest-energy) state, there’s a 50% chance that the spins of the central proton and the electron will be aligned, with a 50% chance that the spins will be anti-aligned.
If the spins are anti-aligned, that’s truly the lowest-energy state; there’s nowhere to go via any known transition that will result in the emission of energy at all. But if the spins are aligned, it’s a slightly higher energy state than in the anti-aligned case. A hydrogen atom whose electron and proton both spin in the same direction could quite possibly transition, through quantum tunneling, to the anti-aligned state. Even though the direct transition process is forbidden, tunneling allows you to go straight from the starting point to the ending point, emitting a photon in the process.
This transition, because of its “forbidden” nature, takes an extremely long time to occur: approximately 10 million years for the average atom. However, this long lifetime of the slightly excited, aligned case for a hydrogen atom has an upside to it: the photon that gets emitted, at 21 centimeters in wavelength and with a frequency of 1420 megahertz, is intrinsically, extremely narrow. In fact, it’s the narrowest, most precise transition line known in all of atomic and nuclear physics!
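For reference, these are the standard relations tying the quoted wavelength and frequency together, not additional claims from the article:

  \[
    \nu = \frac{c}{\lambda} \approx \frac{3.0\times10^{10}\ \mathrm{cm\,s^{-1}}}{21.1\ \mathrm{cm}} \approx 1.42\times10^{9}\ \mathrm{Hz},
    \qquad
    E = h\nu \approx 5.9\ \mu\mathrm{eV}.
  \]

That photon energy is roughly 300,000 times smaller than that of the 656.3 nanometer visible photon mentioned earlier.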
If you were to go all the way back to the early stages of the hot Big Bang, before any stars had formed, you’d discover that a whopping 92% of the atoms in the Universe were exactly this species of hydrogen: with one proton and one electron in them. (At the present time, after all the stars that have formed some 13.8 billion years later, that number is down to “only” about 90% of all atoms.) As soon as neutral atoms stably form — just a few hundred thousand years after the Big Bang — these neutral hydrogen atoms form with a 50/50 chance of having aligned versus anti-aligned spins. The ones that form anti-aligned will remain so; the ones that form with their spins aligned will undergo this spin-flip transition, emitting radiation of 21 centimeters in wavelength.
Although it’s never yet been done, this gives us a tremendously provocative way to measure the early stages of the Universe as never before. If we could find a cloud of hydrogen-rich gas, even one that’s never formed stars, we could look for this spin-flip signal — accounting for the expansion of the Universe and the corresponding redshift of the light — to measure the atoms in the Universe from the earliest times ever seen. The only “broadening” to the line we’d expect to see would come from thermal and kinetic effects: from the non-zero temperature and the gravitationally-induced motion of the atoms that emit those 21 centimeter signals.
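Concretely, “accounting for the expansion of the Universe” just means scaling the rest-frame wavelength (or dividing the frequency) by a factor of 1 + z:

  \[
    \lambda_{\mathrm{obs}} = (1+z)\times 21.106\ \mathrm{cm},
    \qquad
    \nu_{\mathrm{obs}} = \frac{1420.4\ \mathrm{MHz}}{1+z}.
  \]

Gas emitting at redshift z = 9, for example, would be observed near 142 MHz, at a wavelength of about 2.1 meters: squarely in low-frequency radio territory.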
In addition to those primordial signals, 21 centimeter radiation arises as a consequence whenever new stars are produced. Every time that a star-forming event occurs, the more massive newborn stars produce large amounts of ultraviolet radiation: radiation that’s energetic enough to ionize hydrogen atoms. All of a sudden, space that was once filled with neutral hydrogen atoms is now filled with free protons and free electrons.
But those electrons aren’t going to remain free forever; if the interstellar environment they’re located in has enough free atomic nuclei (e.g., protons), they’re going to eventually be captured, once again, by those protons. Once the most massive stars have died away, there’s no longer going to be enough ultraviolet radiation to continue to ionize them over and over again, and then those electrons will once again sink down to the ground state, where they’ll have a 50/50 chance of being aligned or anti-aligned with the spin of the atomic nucleus.
Again, that same radiation — of 21 centimeters in wavelength — gets produced over timescales of ~10 million years. Every time we measure that 21 centimeter wavelength localized in a specific region of space, even if it gets redshifted by the expansion of the Universe, what we’re seeing is evidence of recent star-formation. Wherever star-formation occurs, hydrogen gets ionized, and whenever those atoms become neutral and de-excite again, this specific-wavelength radiation persists for tens of millions of years.
If we had the capability of sensitively mapping this 21 centimeter emission in all directions and at all redshifts (i.e., distances) in space, we could literally uncover the star-formation history of the entire Universe, as well as the de-excitation of the hydrogen atoms first formed in the aftermath of the hot Big Bang. With sensitive enough observations, we could answer questions like:
* Are there stars present in dark voids in space below the threshold of what we can observe, waiting to be revealed by their de-exciting hydrogen atoms?
* In galaxies where no new star-formation is observed, is star-formation truly over, or are there low-levels of new stars being born, just waiting to be discovered from this telltale signature of hydrogen atoms?
* Are there any events that heat up and lead to hydrogen ionization prior to the formation of the first stars, and are there star-formation bursts that exist beyond the capabilities of even our most powerful infrared observatories to observe directly?
By measuring light of precisely the needed wavelength — peaking at precisely 21.106114053 centimeters, plus whatever lengthening effects arise from the cosmic expansion of the Universe — we could reveal the answers to all of these questions and more. In fact, this is one of the main science goals of LOFAR: the low-frequency array, and it presents a strong science case for putting an upscaled version of this array on the radio-shielded far side of the Moon.
Of course, there’s another possibility that takes us far beyond astronomy when it comes to making use of this important length: creating and measuring enough spin-aligned hydrogen atoms in the lab to detect this spin-flip transition directly, in a controlled fashion. The transition takes roughly 10 million years to “flip” on average, which means we’d need around a quadrillion (10^15) prepared atoms, kept still and cooled to cryogenic temperatures, to measure not only the emission line, but also its width. If there are phenomena that cause an intrinsic line-broadening, such as a primordial gravitational wave signal, such an experiment would, quite remarkably, be able to uncover its existence and magnitude.
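As a rough consistency check on that quadrillion-atom figure (my arithmetic, using the ~10-million-year mean lifetime quoted above):

  \[
    \text{emission rate} \sim \frac{N}{\tau} \approx \frac{10^{15}}{10^{7}\ \mathrm{yr}\times 3.15\times10^{7}\ \mathrm{s\,yr^{-1}}} \approx 3\ \text{photons per second},
  \]

only a handful of photons per second even from 10^15 atoms, which is why the sample would need to be kept cold, still, and observed for a long time to trace out the line’s shape as well as its center.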
In all the Universe, there are only a few known quantum transitions with the precision inherent to the hyperfine spin-flip transition of hydrogen, which results in the emission of radiation that’s 21 centimeters in wavelength. If we want to identify:
* ongoing and recent star-formation across the Universe,
* the first atomic signals even before the first stars were formed,
* or the relic strength of yet-undetected gravitational waves left over from cosmic inflation,
it becomes clear that the 21 centimeter transition is the most important probe we have in all the cosmos. In many ways, it’s the “magic length” for uncovering some of nature’s greatest secrets, and can take us closer to the Big Bang than observations of any stars or galaxies could ever hope to.
This article was originally published in December of 2022. It was updated in 2025.
...
Read the original on bigthink.com »
Almost all cars currently come with a key fob, which allows you to open the doors, and start the car. When you buy a car, the convenience is the compelling feature. You can leave the key fob in your pocket, and never again worry about having a physical key. It sounds great.
The implicit assumption you make is that the key fob system is secure, and that some random person with $50 of hardware can’t drive off with your car. You have no real way to tell whether the car company did a reasonable job with their system, so you have to trust them. Unfortunately, that trust is not always warranted. And it isn’t until people try to hack these systems that the problems come out. Problems that less scrupulous people may have already been exploiting.
There are lots of different key fob systems. We’ll start by looking at the key fob for my 2006 Prius. Key fobs use something called a Remote Keyless System (RKS). In the U.S. these operate at 315 MHz, +/- 2.5 MHz. My Prius key turned out to be at 312.590 MHz. The key fobs are all listed in the FCC database. Watching for new entries is one of the ways people can tell when new car models are coming out. These will appear long before the official announcement.
You can figure out what frequency your key fob transmits on using your SDR: use GQRX or SDR# to monitor the spectrum. When you push a button on the fob, you should see a brief jump in the spectrum. You may need to shift the frequency band up or down by a couple of MHz to find the signal; mine was almost 2.5 MHz low.
One word of caution. Don’t get too carried away pushing the button! The RKS system uses a rolling pseudo-randomly generated code. Both the key fob and the car keep in sync, so that the car recognizes the next code. However, if the key fob gets too far ahead in the sequence (100s of button pushes) the car won’t recognize it. That makes the key (and the car) considerably less useful!
If we capture the signal, the result is shown below:
The total width of the plot is 10 seconds, so you can see there is one key press shortly after 2 seconds, and another shortly after 5 seconds.
If we plot 100 ms starting at 2 seconds, we can see the digital signal we are looking for:
Zooming in to the first couple of bits, we get:
The bits are easy to identify. A decision threshold of 15 will give almost perfect detection. If we do this, and then plot the first part of the digital data for the two key presses, we get this:
Although the two start the same, they rapidly diverge. This is fortunate, because if the signal was the same every time, you’d have enough information to steal my car now!
The data is again on-off keying (OOK). It is also almost certainly split phase (or Manchester) encoding. Instead of a “1” being high, and a “0” being low, the information is encoded in the transition from high to low or low to high. That means that a “0” bit is a rising transition, and a “1” bit is a falling transition. A good way to recognize split phase encoding is that you can only have one or two low or high segments in a row. The nice thing about Manchester encoding is that every symbol has a transition, and these are easier to find than when the signal has been high or low for several intervals.
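To make that concrete, here is a small illustrative sketch (not from the course notes) of decoding already-thresholded OOK samples under the convention above, assuming exactly one sample per half-bit; a real decoder also needs clock recovery and tolerance for timing jitter.

  // Decode Manchester-coded OOK that has been thresholded to 0/1 levels,
  // one sample per half-bit. Convention from the text: rising = 0, falling = 1.
  function manchesterDecode(halfBits: number[]): number[] {
    const bits: number[] = [];
    for (let i = 0; i + 1 < halfBits.length; i += 2) {
      const first = halfBits[i];
      const second = halfBits[i + 1];
      if (first === 0 && second === 1) bits.push(0);      // rising transition
      else if (first === 1 && second === 0) bits.push(1); // falling transition
      else throw new Error(`invalid Manchester pair at sample ${i}`); // lost sync
    }
    return bits;
  }

  // Thresholding the raw envelope first (e.g. the decision threshold of 15):
  const envelope = [2, 17, 18, 3, 1, 16];                    // raw magnitude samples
  const halfBits = envelope.map((v) => (v >= 15 ? 1 : 0));   // [0, 1, 1, 0, 0, 1]
  console.log(manchesterDecode(halfBits));                   // [0, 1, 0]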
This example is OOK, which is the most common for car remotes. Some use frequency-shift keying (FSK), where each bit is transmitted as a different frequency, and the envelope is constant.
There are lots of different attacks that can be used against car remotes, depending on how they work, and what sort of access you are looking for. The simplest just let you open the car up. More thorough attacks give you complete control by basically cloning the remote.
Most key fobs use a rolling key. This produces a new waveform that depends on the ID of the key fob, a random seed, and how many times the key has been pressed. The car keeps track of the last code it received, and knows what the next several hundred codes might be. If it detects one of the expected future codes it opens the car. If it gets a previously used code, it stops responding to the key fob. For the Prius you have to do the “Chicken Dance” to get it to work again, provided you have another working key fob. Otherwise, you have to have the dealer rekey the car, for many hundreds of dollars. I have had to do this a couple times, now (for other reasons).
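The look-ahead window logic can be sketched as a toy model; real fobs encrypt the ID and counter rather than sending them in the clear, and window sizes and resync rules vary by manufacturer.

  // Toy model of a rolling-code receiver's look-ahead window: accept codes
  // slightly ahead of the last one seen, reject replays, and refuse codes
  // that have drifted too far ahead (the fob then needs re-pairing).
  class RollingCodeReceiver {
    constructor(
      private lastCounter = 0,
      private readonly window = 256, // how far ahead the fob may drift
    ) {}

    receive(counter: number): "accept" | "replay" | "out-of-sync" {
      if (counter <= this.lastCounter) return "replay";                 // old code: ignore
      if (counter > this.lastCounter + this.window) return "out-of-sync";
      this.lastCounter = counter;                                       // resync to newest
      return "accept";
    }
  }

  const car = new RollingCodeReceiver();
  console.log(car.receive(1));    // "accept"
  console.log(car.receive(1));    // "replay" (a captured code replayed later)
  console.log(car.receive(5));    // "accept" (skipped presses are fine, within the window)
  console.log(car.receive(9999)); // "out-of-sync" (too many presses away from the car)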
There are several lines of attack. One is simply recording the key fob output for a couple of button presses when it is away from the car, or the car is being jammed. With recorded unused codes, you can open the car.
Another is to reverse engineer the RKS sequence. In general this should be extremely hard. However, there have been several situations where this is very easy.
Finally, there are cars that open when the owner gets close to the car. This is based on a low power signal that can only be received when the key fob is very close. This can be defeated by amplifying these small signals.
There are many more attacks, and these will continue to multiply as cars get more complex, and have more embedded computer systems to go after. You can look at some of these for next week.
The oldest and simplest approach was to record the waveform that a key fob puts out (using your rtl-sdr), and then replay it. This works well for older garage door openers, that used a single fixed key. There are still cars out there that have key fobs that work this way (some pre-2000 Mercedes for example).
For key fobs that use a rolling key, you can still use a replay attack. If you can get access to the key fob when it is away from the car and record several key presses, you can replay these to have the car open.
If you can’t get access to the key fob, a second approach is to make a device that records the output of the key fob when it is used, and simultaneously jams the car. A standard way to do this is to listen to the key fob transmission, and then start jamming when the error correction bits are transmitted at the end. That way you don’t jam yourself. The car won’t recognize the packet, but you can recreate the error correction bits, and retransmit the waveform later.
Finally, a jammer by itself will keep the remote from being able to lock the car. If the driver isn’t attentive, they may walk away from the car leaving it open.
All of this depends on your ability to both transmit and receive RF. Your rtl-sdrs are just receivers, and do a great job of acquiring signals. There are lots of options for transmitting. There are a number of USB dongles based on the TI CC111X chips that are used in key fobs, like this one:
Another recent device that has attracted a lot of attention is the Flipper Zero.
This has the same chip as the previous device, but is much more accessibly packaged. This is the Swiss army knife of RF hacking. It has generated a lot of controversy, which you can look into for this week’s assignment.
An interesting, more flexible approach uses your Raspberry Pi to generate RF by sending a carefully crafted data sequence to the GPIO port. This is described in detail, with videos and links to the code, here:
With this, you can generate pretty much any digital packet waveform you would like. Power levels are more than adequate for emulating a key fob. The rtl-sdrs are also well supported on the Raspberry Pi, so the two together give you a total key fob hacking system for $50 or so, as we will see shortly.
Many higher end cars use a passive system for opening the car when the driver approaches. A low power signal is transmitted from the car as a challenge. The key fob then responds with an authentication. Because the power is so low, the car assumes the driver must be in close proximity if it receives a response.
These systems can be hacked by building a repeater that is placed near the car. It captures the car’s signal and retransmits it at higher power. The remote can be anywhere within a couple hundred meters, and it will still hear the signal. The remote responds, and that is again captured by the repeater and retransmitted. The car thinks the key fob is nearby and opens the car.
The nice thing about this approach is that you don’t need to know anything about the key fob, except its frequency. You don’t need to reverse engineer the protocol it uses, you are actually just using the real key!
Here is a video of some car thieves stealing a Tesla with this approach:
How can you reduce this risk?
The next attacks go after the rolling key system itself. The way this generally works is that the key fob sends an ID, along with a counter of how many times a key has been pressed. This is encrypted, and transmitted to the car when you push the button.
If the encryption is strong, it is extremely difficult to figure out what the user ID and counter are. There are several interesting cases. One is for 20 years of VWs (and Audis, Porsches, etc.), which we’ll look at here. Another is for Subarus, which you can look at for your assignment this week.
A description of the VW RKS system is given here
This points to a Wired article (which unfortunately is currently behind a paywall), and includes a technical paper that goes into great detail about how it works. The authors of the technical paper looked at the VW RKS systems from the last 20 years.
For the most recent systems, the encryption was relatively strong, equivalent to a 90 bit key. However, it turns out they used the same key in every car! 100 million of them!
The challenge then is to figure out what the key is, and what the encryption algorithm is. The car itself helps you solve that one. When a button is pushed, the car receives the signal and then decodes it in the onboard computer (ECU). The key and the algorithm are stored in the ECU firmware. The authors bought some ECUs on eBay, downloaded the firmware, and reverse engineered the encryption (these are usually fairly simple bitwise operations that are easy to identify). With this knowledge, after acquiring the signal from a single key press, the user ID and counter can be decoded and the key fob cloned, giving complete control of the car.
There are a couple of interesting things here. One is that every VW car decodes every key fob, so by monitoring the execution of your ECU, you can find the user ID and counter for all of the cars around you. There are reports of people using systems like this to steal other makes of cars, also.
The reason only your car responds to your remote is that your car has an “allow list” of key fob IDs it responds to. That is what gets set when you rekey the car.
This all sounds pretty alarming. But it gets worse, as we’ll see next week.
You have several options for your assignment this week. For each topic, generate about 5 slides to describe your thoughts or results. Sign up here
and upload your slides here:
1. This article concerns the Subaru RKS system. Read it, watch the videos, and describe what you find.
2. The Flipper Zero has gotten lots of attention. What controversies can you find? What can the device actually do? Should it be banned?
3. Why steal a car when you can have a bulldozer! Read this article, and watch the video, to see how this works.
4. There are lots of other car hacks out there. See if you can find something interesting, and describe it. Look for stories where you can figure out how it works. Entertainment systems are a common mode of access (check the Uconnect hack for Jeeps). Tesla and hackers have a long running cat-and-mouse game going. There are lots of interesting examples here. Two recent examples are Teslas and Hondas.
Finally, if you haven’t already, please send me an email about how the class is going for you. I appreciate hearing your thoughts. Thanks!
...
Read the original on web.stanford.edu »
Some services are down
Last updated on Apr 25 at 04:10am EDT
Homepage, Query API, and 2 other services are down
We are working to resolve an issue with our backend storage that is preventing the service from starting correctly
...
Read the original on status.open-vsx.org »
This tutorial is also available in the following languages: 한국어 (Korean) and 日本語 (Japanese).
In this tutorial, we will build a small microblog that implements the ActivityPub protocol, similar to Mastodon or Misskey, using Fedify, an ActivityPub server framework. This tutorial will focus more on how to use Fedify rather than understanding its underlying operating principles. If you have any questions, suggestions, or feedback, please feel free to join our Matrix chat space or Discord server or GitHub Discussions.
This tutorial is aimed at those who want to learn Fedify and create ActivityPub server software. We assume that you have experience in creating web applications using HTML and HTTP, and that you understand command-line interfaces, SQL, JSON, and basic JavaScript. However, you don’t need to know TypeScript, JSX, ActivityPub, or Fedify—we’ll teach you what you need to know about these as we go along. You don’t need experience in creating ActivityPub software, but we do assume that you’ve used at least one ActivityPub software like Mastodon or Misskey. This is so you have an idea of what we’re trying to build.
In this tutorial, we’ll use Fedify to create a single-user microblog that can communicate with other federated software and services via ActivityPub. This software will include the following features:
* Only one account can be created.
* Other accounts in the fediverse can follow the user.
* The user can view their list of followers.
* The user’s posts are visible to followers in the fediverse.
* The user can follow other accounts in the fediverse.
* The user can view a list of accounts they are following.
* The user can view a chronological list of posts from accounts they follow.
To simplify the tutorial, we’ll impose the following feature constraints:
* Account profiles (bio, photos, etc.) cannot be set.
* Once created, an account cannot be deleted.
* Once posted, a post cannot be edited or deleted.
* Once followed, another account cannot be unfollowed.
* There are no likes, shares, or comments.
* There is no search functionality.
* There are no security features such as authentication or permission checks.
Of course, after completing the tutorial, you’re welcome to add these features—it would be good practice! The complete source code is available in the GitHub repository, with commits separated according to each implementation step for your reference.
Fedify supports three JavaScript runtimes: Deno, Bun, and Node.js. Among these, Node.js is the most widely used, so we’ll use Node.js as the basis for this tutorial. A JavaScript runtime is a platform that executes JavaScript code. Web browsers are one type of JavaScript runtime, and for command-line or server use, Node.js is widely used. Recently, cloud edge functions like Cloudflare Workers have also gained popularity as JavaScript runtimes.
To use Fedify, you need Node.js version 20.0.0 or higher. There are various installation methods—choose the one that suits you best. Once Node.js is installed, you’ll have access to the node and npm commands:
To set up a Fedify project, you need to install the fedify command on your system. There are several installation methods, but using the npm command is the simplest:
After installation, check if you can use the fedify command. You can check the version of the fedify command with this command:
Make sure the version number is 1.0.0 or higher. If it’s an older version, you won’t be able to properly follow this tutorial.
To start a new Fedify project, let’s decide on a directory path to work in.
In this tutorial, we’ll name it microblog. Run the fedify init command followed by the directory path (it’s okay if the directory doesn’t exist yet):
When you run the fedify init command, you’ll see a series of prompts. Select Node.js, npm, Hono, In-memory, and In-process in order:
Fedify is not a full-stack framework, but rather a framework specialized for implementing ActivityPub servers. Therefore, it’s designed to be used alongside other web frameworks. In this tutorial, we’ll adopt Hono as our web framework to use with Fedify.
After a moment, you’ll see files created in your working directory with the following structure:
As you might guess, we’re using TypeScript instead of JavaScript, which is why we have .ts and .tsx files instead of .js files.
The generated source code is a working demo. Let’s first check if it runs properly:
This command will keep the server running until you press Ctrl+C.
With the server running, open a new terminal tab and run the following command:
This command queries an actor on the ActivityPub server we’ve set up locally. In ActivityPub, an actor can be thought of as an account that’s accessible across various ActivityPub servers.
If you see output like this, it’s working correctly:
✔ Looking up the object…
Person {
id: URL “http://localhost:8000/users/john”,
name: “john”,
preferredUsername: “john”
}
This result tells us that there’s an actor object located at the /users/john path, it’s of type Person, its ID is http://localhost:8000/users/john, its name is john, and its username is also john.
fedify lookup is a command to query ActivityPub objects. This is equivalent to searching with the corresponding URI on Mastodon. (Of course, since your server is only accessible locally at the moment, searching on Mastodon won’t yield any results yet.)
If you prefer curl over the fedify lookup command, you can also query the actor with this command (note that we’re sending the Accept header with the -H option):
However, if you query like this, the result will be in JSON format, which is difficult to read with the naked eye. If you also have the jq command installed on your system, you can use curl and jq together:
Visual Studio Code might not be your favorite editor. However, we recommend using Visual Studio Code while following this tutorial. This is because we need to use TypeScript, and Visual Studio Code is currently the most convenient and excellent TypeScript IDE. Also, the generated project setup already includes Visual Studio Code settings, so you don’t have to wrestle with formatters or linters. Don’t confuse this with Visual Studio: Visual Studio Code and Visual Studio only share a brand name; they are completely different software.
After installing Visual Studio Code, open the working directory by selecting File → Open Folder… from the menu. If you see a popup in the bottom right asking “Do you want to install the recommended ‘Biome’ extension from biomejs for this repository?”, click the Install button to install the extension. Installing this extension will automatically format your TypeScript code, so you don’t have to wrestle with code styles like indentation or spacing when writing TypeScript code.
If you’re a loyal Emacs or Vim user, we won’t discourage you from using your favorite editor. However, we recommend setting up TypeScript; the difference in productivity depending on whether TypeScript is set up or not is significant.
Before we start modifying code, let’s briefly go over TypeScript. If you’re already familiar with TypeScript, you can skip this section.
TypeScript adds static type checking to JavaScript. The TypeScript syntax is almost the same as JavaScript, but the main difference is that you can specify types for variables and functions. Types are specified by adding a colon (:) after the variable or parameter. For example, the following code indicates that the foo variable is a string:
If you try to assign a value of a different type to a variable declared like this, Visual Studio Code will show a red underline before you even run it and display a type error:

  foo = 123;
  Type ‘number’ is not assignable to type ‘string’.

When coding, don’t ignore red underlines. If you ignore them and run the program, it’s likely that an error will actually occur at that part.
The most common type of error you’ll encounter when coding in TypeScript is the null possibility error. For example, in the following code, the bar variable can be either a string or null (string | null):
What happens if you try to get the first character of this variable’s content like this?
You’ll get a type error like above. It’s saying that bar might sometimes be null, and in that case, calling null.charAt(0) would cause an error, so you need to fix the code.
In such cases, you need to add handling for the null case like this:
In this way, TypeScript helps prevent bugs by making you think of cases you might not have considered when coding.
Another incidental advantage of TypeScript is that it enables auto-completion. For example, if you type foo., a list of methods available for string objects will appear, allowing you to choose from them. This allows for faster coding without having to check documentation each time. We hope you’ll feel the charm of TypeScript as you follow this tutorial. Above all, Fedify provides the best experience when used with TypeScript. If you want to learn TypeScript properly and thoroughly, we recommend reading The TypeScript Handbook. It takes about 30 minutes to read it all.
JSX is a syntax extension of JavaScript that allows you to insert XML or HTML into JavaScript code. It can also be used in TypeScript, in which case it’s sometimes called TSX. In this tutorial, we’ll write all HTML within JavaScript code using JSX syntax. Those who are already familiar with JSX can skip this section.
For example, the following code assigns an HTML tree with an element at the top to the html variable:
You can also insert JavaScript expressions using curly braces (the following code assumes, of course, that there’s a getName() function):
One of the features of JSX is that you can define your own tags called components. Components can be defined as ordinary JavaScript functions. For example, the following code defines and uses a component (component names typically follow PascalCase style):

  import type { Child, FC } from "hono/jsx";

  function getName() {
    return "JSX";
  }

  interface ContainerProps {
    name: string;
    children: Child;
  }

  // A minimal component body and usage, for illustration:
  const Container: FC<ContainerProps> = (props) => (
    <div class={props.name}>{props.children}</div>
  );

  const html = <Container name={getName()}>Hello!</Container>;

In the above code, FC is provided by Hono, the web framework we’ll use, and it helps define the type of the component. FC is a generic type, and the types that go inside the angle brackets after FC are type arguments. Here, we specify the props type as the type argument. Props refer to the parameters passed to the component. In the above code, we declared and used the ContainerProps interface as the props type for the component.
Type arguments for generic types can be multiple, separated by commas. For example, Foo<A, B> applies type arguments A and B to the generic type Foo. There are also generic functions, which are written as someFunction<T>(). When there’s only one type argument, the angle brackets enclosing the type argument may look like XML/HTML tags, but they have nothing to do with JSX.
* FC<ContainerProps>: applies the type argument ContainerProps to the generic type FC.
* <Container …>: opens a component tag named Container; it must be closed with </Container>.
Among the things passed as props, children is worth noting specifically. This is because the child elements of the component are passed as the children prop. As a result, in the above code, the html variable is assigned the resulting HTML tree.
JSX was invented in the React project and started to be widely used. If you want to know more about JSX, read the “Writing Markup with JSX” and “JavaScript in JSX with Curly Braces” sections of the React documentation.
The first thing we’ll create is the account creation page. We need to create an account before we can post or follow other accounts. Let’s start by building the visible part. First, let’s create a file named src/views.tsx. Inside this file, we’ll define a component using JSX:
To avoid spending too much time on design, we’ll use a CSS framework called Pico CSS. When the type of a variable or parameter can be inferred by TypeScript’s type checker, like props above, it’s fine to omit the type annotation. Even when the type annotation is omitted in such cases, you can check the type of a variable by hovering your mouse cursor over the variable name in Visual Studio Code.
Next, in the same file, let’s define a component that will go inside the layout:
In JSX, you can only have one top-level element, but the component has two top-level elements. That’s why we’ve wrapped them with <> and </>. This is called a fragment.
Now it’s time to use the components we’ve defined. Open the src/app.tsx file and import the two components we just defined:
Then, display the account creation form we just made on the /setup page:
Now, let’s open the http://localhost:8000/setup page in a web browser. If you see a screen like this, it’s working correctly:
Now that we’ve implemented the visible part, it’s time to implement the functionality. We need a place to store account information, so let’s use SQLite. SQLite is a relational database suitable for small-scale applications.
First, let’s declare a table to hold account information. From now on, we’ll write all table declarations in the src/schema.sql file. We’ll store account information in the users table:
Since the microblog we’re creating can only have one account, we’ve put a constraint on the id column, which is the primary key, to not allow values other than 1. This ensures that the users table can’t contain more than one record. We’ve also put constraints on the username column to not allow empty strings or strings that are too long.
Now we need to run the src/schema.sql file to create the users table. For this, we need the sqlite3 command, which you can get from the SQLite website or install from your platform’s package manager. On macOS you don’t need to get it separately, as it is built into the system. If you download it directly, grab the sqlite-tools-*.zip file for your OS and unzip it. If you use a package manager, you can also install it with the following command:
Okay, now that we have the sqlite3 command, let’s use it to create a database file:
The above command will create a microblog.sqlite3 file, which will store your SQLite data.
Now we need to use the SQLite database in our app. To use SQLite database in Node.js, we need a SQLite driver library, and we’ll use the better-sqlite3 package. You can easily install the package with the npm command:
Now that we’ve installed the package, let’s write code to connect to the database using this package. Create a new file named src/db.ts and code it as follows:
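The tutorial’s src/db.ts listing isn’t reproduced in this excerpt; a minimal version using better-sqlite3 might look like the sketch below. The pragma choice is an assumption on my part rather than the tutorial’s exact code.

  // src/db.ts — open the SQLite database created from src/schema.sql.
  import Database from "better-sqlite3";

  export const db = new Database("microblog.sqlite3");
  // WAL mode is a common choice for small web apps (an assumption, not
  // necessarily what the tutorial uses).
  db.pragma("journal_mode = WAL");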
Now let’s declare a type in JavaScript to represent the record stored in the users table. Create a src/schema.ts file and define the User type as follows:
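Since the code isn’t shown here, a sketch of the User type that simply mirrors the users table described above might be:

  // src/schema.ts — a record in the users table.
  export interface User {
    id: number;       // constrained to 1: the single local account
    username: string;
  }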
Now that we’ve connected to the database, it’s time to write code to insert records.
Open the src/app.tsx file and import the db object and User type that will be used for record insertion:
Add code to check if an account already exists to the GET /setup handler we created earlier:
Now that we’ve roughly implemented the account creation feature, let’s try it out. Open the http://localhost:8000/setup page in a web browser and create an account. In this tutorial, we’ll assume that we used johndoe as the username. If it’s created, let’s also check if the record was properly inserted into the SQLite database:
If the record was properly inserted, you should see output like this (of course, johndoe will be whatever username you entered):
Now that we’ve created an account, let’s implement a profile page to display the account information, although we don’t have much information to show yet.
Let’s start with the visible part again. Open the src/views.tsx file and define a component:
Then, open the src/app.tsx file and import the component we just defined:
And add a GET /users/{username} request handler that displays the component:
Now let’s test if it displays correctly. Open the http://localhost:8000/users/johndoe page in your web browser (if you created an account with a username other than johndoe, change the URL accordingly). You should see a screen like this:
As the name suggests, ActivityPub is a protocol for exchanging activities. Writing a post, editing a post, deleting a post, liking a post, commenting, editing a profile… All actions that happen in social media are expressed as activities.
And all activities are transmitted from actor to actor. For example, when John Doe writes a post, a Create(Note) activity is sent from John Doe to John Doe’s followers. If Jane Doe likes that post, a Like activity is sent from Jane Doe to John Doe.
Therefore, the first step in implementing ActivityPub is to implement the actor.
The demo app generated by the fedify init command already has a very simple actor implemented, but to communicate with actual software like Mastodon or Misskey, we need to implement the actor more properly.
First, let’s take a look at the current implementation. Open the src/federation.ts file:
The part we should focus on is the setActorDispatcher() method. This method defines the URL and behavior that other ActivityPub software will use when querying an actor on our server. For example, if we query /users/johndoe as we did earlier, the identifier parameter of the callback function will receive the string value “johndoe”. And the callback function returns an instance of the Person class to convey the information of the queried actor.
The ctx parameter receives a Context object, which contains various functions related to the ActivityPub protocol. For example, the getActorUri() method used in the above code returns the unique URI of the actor with the identifier passed as a parameter. This URI is being used as the unique identifier of the Person object.
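For orientation, the generated demo dispatcher has roughly this shape; this is a sketch based on Fedify’s documented API, not the generated file verbatim.

  // src/federation.ts — roughly the shape of the demo actor dispatcher.
  import { createFederation, MemoryKvStore, Person } from "@fedify/fedify";

  const federation = createFederation<void>({ kv: new MemoryKvStore() });

  federation.setActorDispatcher("/users/{identifier}", async (ctx, identifier) => {
    // The demo answers for any identifier; below we restrict this to accounts
    // that actually exist in the database.
    return new Person({
      id: ctx.getActorUri(identifier),
      name: identifier,
      preferredUsername: identifier,
    });
  });

  export default federation;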
As you can see from the implementation code, currently it’s making up actor information and returning it for any identifier that comes after the /users/ path. But what we want is to only allow queries for accounts that are actually registered. Let’s modify this part to only return actor information for accounts in the database.
We need to create an actors table. Unlike the users table which only contains accounts on the current instance server, this table will also include remote actors belonging to federated servers. The table looks like this. Add the following SQL to the src/schema.sql file:
* The user_id column is a foreign key to connect with the users table. If the record represents a remote actor, it will be NULL, but if it’s an account on the current instance server, it will contain the users.id value of that account.
* The uri column contains the unique URI of the actor, also called the actor ID. All ActivityPub objects, including actors, have a unique ID in URI form. Therefore, it cannot be empty and cannot be duplicated.
* The handle column contains the fediverse handle in the form of @johndoe@example.com. Likewise, it cannot be empty and cannot be duplicated.
* The name column contains the name displayed in the UI. It usually contains a full name or nickname. However, according to the ActivityPub specification, this column can be empty.
* The inbox_url column contains the URL of the actor’s inbox. We’ll explain in detail what an inbox is below, but for now, just know that it must exist for the actor. This column also cannot be empty or duplicated.
* The shared_inbox_url column contains the URL of the actor’s shared inbox, which we’ll also explain below. It’s not mandatory, so it can be empty, and as the column name suggests, it can share the same shared inbox URL with other actors.
* The url column contains the profile URL of the actor. A profile URL means the URL of the profile page that can be opened in a web browser. Sometimes the actor’s ID and profile URL are the same, but they can be different depending on the service, so in that case, the profile URL is stored in this column. It can be empty.
* The created column records when the record was created. It cannot be empty, and by default, the insertion time is recorded.
Now, let’s apply the src/schema.sql file to the microblog.sqlite3 database file:
And let’s define a type in src/schema.ts to represent records stored in the actors table in JavaScript:
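Based on the column descriptions above (the code itself isn’t shown in this excerpt), a sketch of the Actor type might look like this:

  // src/schema.ts — a record in the actors table (see the column notes above).
  export interface Actor {
    id: number;
    user_id: number | null;        // set only for the local account
    uri: string;                   // actor ID (a URI)
    handle: string;                // e.g. "@johndoe@example.com"
    name: string | null;
    inbox_url: string;
    shared_inbox_url: string | null;
    url: string | null;            // human-facing profile page
    created: string;
  }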
Although we currently have one record in the users table, there’s no corresponding record in the actors table. This is because we didn’t add a record to the actors table when creating the account. We need to modify the account creation code to add records to both users and actors.
First, let’s modify the SetupForm in src/views.tsx to also input a name that will go into the actors.name column along with the username:
Now import the Actor type we defined earlier in src/app.tsx:
Now let’s add code to the POST /setup handler to create a record in the actors table with the input name and other necessary information:
When checking if an account already exists, we modified it to treat the account as missing not only when there’s no record in the users table, but also when there’s no matching record in the actors table. Apply the same condition to the GET /setup handler and the GET /users/{username} handler:
Finally, open the src/federation.ts file and add the following code below the actor dispatcher:
Don’t worry about the setInboxListeners() method for now. We’ll cover this when we explain about the inbox. Just note that the getInboxUri() method used in the account creation code needs the above code to work properly.
If you’ve modified all the code, open the http://localhost:8000/setup page in your browser and create an account again:
Now that we’ve created the actors table and filled in a record, let’s modify src/federation.ts again. First, import the db object, and Endpoints and Actor types:
Now that we’ve imported what we need, let’s modify the setActorDispatcher() method:
In the changed code, we now query the users table in the database and return null if it’s not an account on the current server. In other words, it will respond with a proper Person object with 200 OK for a GET /users/johndoe request (assuming the account was created with the username johndoe), and respond with 404 Not Found for other requests.
Let’s look at how the part creating the Person object has changed. First, a name property has been added. This property uses the value from the actors.name column. We’ll cover the inbox and endpoints properties when we explain about the inbox. The url property contains the profile URL of this account, and in this tutorial, we’ll make the actor ID and the actor’s profile URL match.
Now, let’s test if the actor dispatcher is working well.
With the server running, open a new terminal tab and enter the following command:
Since there’s no account named alice, you’ll get an error like this, unlike before:
Now let’s look up the johndoe account:
Now you get a good result:
The next thing we’ll implement is the actor’s cryptographic keys for signing. In ActivityPub, when an actor creates and sends an activity, it uses a digital signature to prove that the activity was really created by that actor. For this, each actor creates and holds their own matching private key (secret key) and public key pair, and makes the public key visible to other actors. When actors receive an activity, they compare the sender’s public key with the activity’s signature to verify that the activity was indeed created by the sender. Fedify handles the signing and signature verification automatically, but you need to implement the generation and preservation of the key pairs yourself.
Let’s define a keys table in src/schema.sql to store the private and public key pairs:
If you look closely at the table, you can see that the type column only allows two types of values. One is the RSA-PKCS#1-v1.5 type and the other is the Ed25519 type. (What each of these means is not important for this tutorial.) Since the primary key is on (user_id, type), there can be a maximum of two key pairs for one user.
The private_key and public_key columns can receive strings, and we’ll put JSON data in them. We’ll cover how to encode private and public keys as JSON later.
Let’s also define a Key type in the src/schema.ts file to represent records stored in the keys table in JavaScript:
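Since the listing isn’t shown here, a sketch of the Key type mirroring the columns described above might be (the exact strings stored in the type column should match your schema.sql):

  // src/schema.ts — a record in the keys table.
  export interface Key {
    user_id: number;
    type: string;        // one of the two allowed key types (per the CHECK constraint)
    private_key: string; // private key as a JWK, serialized to JSON
    public_key: string;  // public key as a JWK, serialized to JSON
    created: string;
  }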
Now we need to write code to generate and load key pairs.
Open the src/federation.ts file and import the exportJwk(), generateCryptoKeyPair(), importJwk() functions provided by Fedify and the Key type we defined earlier:
Now let’s modify the actor dispatcher part as follows:
First of all, we should pay attention to the setKeyPairsDispatcher() method called in succession after the setActorDispatcher() method. This method connects the key pairs returned by the callback function to the account. By connecting the key pairs in this way, Fedify automatically adds digital signatures with the registered private keys when sending activities.
The generateCryptoKeyPair() function generates a new private key and public key pair and returns it as a CryptoKeyPair object. For your reference, the CryptoKeyPair type has the type { privateKey: CryptoKey; publicKey: CryptoKey; }.
The exportJwk() function returns an object representing the CryptoKey object in JWK format. You don’t need to know what the JWK format is. Just understand that it’s a standard format for representing cryptographic keys in JSON. CryptoKey is a web standard type for representing cryptographic keys as JavaScript objects.
The importJwk() function converts a key represented in JWK format to a CryptoKey object. You can understand it as the opposite of the exportJwk() function.
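Putting those three helpers together, the key-pairs dispatcher described here might look roughly like the following sketch. It assumes the db object and the User/Key types from the earlier steps, uses the Web Crypto algorithm names as the stored type strings (match these to your schema.sql), and omits error handling.

  federation
    .setActorDispatcher("/users/{identifier}", async (ctx, identifier) => {
      /* ...actor dispatcher as described above... */
      return null;
    })
    .setKeyPairsDispatcher(async (ctx, identifier) => {
      const user = db
        .prepare("SELECT * FROM users WHERE username = ?")
        .get(identifier) as User | undefined;
      if (user == null) return [];
      const pairs: CryptoKeyPair[] = [];
      for (const keyType of ["RSASSA-PKCS1-v1_5", "Ed25519"] as const) {
        const row = db
          .prepare("SELECT * FROM keys WHERE user_id = ? AND type = ?")
          .get(user.id, keyType) as Key | undefined;
        if (row == null) {
          // No stored key of this type yet: generate one and persist it as JWK.
          const pair = await generateCryptoKeyPair(keyType);
          db.prepare(
            "INSERT INTO keys (user_id, type, private_key, public_key) VALUES (?, ?, ?, ?)",
          ).run(
            user.id,
            keyType,
            JSON.stringify(await exportJwk(pair.privateKey)),
            JSON.stringify(await exportJwk(pair.publicKey)),
          );
          pairs.push(pair);
        } else {
          // Load the stored JWKs back into CryptoKey objects.
          pairs.push({
            privateKey: await importJwk(JSON.parse(row.private_key), "private"),
            publicKey: await importJwk(JSON.parse(row.public_key), "public"),
          });
        }
      }
      return pairs;
    });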
Now, let’s turn our attention back to the setActorDispatcher() method. We’re using a method called getActorKeyPairs(), which, as the name suggests, returns the key pairs of the actor. The actor’s key pairs are those very key pairs we just loaded with the setKeyPairsDispatcher() method. We loaded two pairs of keys in RSA-PKCS#1-v1.5 and Ed25519 formats, so the getActorKeyPairs() method returns an array of two key pairs. Each element of the array is an object representing the key pair in various formats, which looks like this:
It’s complex to explain here how CryptoKey, CryptographicKey, and Multikey differ, and why there need to be so many formats. For now, let’s just note that when initializing the Person object, the publicKey property accepts the CryptographicKey type and the assertionMethods property accepts the MultiKey[] (TypeScript notation for an array of Multikey) type.
By the way, why are there two properties in the Person object that hold public keys, publicKey and assertionMethods? Originally in ActivityPub, there was only the publicKey property, but later the assertionMethods property was added to allow registration of multiple keys. Similar to how we generated both RSA-PKCS#1-v1.5 and Ed25519 keys earlier, we’re setting both properties for compatibility with various software. If you look closely, you can see that we’re only registering the RSA-PKCS#1-v1.5 key to the legacy publicKey property (the first item in the array is the RSA-PKCS#1-v1.5 key pair, and the second item is the Ed25519 key pair).
Now that we’ve registered the cryptographic keys to the actor object, let’s check if it’s working well. Query the actor with the following command:
If it’s working correctly, you should see output like this:
You can see that the Person object’s publicKey property contains one CryptographicKey object in RSA-PKCS#1-v1.5 type, and the assertionMethods property contains two Multikey objects in RSA-PKCS#1-v1.5 and Ed25519 formats.
Now let’s check if we can actually view the actor we’ve created in Mastodon.
Unfortunately, the current server is only accessible locally. However, it would be inconvenient to deploy somewhere every time we modify the code for testing. Wouldn’t it be great if we could expose our local server to the internet without deployment for immediate testing?
Here’s where the fedify tunnel command comes in handy. In a terminal, open a new tab and enter this command followed by the port number of your local server:
This creates a disposable domain name and relays to your local server. It will output a URL that’s accessible from the outside:
Of course, you’ll see your own unique URL different from the one above. You can check if it’s connecting well by opening https://temp-address.serveo.net/users/johndoe in your web browser (replace with your unique temporary domain):
Copy your fediverse handle shown on the above web page, then go into Mastodon and paste it into the search box in the upper left corner:
If the actor we created appears in the search results as shown above, it’s working correctly. You can also click on the actor’s name in the search results to go to their profile page:
But this is as far as we can go. Don’t try to follow yet! For our actor to be followable from other servers, we need to implement an inbox.
In ActivityPub, an inbox is an endpoint where an actor receives incoming activities from other actors. All actors have their own inbox, which is a URL that can receive activities via HTTP POST requests. When another actor sends a follow request, writes a post, comments, or performs any other interaction, the corresponding activity is delivered to the recipient’s inbox. The server processes the activities that come into the inbox and responds appropriately, allowing it to communicate and function as part of the federated network.
For now, we’ll start by implementing the reception of follow requests.
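To give a feel for where this is headed, Fedify lets you register one listener per activity type on the inbox. A minimal Follow handler might look roughly like this sketch; the full tutorial also persists the follower, and the "johndoe" identifier here is a hypothetical stand-in for the single local account.

  import { Accept, Follow } from "@fedify/fedify";

  federation
    .setInboxListeners("/users/{identifier}/inbox", "/inbox")
    .on(Follow, async (ctx, follow) => {
      if (follow.objectId == null || follow.actorId == null) return;
      const follower = await follow.getActor(ctx); // resolve the remote actor
      if (follower == null) return;
      // Record the follower in the database here, then acknowledge the request:
      await ctx.sendActivity(
        { identifier: "johndoe" }, // hypothetical: the single local account
        follower,
        new Accept({ actor: follow.objectId, object: follow }),
      );
    });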
We need to create a follows table to hold the actors who follow you (followers) and the actors you follow (following). Add the following SQL to the src/schema.sql file:
Let’s create the follows table by executing src/schema.sql once again:
...
Read the original on fedify.dev »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.