10 interesting stories served every morning and every evening.
In September 2024, Amandla Thomas-Johnson was a Ph.D. candidate studying in the U.S. on a student visa when he briefly attended a pro-Palestinian protest. In April 2025, Immigration and Customs Enforcement (ICE) sent Google an administrative subpoena requesting his data. The next month, Google gave Thomas-Johnson’s information to ICE without giving him the chance to challenge the subpoena, breaking a nearly decade-long promise to notify users before handing their data to law enforcement.
Today, the Electronic Frontier Foundation sent complaints to the California and New York Attorneys General asking them to investigate Google for deceptive trade practices for breaking that promise. You can read about the complaints here. Below is Thomas-Johnson’s account of his ordeal.
I thought my ordeal with U.S. immigration authorities was over a year ago, when I left the country, crossing into Canada at Niagara Falls.
By that point, the Trump administration had effectively turned federal power against international students like me. After I attended a pro-Palestine protest at Cornell University—for all of five minutes—the administration’s rhetoric about cracking down on students protesting what we saw as genocide forced me into hiding for three months. Federal agents came to my home looking for me. A friend was detained at an airport in Tampa and interrogated about my whereabouts.
I’m currently a Ph.D. student. Before that, I was a reporter. I’m a dual British and Trinidad and Tobago citizen. I have not been accused of any crime.
I believed that once I left U.S. territory, I had also left the reach of its authorities. I was wrong.
Weeks later, in Geneva, Switzerland, I received what looked like a routine email from Google. It informed me that the company had already handed over my account data to the Department of Homeland Security.
At first, I wasn’t alarmed. I had seen something similar before. An associate of mine, Momodou Taal, had received advance notice from Google and Facebook that his data had been requested, and law enforcement eventually withdrew the subpoenas before the companies turned over his data.
I assumed I would be given the same opportunity. But the language in my email was different. It was final: “Google has received and responded to legal process from a law enforcement authority compelling the release of information related to your Google Account.”
Google had already disclosed my data without telling me. There was no opportunity to contest it.
To be clear, this should not have happened this way. Google promises that it will notify users before their data is handed over in response to legal processes, including administrative subpoenas. That notice is meant to provide a chance to challenge the request. In my case, that safeguard was bypassed. My data was handed over without warning—at the request of an administration targeting students engaged in protected political speech.
Months later, my lawyer at the Electronic Frontier Foundation obtained the subpoena itself. On paper, the request focused largely on subscriber information: IP addresses, physical address, other identifiers, and session times and durations.
But taken together, these fragments form something far more powerful—a detailed surveillance profile. IP logs can be used to approximate location. Physical addresses show where you sleep. Session times would show when you were communicating with friends or family. Even without message content, the picture that emerges is intimate and invasive.
What this experience has made clear is that anyone can be targeted by law enforcement. And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Together, they can combine state power, corporate data, and algorithmic inference in ways that are difficult to see—and even harder to challenge.
The consequences of what happened to me are not abstract. I left the United States. But I do not feel that I have left its reach. Being investigated by the federal government is intimidating. Questions run through your head. Am I now a marked individual? Will I face heightened scrutiny if I continue my reporting? Can I travel safely to see family in the Caribbean?
Who, exactly, can I hold accountable?
...
Read the original on www.eff.org »
I took this snapshot, and the rest of them, at the Natural History Museum of Los Angeles County’s Unearthed: Raw Beauty exhibition yesterday. Enjoy.
This entry was posted on March 3, 2026 at 7:47 pm and is filed under Uncategorized. You can follow any responses to this entry through the RSS 2.0 feed.
...
Read the original on wchambliss.wordpress.com »
It’s not about technology for its own sake. It’s about being able to implement your ideas.
Imagine you don’t know anything about programming, and you want to learn how to do it. You take a look at Amazon.com, and there’s a highly recommended set of books by Knute or something with a promising title, The Art of Computer Programming, so you buy them. Now imagine that it’s more than just a poor choice, but that all the books on programming are written at that level.
That’s the situation with books about writing compilers.
It’s not that they’re bad books, they’re just too broadly scoped, and the authors present so much information that it’s hard to know where to begin. Some books are better than others, but there are still the thick chapters about converting regular expressions into executable state machines and different types of grammars and so on. After slogging through it all you will have undoubtedly expanded your knowledge, but you’re no closer to actually writing a working compiler.
Not surprisingly, the opaqueness of these books has led to the myth that compilers are hard to write.
The best source for breaking this myth is Jack Crenshaw’s series, Let’s Build a Compiler!, which started in 1988. This is one of those gems of technical writing where what’s assumed to be a complex topic ends up being suitable for a first year programming class. He focuses on compilers of the Turbo Pascal class: single pass, parsing and code generation are intermingled, and only the most basic of optimizations are applied to the resulting code. The original tutorials used Pascal as the implementation language, but there’s a C version out there, too. If you’re truly adventurous, Marcel Hendrix has done a Forth translation (and as Forth is an interactive language, it’s easier to experiment with and understand than the C or Pascal sources).
As good as it is, Crenshaw’s series has one major omission: there’s no internal representation of the program at all. That is, no abstract syntax tree. It is indeed possible to bypass this step if you’re willing to give up flexibility, but the main reason it’s not in the tutorials is because manipulating trees in Pascal is out of sync with the simplicity of the rest of the code he presents. If you’re working in a higher level language–Python, Ruby, Erlang, Haskell, Lisp–then this worry goes away. It’s trivially easy to create and manipulate tree-like representations of data. Indeed, this is what Lisp, Erlang, and Haskell were designed for.
That brings me to A Nanopass Framework for Compiler Education [PDF] by Sarkar, Waddell, and Dybvig. The details of this paper aren’t quite as important as the general concept: a compiler is nothing more than a series of transformations of the internal representation of a program. The authors promote using dozens or hundreds of compiler passes, each being as simple as possible. Don’t combine transformations; keep them separate. The framework mentioned in the title is a way of specifying the inputs and outputs for each pass. The code is in Scheme, which is dynamically typed, so data is validated at runtime.
After writing a compiler or two, then go ahead and plunk down the cash for the infamous Dragon Book or one of the alternatives. Maybe. Or you might not need them at all.
...
Read the original on prog21.dadgum.com »
Your Backpack Got Worse On Purpose

In 1986, a corporation that made women’s lingerie bought every backpack brand you’ve ever trusted. VF Corporation started as Vanity Fair Mills. Bras and underwear. They paid $762 million for a company called Blue Bell and picked up JanSport in the deal. That acquisition made them the largest publicly traded clothing company in the world.

Then they went shopping.

In 2000, they bought The North Face. Same year, they bought Eastpak. In 2004, Kipling. In 2007, Eagle Creek. By the time they were done, VF Corporation controlled an estimated 55% of the US backpack market.

More than half. One company.

Every time you stood in a store in the 2010s and compared a JanSport to a North Face to an Eastpak, you were comparing three labels owned by the same parent corporation. Same earnings call. Same margin targets. Same quarterly pressure. The sense that you were choosing between competitors was a fiction that VF Corp had no incentive to correct.

Competition is what kept these brands honest when they were independent. If JanSport built a shitty bag in 1985, you walked across the aisle and bought an Eastpak instead. That threat disciplined every material choice, every stitch count, every zipper spec. Once they all report to the same parent, the discipline evaporates. Nobody needs to outbuild anybody. The only pressure left is the one coming from above: hit the margin target.

The easiest way to hit a margin target is to make everything a little worse, across the board, all at once.

Denier count is the most measurable indicator of fabric durability. It measures fiber thickness. A bag made with 1000-denier Cordura nylon can survive years of daily use. Drop that to 600-denier polyester and you have a bag that looks identical on the shelf and lasts half as long.

YKK makes the best zippers on earth. They’re Japanese, they cost more per unit, and brands that care about longevity use them because a zipper failure kills a bag faster than fabric wear. On VF Corp’s lower-tier models, YKK hardware got swapped for generic alternatives. A few cents saved per unit across millions of bags.

Stitching density went down. More stitches per inch means stronger seams. Fewer stitches means faster production. When you’re running millions of units through factories in Vietnam, Bangladesh, and Cambodia, shaving seconds off each seam saves serious money. It also creates failure points at every spot where the bag takes stress. Strap junctions. Zipper terminations. The bottom panel.

None of this shows up on the shelf. The colors are right. The logos are crisp. The product photography is excellent. You discover what you actually bought three months in, when the stitching pulls apart at every stress point.

Someone in the industry pushed back on an earlier version of this piece with a fair point: VF Corp’s brands still operate with their own design teams and their own headquarters. The brands aren’t literally merged. And the premium tiers within North Face and JanSport still use quality materials. The Summit Series from TNF still has Cordura. You can still find a JanSport with YKK zippers if you know where to look.

All of that is true. But it actually makes the argument worse, not better.

The fact that VF Corp kept the premium tiers intact while degrading the entry-level and mid-range products means this was a deliberate segmentation strategy. They still make the good version. They just also sell a garbage version under the same trusted name, in the same stores, to the people who don’t know the difference. The brand reputation built by decades of quality products is now being used to move cheap products to buyers who trust the logo.

Walmart’s JanSport and REI’s JanSport are not the same bag. But they carry the same name, and that’s the point. The name is doing the selling. The product doesn’t have to.

The warranty is doing the same thing

JanSport still advertises a lifetime warranty. It sounds like a company that stands behind its product.

Go try to use it.

You ship the bag back at your own expense. That runs $12 to $25 depending on size and where you live. You wait three to six weeks. That’s the current turnaround per JanSport’s own warranty page. Then they evaluate the damage.

“Normal wear and tear” isn’t covered. Only “defects in materials and workmanship.” Think about what that means for a bag engineered to last two years. When it starts falling apart at eighteen months, that failure can be classified as the product reaching its expected lifetime, not as a defect. The warranty language is structurally designed to exclude the exact type of failure the product is now built to have.

People who do get warranty replacements report receiving bags that are worse than the one they sent in. Thinner fabric. Cheaper hardware. You mailed back a 2016 JanSport and got a 2025 JanSport, and those are fundamentally different products.

The warranty used to be legendary. JanSport used to be the brand people cited when they talked about companies that actually stood behind their stuff. That reputation still exists in people’s memories. The warranty now runs on that leftover trust.

One person told me they called about getting a zipper replaced on a JanSport from the late 90s. They were told it was normal wear and tear. They tried tailors, got quoted $50 to $100 for a new zipper. They looked at buying a new JanSport and saw how far the quality had fallen. They ended up buying a used backpack at a thrift store for four dollars.

Ten to twenty used bags for the price of one new one that’ll fall apart. That’s where we’re at.

The math that makes this intentional

Price of a bag divided by years it actually lasts. That’s your cost per year.

A $35 JanSport that dies in eighteen months: $23 per year. Add the shipping cost when you try the warranty. Add the replacement cost when the claim gets denied. Add your time.

A $200 bag that lasts ten years: $20 per year. Already cheaper. At fifteen years, which the well-built ones consistently do, you’re at $13 per year.

The “expensive” bag costs less. But VF Corp doesn’t want you to do this math, because the $35 bag creates a repeat customer every eighteen months. The $200 bag creates one transaction and zero follow-ups. From a shareholder’s perspective, the bag that falls apart is the better product.

That’s the business model. Repeat failure, repeat purchase, repeat revenue. The quality decline isn’t a side effect. It’s the strategy.

And then they tried to sell the whole thing

In 2021, VF Corp sold Eagle Creek to a former employee who basically rescued the brand from being shut down.

By 2023, VF Corp announced it was exploring “strategic alternatives” for its entire remaining backpack division. JanSport. Eastpak. Kipling. All of them potentially up for sale because they weren’t generating enough profit.

The brands your parents trusted went from independent companies to conglomerate assets to margin optimization targets to potential fire-sale candidates. All in under forty years.

And something worth knowing: VF Corporation sold its lingerie business (the one it was literally founded on) back in 2007. Vanity Fair intimates went to Fruit of the Loom. The company shed the thing it actually knew how to make so it could focus on extracting value from the brands it bought. They didn’t build any of these outdoor brands. They acquired them, optimized them, and when the optimization stopped producing returns, started looking for the exit.

This is the pattern. Acquisition. Cost optimization. Quality decline. Warranty narrowing. Brand equity extraction. And eventually, divestiture.

It happened to your backpack. The same playbook is running right now on your power tools, your boots, your sunglasses, and about a dozen other product categories where a company you trusted quietly got absorbed by a corporation you’ve never heard of.

I’ll be writing about those next.
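The cost-per-year arithmetic above can be sketched in a few lines. The prices and lifespans are the article’s own illustrative figures, not measured data:

```python
# Cost-per-year comparison using the article's example figures.
def cost_per_year(price_usd: float, years_lasted: float) -> float:
    return price_usd / years_lasted

cheap = cost_per_year(35, 1.5)        # $35 bag that dies at 18 months
durable_10 = cost_per_year(200, 10)   # $200 bag lasting ten years
durable_15 = cost_per_year(200, 15)   # same bag lasting fifteen years

print(f"$35 bag, 18 months: ${cheap:.0f}/yr")       # ~$23/yr
print(f"$200 bag, 10 years: ${durable_10:.0f}/yr")  # $20/yr
print(f"$200 bag, 15 years: ${durable_15:.0f}/yr")  # ~$13/yr
```

And this is before adding warranty shipping and replacement costs to the cheap bag’s side of the ledger, which only widens the gap.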
...
Read the original on www.worseonpurpose.com »
I have for years been interested in sleep research due to my professional involvement in memory and learning. This article attempts to produce a synthesis of what is known about sleep with a view to practical applications, esp. in people who need top-quality sleep for their learning or creative achievements. Neurophysiology of sleep is an explosively growing branch of science. Many theories that are currently contested will soon be forgotten as a result of new findings. Consequently, this text is likely to grow old very quickly (compare the old version from the year 2000 here). Still, some basic truths about sleep are well-established, and practical conclusions can be drawn to the benefit of human creativity and intellectual accomplishment. In this text, I provide some links to research papers and popular-scientific articles that advocate disparate and contradictory theories. Please consult other sources to be certain you do not get a one-sided view! This article includes some indications on how to use free running sleep in the treatment of insomnia, advanced and delayed phase shift syndromes, and some other sleep disorders. If your own experience can contribute to the ideas presented herein, I will gladly hear from you (esp. in the context of learning and creativity).
2 Importance of sleep
2.2 Why do we sleep?
2.4 If you do not sleep, you die!
2.4.4 Why do we die without sleep?
2.5 Two components of sleep
2.5.4 The fundamental theorem of good sleep
2.5.4.1 When good sleep might not come?
3 Formula for good sleep
3.2 Should we free run our sleep?
3.4 Optimizing the timing of brainwork
3.6 Kill the alarm clock!
3.7 Sleep inertia
3.7.3 How can I recover from sleep inertia?
4 Sleep habits
4.4 Charting sleep with SleepChart
4.7 Biphasic nature of human sleep
4.8 Segmented sleep
4.8.4 Examples of segmented sleep
4.9 Delayed Sleep Phase Syndrome (DSPS)
4.9.4 Asynchronous DSPS
4.9.6 28 hour day schedule
4.9.6.1 28 hour day in DSPS
4.9.7 Curing DSPS and insomnia
4.10 Advanced Sleep Phase Syndrome (ASPS)
4.13 Baby sleep
4.13.1 How to make babies sleep well?
4.13.7 What about the mom?
4.13.8 Why babies sleep so much?
4.14 Insomnia
5 Napping
5.1 Napping is good
5.1.3 To nap or not to nap? Nap!
5.2 Napping myths
5.2.3 Myth #3: Make sure you wake up from Stage 2 NREM
5.2.4 Myth #4: The circadian cycle can be ignored or abolished
5.4 One nap per day is enough
5.5 Polyphasic sleep
5.5.4 To sleep or not to sleep polyphasically
5.5.7 Sleep and creativity: Less is more
5.5.11 Polyphasic sleep: scientific challenge
5.5.11.1 Are early risers better at polyphasic adaptation?
5.5.11.2 Why so little research into polyphasic sleep?
5.5.12 Charting polyphasic sleep
5.5.17 Sustainability of polyphasic sleep
5.5.17.2 The limits of the body clock training
6 Factors that affect sleep
6.7 Exercise
6.7.2 What is the best time to exercise?
6.12 Learning
6.12.1 Learning should help you sleep
7 Sleep and learning
7.1 Sleep length
7.1.1 Optimum length of sleep
7.1.1.5 Length of sleep among users of SuperMemo
7.1.3 Jim Horne and Daniel Kripke
7.1.4 Effects of sleep duration and sleep phase on learning
7.2 How sleep affects learning?
7.2.1 Why is sleep important for learning?
7.3 Studying sleep and learning with SuperMemo
7.3.6 Recall vs. Consolidation
7.4 How learning affects sleep?
7.5 Sleep and school
8 Physiology of sleep
8.1 Why do we fall asleep?
8.1.1 Initiation of sleep
8.1.4 Phase response curve (PRC)
8.1.4.1 Changing the length of the circadian period
8.1.5 Recursive phase response curve (rPRC)
8.2 NREM and REM sleep
8.3 Why do we need sleep?
8.3.2 Sleep theories
8.3.3 Sleep and memory
8.3.3.1 NREM and memory
8.3.3.2 REM and memory
8.3.5 Neural optimization in sleep
8.3.7 Robert Vertes and Jerome Siegel
8.3.7.1 1. Sleep does not serve a role in declarative memory?
8.3.7.2 2. REM sleep deprivation does not lead to cognitive impairment?
8.3.7.3 3. Sleep-dependent enhancement of procedural learning has not been proven?
8.3.7.4 4. Learning in waking is far more significant than overnight enhancements?
8.3.7.6 How can random impulsations in REM make a sense in dreams?
9 Myths and facts
13 Summary
The good educator insists on exercise, play, and plentiful sleep: “the great cordial of nature.”
It is everyone’s dream to wake up fresh, happy, and ready for action on a daily basis. Sadly, in the modern world, only a small minority lives that dream. Yet the dream is within reach for most healthy people given:
I hope that this article compiles all the basic ingredients of knowledge that are helpful in accomplishing refreshing sleep. As for the sacrifice, it is important to begin with the understanding that one cannot eat one’s cake and have it too. Healthy sleep may be incompatible with some modern habits, some cravings, or some lifestyle choices. At worst, refreshing sleep may be incompatible with one’s job or even long-term goals. Due to the latter fact, this article cannot provide a solution for everyone. Moreover, having a happy and fresh mind on a daily basis is a difficult thing to accomplish even with an arsenal of knowledge and full focus on good sleep. However, let me state it emphatically, good sleep on most nights is feasible for most people!
This article was originally written a decade ago. I have always been interested in memory, learning, and sleep. In addition, in my job, sleep is as important as oxygen. As we all move deeper into the Information Age and Knowledge Economy, the issues discussed herein will become more and more important for each of us. After writing the original article, I had the great pleasure of getting in touch with hundreds of people experiencing various sleep problems. I came to see first hand how knowledge of sleep helps solve their problems. I could also see how the industrialized age lays obstacles in one’s quest for good sleep and high productivity. I have witnessed a true epidemic of sleep phase disorders, an explosion of interest in polyphasic sleep, and an exponential increase in interest in the matters of sleep in general. Despite my pleas, many people just cannot avoid using an alarm clock, running all-nighters before exams, waking their kids cranky for school, popping pills before sleep, leaving babies in their cots to cry it out for sleep, etc. The picture would be pretty sad and alarming were it not for the fact that there is hope in knowledge. With a degree of determination, everyone can improve his, her, or their kids’ sleep.
This article is a compilation of the most important and the most interesting things about the biology of sleep. It is supposed to help you gain knowledge needed to achieve high quality refreshing sleep that will boost your mental powers. The article explains why sleep is vitally important for health and for the brain. It argues that sleep deserves highest respect, and that most people could get excellent sleep if they only followed the prescribed rules.
Since writing the original Good sleep, good learning, good life, tremendous progress has been made in the science of sleep. My own work with tools such as SleepChart and SuperMemo has shed some interesting light on the connection between sleep and learning. As I kept addressing the progress in sleep science in minor articles and FAQs, some visitors to supermemo.com complained that valuable nuggets of information are dispersed throughout the site instead of being organized in a more encyclopedic manner in a single article. Here then comes a comprehensive compilation, in which I would like to retain the focus on practical knowledge that is helpful in achieving good sleep. However, I would still like to smuggle in some lesser known research findings that might be inspiring for an average reader and/or a scientist working in the fields of sleep, memory, and learning. If you believe I left out anything important that others should know, please let me know.
As the article grew to be insanely long, you may wish to begin with the summary at the bottom of the article. And if even that is too long, here are the highlights:
respect sleep as your tool for high IQ and good learning
free running sleep can help you resolve many sleep problems
biphasic sleep schedule is probably the healthiest schedule for creative people
do not wake up kids for school; if they cannot wake up in time, let them skip a class or two, or consider homeschooling
let babies and young children sleep on demand, co-sleeping is a great idea (even if many pediatricians will tell you otherwise)
exercise, learning, and sleep are your best tools for brain growth!
avoid regulating sleep and alertness with substances, esp. sleeping pills, alcohol, illegal drugs, nicotine, and caffeine
Incremental writing: Due to the size of the material, this article was written using a technique called incremental writing. Incremental writing is helpful in organizing a large body of earlier writings into a single linear piece. The main advantage of incremental writing is a reasonable degree of coherence despite speedy processing of materials taken from disparate sources. Texts produced with incremental writing are particularly suitable for learning with the help of incremental reading as they produce small independent Wikipedia-style sub-articles. For a linear reader, however, this may mean a degree of bloatedness and an annoying repetitiveness of the main themes for which I apologize. If the size of the article is intimidating, you could try reading it incrementally (e.g. with SuperMemo 2004 Freeware).
References: Due to the volume of the material, I was not able to provide references for all statements included in the text. Some of these are common sense, some are common knowledge, others I took from memory or from SuperMemo without digging down to the direct source. If you cannot find a reference for a particular claim, please let me know.
...
Read the original on super-memory.com »
Spotify and several major record labels, including UMG, Sony, and Warner, secured a $322 million default judgment against the unknown operators of Anna’s Archive. The shadow library failed to appear in court and briefly released millions of tracks that were scraped from Spotify via BitTorrent. In addition to the monetary penalty, a permanent injunction required domain registrars and other parties to suspend the site’s domain names.
Anna’s Archive is generally known as a meta-search engine for shadow libraries, helping users find pirated books and other related resources.
However, last December, the site announced that it had also backed up Spotify, which came as a shock to the music industry.
Anna’s Archive initially released only Spotify metadata, and no actual music, but that put the music industry on high alert. Together with the likes of Universal, Warner, and Sony, Spotify filed a lawsuit days later, hoping to shut the site down.
Through a preliminary injunction targeting domain registrars and registries, the shadow library lost several domain names. However, not all were taken down, and the site registered various new domain names as backups.
The legal pressure also appeared to pay off in other ways. Not long after the lawsuit was filed, the shadow library removed the Spotify listing for their torrents page. The same applies to the first batch of music files that was accidentally released in February.
The site’s operator, Anna’s Archivist, hoped that these removals would motivate the music industry to back down, but that wasn’t the case. Instead, they returned to court requesting a $322 million default judgment after the defendant failed to show up in court.
Yesterday, Judge Jed Rakoff of the Southern District of New York entered a default judgment against the site’s unknown operators, awarding Spotify and the major labels the requested $322 million damages award in full.
The music labels get the statutory maximum of $150,000 in damages for around 50 works. Spotify adds a DMCA circumvention claim of $2,500 for 120,000 music files, bringing the total to more than $322 million.
The plaintiffs previously described their damages request as “extremely conservative.” The DMCA claim is based only on the 120,000 files, not the full 2.8 million that were released. Had they applied the $2,500 rate to all released files, the damages figure would exceed $7 billion.
Anna’s Archive did not show up in court, and the operators of the site remain unidentified. The judgment attempts to address this directly, by ordering Anna’s Archive to file a compliance report within ten business days, under penalty of perjury, that includes valid contact information for the site and its managing agents.
Whether the site will comply with this order is highly uncertain.
For now, the monetary judgment is mostly a victory on paper, as recouping money from an unknown entity is impossible. For this reason, the music companies also requested a permanent injunction.
In addition to the damages award, Rakoff entered a permanent worldwide injunction covering ten Anna’s Archive domains: annas-archive.org, .li, .se, .in, .pm, .gl, .ch, .pk, .gd, and .vg.
Domain registries and registrars of record, along with hosting and internet service providers, are ordered to permanently disable access to those domains, disable authoritative nameservers, cease hosting services, and preserve evidence that could identify the site’s operators.
The judgment names specific third parties bound by those obligations, including Public Interest Registry, Cloudflare, Switch Foundation, The Swedish Internet Foundation, Njalla SRL, IQWeb FZ-LLC, Immaterialism Ltd., Hosting Concepts B.V., Tucows Domains Inc., and OwnRegistrar, Inc.
Anna’s Archive is also ordered to destroy all copies of works scraped from Spotify and to file a compliance report within ten business days, under penalty of perjury, including valid contact information for the site and its managing agents. That last requirement could prove significant, given that the identity of the site’s operators remains unknown.
In theory, Anna’s Archive has the option to prevent the domain suspension. The permanent injunction allows the site to seek relief from this measure, after showing that it has paid the full $322 million damages award and complied with all injunctive obligations.
That’s an unlikely option, to say the least. At the same time, however, it is not guaranteed that the site’s domain names will be suspended.
As reported previously, several domain names, including the Greenland-based .gl version, are linked to registries and registrars outside the jurisdiction of the U.S. court. As such, they previously did not comply with the preliminary injunction, and it is unknown whether the latest order changes that.
A copy of the default judgment entered by Judge Rakoff is available here (pdf).
...
Read the original on torrentfreak.com »
Last week we learned about Anthropic’s Mythos, a new LLM so “strikingly capable at computer security tasks” that Anthropic didn’t release it publicly. Instead, only critical software makers have been granted access, providing them time to harden their systems.
We quickly blew through our standard stages of processing big AI claims: shock, existential fear, hype, skepticism, criticism, and (finally) moving onto the next thing. I encouraged people to take a wait-and-see approach, as security capabilities are tailor-made for impressive demos. Finding exploits is a clearly defined, verifiable search problem. You’re not building a complex system, but poking at one that exists. A problem well suited to throwing millions of tokens at.
Yesterday, the first third-party analysis landed, from the AI Security Institute (AISI), largely supporting Anthropic’s claims. Mythos is really good, “a step up over previous frontier models in a landscape where cyber performance was already rapidly improving.”
The entire report is worth reading, but I want to focus on the following chart, detailing the ability of different models to successfully complete a simulated, complex corporate network attack:
“The Last Ones” is, “a 32-step corporate network attack simulation spanning initial reconnaissance through to full network takeover, which AISI estimates to require humans 20 hours to complete.” The lines are the average performance across multiple runs (10 runs for Mythos, Opus 4.6, and GPT-5.4), with the “max” lines representing the best of each batch. Mythos was the only model to complete the task, in 3 out of its 10 attempts.
This chart suggests an interesting security economy: to harden a system we need to spend more tokens discovering exploits than attackers spend exploiting them.
AISI budgeted 100M tokens for each attempt. That’s $12,500 per Mythos attempt, $125k for all ten runs. Worryingly, none of the models given a 100M budget showed signs of diminishing returns. “Models continue making progress with increased token budgets across the token budgets tested,” AISI notes.
If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
You don’t get points for being clever. You win by paying more. It echoes cryptocurrency’s proof-of-work scheme, where success is tied to raw computational work. It’s a low-temperature lottery: buy the tokens, maybe you find an exploit. Hopefully you keep trying longer than your attackers.
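As a back-of-the-envelope illustration of the lottery framing, using the AISI figures quoted above (100M-token runs at $12,500 each, Mythos completing the attack in 3 of 10 attempts), the math can be sketched like this. Treating runs as independent draws is my assumption, not something AISI measured:

```python
# Toy model of the token lottery. COST_PER_RUN and P_SUCCESS_PER_RUN come
# from the AISI numbers quoted in the text; independence of runs is assumed.

COST_PER_RUN = 12_500        # USD for one 100M-token attempt
P_SUCCESS_PER_RUN = 0.3      # 3 of 10 Mythos runs completed "The Last Ones"

def p_find_exploit(runs: int, p: float = P_SUCCESS_PER_RUN) -> float:
    """Probability that at least one of `runs` independent attempts succeeds."""
    return 1 - (1 - p) ** runs

def runs_to_confidence(target: float, p: float = P_SUCCESS_PER_RUN) -> int:
    """Smallest number of runs whose combined success probability meets `target`."""
    runs = 0
    while p_find_exploit(runs, p) < target:
        runs += 1
    return runs

runs = runs_to_confidence(0.95)
print(runs, runs * COST_PER_RUN)  # prints: 9 112500
```

Under these toy numbers, a defender who wants 95% confidence of finding the exploit first needs nine runs, roughly $112,500 — a sound spend only if the exploit is worth more than that to an attacker.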
This calculus has a few immediate takeaways:
First, reimplementing dependencies in-house becomes defensible. For those of you who aren’t exposed to AI maximalists, this statement feels absurd. But lately, after the LiteLLM and Axios supply chain scares, many have argued for exactly that: rebuilding dependency functionality with coding agents.
Classical software engineering would have you believe that dependencies are good (we’re building pyramids from bricks), but imo this has to be re-evaluated, and it’s why I’ve grown increasingly averse to them, preferring to use LLMs to “yoink” functionality when it’s simple enough and possible.
If security is purely a matter of throwing tokens at a system, Linus’s law that, “given enough eyeballs, all bugs are shallow,” expands to include tokens. If the corporations that rely on an OSS library spend tokens to secure it, it will likely end up more secure than your own budget allows. Certainly, this has complexities: cracking a widely used OSS package is inherently more valuable than hacking a one-off implementation, which incentivizes attackers to spend more on OSS targets.
Second, hardening will be an additional phase for agentic coders.
We’ve already been seeing developers break their process into two steps, development and code review, often using different models for each phase. As this matures, we’re seeing purpose-built tooling meet this pattern. Anthropic launched a code review product that costs $15-20 per review.
If the above Mythos claims hold, I suspect we’ll see a three-phase cycle: development, review, and hardening.
Development: Build the feature, synchronously, with human input driving each step.

Review: Document, refactor, and other gardening tasks, async, applying best practices with each PR.
Hardening: Identify exploits, autonomously, until the budget runs out.
Critically, human input is the limiter for the first phase and money is the limiter for the last. This quality inherently makes them separate stages (why spend to harden before you have something?). Previously, security audits were rare, discrete, and inconsistent. Now we can apply them constantly, within an optimal (we hope!) budget.
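A minimal sketch of what a money-limited hardening phase could look like — all names here are hypothetical, not from any shipping product. The loop simply keeps buying exploit-search attempts until the budget runs out:

```python
from dataclasses import dataclass

@dataclass
class HardeningRun:
    tokens_spent: int
    findings: list

def harden(run_attempt, budget_tokens: int, tokens_per_attempt: int) -> HardeningRun:
    """Spend a fixed token budget on repeated exploit-search attempts.

    `run_attempt` stands in for one agentic exploit search; it returns a
    finding or None. Money, via `budget_tokens`, is the only stop condition.
    """
    spent, findings = 0, []
    while spent + tokens_per_attempt <= budget_tokens:
        spent += tokens_per_attempt
        finding = run_attempt()
        if finding is not None:
            findings.append(finding)
    return HardeningRun(spent, findings)

# Toy usage: a fake agent that "finds" something on its third attempt.
attempts = iter([None, None, "auth bypass in /login", None])
result = harden(lambda: next(attempts), budget_tokens=400, tokens_per_attempt=100)
print(result.findings)  # prints: ['auth bypass in /login']
```

The point of the sketch is the shape, not the details: unlike development and review, nothing here waits on a human — the phase terminates exactly when the money does.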
Code remains cheap, unless it needs to be secure. Even if costs fall with inference optimizations, unless models reach the point of diminishing security returns, you still need to buy more tokens than attackers do. The cost is fixed by the market value of an exploit.
...
Read the original on www.dbreunig.com »
The editor in chief of this blog was born in 2004. She uses the 1997 window manager, Enlightenment E16, daily. In this article, I describe the process of fixing a show-stopping, rare bug that dates back to 2006 in the codebase. Surprisingly, the issue has roots in a faulty implementation of Newton’s algorithm.
Some may find it weird, but I actually greatly enjoy using Enlightenment E16 as my window manager. It’s themable, hackable, lightweight (24MB peak RSS!), amenable to heavy keyboard users like myself, and most importantly - it looks gorgeous:
E16 first came to be in 1997, thanks to Carsten Haitzler, and it has been in development ever since. Most have moved to E17 and other newer versions; a community of hardcore enthusiasts still uses E16, and I am one of them. The codebase is quite old, and it has accumulated a lot of technical debt over the years.
Bugs always come out of the woodwork in a time scramble, and this one likely sensed a prime opportunity: I was doing a lot of last-minute work on slides for a course that I will be teaching. I had a couple of PDFs with lecture slides and an exercise sheet typeset in LaTeX. At some point, I opened one of them in Atril, and the entire desktop froze.
I killed the X11 session from a TTY. Sadly, the hang was deterministic: every time I opened that specific PDF.
Attaching gdb to the live process showed every sample parked in imlib2’s font cache, under the same e16 caller:
#0 __strcmp_evex ()
#1 __imlib_hash_find (hash=0x55bc9c111420, key="\001\001\001\001\001") object.c:172
#2 __imlib_font_cache_glyph_get (fn=…, index=0) font_draw.c:30
#3 __imlib_font_get_next_glyph (… utf8="Kickoff.pdf — Introduction…") font_main.c:218
#4 __imlib_font_query_advance (…) font_query.c:89
#5 imlib_get_text_advance (…) api_text.c:231
#6 Efont_extents (…) text_ift.c:87
#7 _ift_TextSize (…) text_ift.c:156
#8 TextstateTextFitMB (ts=…, textwidth_limit=291) text.c:350
#9 TextstateTextFit (…) text.c:559
#10 TextstateTextDraw (… text="Kickoff.pdf — Introduction…") text.c:638
#11 ITApply (…) iclass.c:930
#12 ITApply (…) iclass.c:884
#13 _BorderWinpartITclassApply (ewin=…, i=2, force=1) borders.c:179
#14 EwinBorderUpdateInfo (ewin=…) borders.c:300
#15 EwinChangesProcess (…) ewins.c:2141
#16 EwinEventPropertyNotify (ewin=…, ev=…) ewins.c:1438
#21 main (…) main.c:320
Re-attaching repeatedly showed the program was not deadlocked. __imlib_font_cache_glyph_get was being called with different glyph indices (0, 20, 73, 81, 82, 87, 88, …) each time. So the inner font-measurement was making progress; the loop was somewhere outside it.
After some fudging, I found out that Frame 8 (TextstateTextFitMB at text.c:350) was the constant. That’s a ts->ops->TextSize(ts, new_line, 0, pw, &hh, &ascent); call inside the middle-ellipsis truncation loop that tries to fit a string into textwidth_limit = 291 pixels by nuking characters out of the middle. It is used when rendering the title of the PDF, which happened to also be the title of the window, too long for the decoration to contain.
Dumping the frame’s locals across many samples revealed a clean two-state oscillation:
nuke_count = 8 nc2 = 36 wc_len = 81 len_n = 76
nuke_count = 11 nc2 = 35 wc_len = 81 len_n = 73
nuke_count = 8 nc2 = 36 wc_len = 81 len_n = 76
I always saw two trial truncations, forever, same text each time.
We start at the lowest common denominator - there is likely a logic bug here.
The loop is of particular interest to us. Abridged:
This is a Newton-style search that estimates how many more/fewer wchars to nuke based on how far off width is from textwidth_limit, using cw = width / len_n as the derivative (average pixels per char). Seeing clever and crafty solutions like this is delightful. But to anyone who has ever implemented Newton’s method, this code screams something obvious: “Where is your iteration limit?!”. Newton’s method can fail to converge, and it can also overshoot and diverge - all depending on the starting point, the nature of the function, and the quality of the derivative estimate. In this case, the method was oscillating between two points forever.
To make matters worse, the exit tolerance is tight: accept only nc2 between [0, 3*cw). This also explains why ordinary short titles never tripped it - on shorter strings or with wider cw, the branch kicks in and the step becomes 1, which converges.
I have made three defensive changes, applied symmetrically to both the multi-byte and ASCII loops:
* Capped iteration counts at 32. Past the cap, if the current trial fits nc2 >= 0 we just accept it; otherwise bump nuke_count by 1 and retry. This guarantees termination in bounded time and picks the first fitting trial once the Newton step has been shown to oscillate.
* We now floor nuke_count at 1 inside the loop, so a negative correction can never produce the degenerate tail-overlaps-head string.
* Floor cw at 1, so a pathological zero-width measurement cannot turn the step formulas into a divide-by-zero.
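The actual fix lives in E16’s C code (text.c); purely as an illustration, here is a Python sketch of the same three guards - the iteration cap, the floor on the nuke count, and the floor on cw. Names and structure are mine, and the step is simplified to always move forward, which is a stronger change than the real patch:

```python
# Illustrative only: a middle-ellipsis fit loop with the three guards
# described above. Not the E16 source; the real code is C in text.c.

MAX_ITERS = 32  # guard 1: hard cap on Newton-style iterations

def ellipsize(text: str, nuke: int) -> str:
    """Drop `nuke` characters from the middle, splicing in an ellipsis."""
    head = (len(text) - nuke) // 2
    tail = len(text) - nuke - head
    return text[:head] + "..." + text[len(text) - tail:]

def fit_middle(text: str, limit: int, measure) -> str:
    """Shrink `text` from the middle until measure(trial) <= limit pixels."""
    if measure(text) <= limit:
        return text
    nuke = 1
    for _ in range(MAX_ITERS):
        trial = ellipsize(text, nuke)
        width = measure(trial)
        if width <= limit:
            return trial                          # first fitting trial wins
        cw = max(1, width // max(1, len(trial)))  # guard 3: floor cw at 1
        step = (width - limit) // cw              # Newton step: overshoot / px-per-char
        nuke = max(1, nuke + max(1, step))        # guard 2: floor the correction
        if nuke >= len(text):
            break
    return ellipsize(text, max(1, len(text) - 1)) # past the cap: maximal truncation
```

With a toy fixed-width measure of 7 pixels per character, an 80-character title against a 291-pixel limit converges in three iterations instead of oscillating. The real loop keeps the bidirectional Newton step and only caps it; forcing the correction positive here trades fidelity for a shorter sketch.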
Any window whose WM_NAME is long enough that the middle-ellipsis search falls into the overshoot regime reproduces this. The one in the wild:
Kickoff.pdf — Introduction to Information Theory Session 1: kickoff & first topic
Newer is not necessarily better. Fresh software carries brand-new bugs for you and the maintainers to enjoy, now that the barrier to contribute is much lower thanks to Large Language Models. But sometimes stable maintainers do absurdly dumb things too:
On the 3rd of April 2026, I remarked that fgetxattr(54321, NULL, NULL, 0); apparently crashes yesterday’s 6.6.y LTS kernel. It is a call that should simply return -1 and set errno, since the arguments are invalid, but a stable maintainer had patched the check out wholesale.
Then, on the 8th of April, the awful commit was reverted. No CVE has been assigned, despite an obvious denial-of-service attack vector having been introduced.
If this is what happens by mistake on a daily basis, what happens when the supply chain is compromised and a malicious actor intentionally introduces a bug? The mind boggles. Back when the XZ backdoor was introduced, I was scrolling through news on my Debian Sid laptop with some code compiling in the background. I learned of a backdoor in XZ Utils, potentially introduced by a state actor in version v5.6.0. Thinking back to the fact that I do, indeed, run a bleeding-edge distro and update often, I immediately ran apt list --upgradable | grep xz-utils. Sure enough, the stains on my laptop from the coffee I spat out through my nose were pretty tough to deal with.
On the other hand, the number of bugs in private checkouts of crusty old software maintained by competent developers will monotonically decrease. If I need a feature, I will implement it. If there is a problem, I only have myself to blame. There is no supply chain to compromise, and if a determined, targeted state actor wants sudo privileges on my machine - they will find a way to get it anyway. Oh, also, I probably wasn’t going to use whatever features my XFWM updates (the WM I used to use before!) were going to bring.
...
Read the original on iczelia.net »
*Menu prices may differ at special location restaurants, selected restaurants and for delivery.
English menu is available for your convenience
McDonald’s menu and allergen/nutrition information is available in English for the convenience of our customers, except for the information listed below, which is currently available only in Japanese in McDonald’s Japan website.
Information and notes on products and availability
*McDonald’s Japan’s allergen information only covers the 8 ingredients which must be indicated on the label and the 20 which are recommended by the Japanese Food Labeling Standard (Food Labeling Act) as of September 2024. You can also place an order in English on our official app. Several restaurants also have English menus on hand, so please ask our crew if you are looking for an English menu.
※Click the image or product name to learn more about allergen/nutrition information, and other details.
※All displayed prices include tax; a single, tax-inclusive price applies to both eat-in and takeout (inc. drive-thru) orders (tax-exclusive prices may differ).
※Menu prices may differ at special location restaurants and selected restaurants.
※Some products are not available at all restaurants.
※“Bai Burger” menu is available for all regular burgers except for “Roasted Soy Sauce Double Thick Beef” and “Roasted Soy Sauce Egg Bacon Thick Beef”.
※Breakfast is available until 10:30am, the Regular Menu is available from 10:30am, and the Yoru Mac menu is available from 5:00pm.
※Asa Mac orders are accepted until 10:20am for Mobile Order & Pay and McDelivery
※HiruMac is available between 10:30am and 2:00pm on weekdays
※McShake®, McFloat®, Soft Twist, McFlurry® are available between 10:30am and 1:00 am the next day
※McShake® may be mixed with other flavors due to the nature of the machine. For this reason, the allergy information may differ from the usual information during limited-time product sales. Please check the latest information each time you order.
※For customized products, exact information may vary. Please be aware that customization is not a service that completely eliminates allergens.
※Oreo and the design of the Oreo cookie are trademarks licensed by the Mondelez International Group.
※Coke is a registered trademark of The Coca-Cola Company.
※Availability of the McCafé® menu at McCafé by Barista stores is subject to McCafé by Barista counter business hours.
※McCafé® menu is not available for purchase at the drive-thru at some McCafé by Barista stores.
※Images are for illustrative purposes only.
※Coupons for shareholders are not redeemable for Shaka Shaka Potato® Buttered Potato Flavor.
...
Read the original on www.mcdonalds.co.jp »