10 interesting stories served every morning and every evening.
Imgur decided to block UK users. Honestly? I don’t really care that much. I haven’t actively browsed the site in years. But it used to be everywhere. Back when Reddit embedded everything on Imgur, maybe fifteen years ago, it was genuinely useful. Then Reddit built their own image hosting, Discord did the same, and Imgur slowly faded into the background.
Except it never fully disappeared. And since the block, I keep stumbling across Imgur links that just show “unavailable.” It’s mildly infuriating.
Here’s a concrete example. I was playing Minecraft with some work colleagues and wanted to try different shaders. Most shader pages embed preview images hosted on Imgur. So I’d click through shader after shader, and every single preview was just gone. I couldn’t see what any of them looked like without the images.
This kind of thing happens constantly now. Old forum posts, Reddit threads, documentation pages, random project READMEs. Imgur links are still scattered across the internet, and in the UK, they’re all broken.
The obvious solution is to use a VPN. Change your location, problem solved. But I have a few issues with that approach.
First, I just upgraded to 2.5 Gbps internet and I don’t want to route all my traffic through a VPN and take the speed hit. I have this bandwidth for a reason.
Second, even if I installed a VPN on my main machine, what about my phone? My laptop? My desktop? Every device would need the VPN running, and I’d have to remember to connect it before browsing. It’s messy.
I wanted something cleaner: a solution that works for every device on my network, automatically, without any client-side configuration.
I already run a homelab with Traefik as my reverse proxy, Pi-hole for DNS, and everything declaratively configured with NixOS. If you’ve read my previous post on Docker containers with secrets, you’ll recognise the pattern.
The idea was simple: intercept all requests to i.imgur.com at the DNS level, route them through a VPN-connected container, and serve the images back. Every device on my network automatically uses Pi-hole for DNS via DHCP, so this would be completely transparent.
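Concretely, the DNS-level intercept is just an override record in Pi-hole pointing i.imgur.com at the homelab host running Traefik. A minimal sketch of what that boils down to (the file name and IP are placeholders, not the actual config) is a dnsmasq directive, which is what a Pi-hole "Local DNS record" compiles to:

```
# /etc/dnsmasq.d/99-imgur.conf (hypothetical file name)
# Answer queries for i.imgur.com with the LAN address of the Traefik host
address=/i.imgur.com/192.168.1.10
```

Every DHCP client that uses Pi-hole as its resolver then transparently connects to Traefik instead of Imgur's real servers, with no per-device setup.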
The flow looks like this:

1. Traefik sees the SNI hostname and routes to Gluetun
2. Nginx (attached to Gluetun's network) proxies to the real Imgur
3. The image comes back through the tunnel to the device
Why Nginx? Gluetun isn't a reverse proxy. It's a container that provides VPN connectivity to other containers attached to its network namespace, so I needed something inside Gluetun's network to actually handle the proxying. Nginx was the simplest choice.
The Nginx config is minimal. It just does TCP passthrough with SNI:
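The original file isn't reproduced here, but a stream-module SNI passthrough looks roughly like this (a sketch, not the author's exact config):

```nginx
# Runs inside the Gluetun network namespace, so outbound traffic uses the VPN.
# stream (layer-4) proxying only: there is no http block, so TLS is never terminated.
stream {
    server {
        listen 443;
        ssl_preread on;                      # read the SNI name without decrypting
        # Resolve i.imgur.com with a public resolver through the tunnel;
        # using Pi-hole here would loop the request straight back to Traefik.
        resolver 1.1.1.1;
        proxy_pass $ssl_preread_server_name:443;
    }
}
```

Note the resolver choice: the one host that must not use Pi-hole for DNS is this proxy itself, or the intercept would recurse.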
This listens on port 443, reads the SNI header to confirm the destination, and passes the connection through to the real i.imgur.com. The TLS handshake happens end-to-end; Nginx never sees the decrypted traffic.
The compose file runs two containers. Gluetun handles the VPN connection, and Nginx attaches to Gluetun’s network:
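A sketch of that compose file (image tags, ports, and the provider variables are assumptions; the real credentials come from an env file):

```yaml
# Hypothetical sketch of the two-container stack
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=${VPN_PROVIDER}
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=${WIREGUARD_PRIVATE_KEY}
      - SERVER_COUNTRIES=Netherlands
      # Let LAN traffic reach Nginx despite Gluetun's kill-switch firewall
      - FIREWALL_INPUT_PORTS=443
    ports:
      - "8443:443"   # publish Nginx's listener via Gluetun's network stack
  nginx:
    image: nginx:alpine
    network_mode: "service:gluetun"   # share Gluetun's network namespace
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - gluetun
```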
The key detail is network_mode: "service:gluetun". This makes Nginx share Gluetun's network stack, so all its traffic automatically goes through the VPN tunnel.
I’m not going to mention which VPN provider I use. It’s one of the major ones with WireGuard support, but honestly I’m not thrilled with it. Use whatever you have.
The final piece is telling Traefik to route i.imgur.com traffic to the Gluetun container. This uses TCP routing with TLS passthrough:
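As a sketch, the dynamic-config entry might look like this (entrypoint name and backend address are assumptions; here the backend is wherever the Gluetun stack publishes Nginx's listener):

```yaml
# Hypothetical Traefik dynamic configuration: TCP router with TLS passthrough
tcp:
  routers:
    imgur:
      rule: "HostSNI(`i.imgur.com`)"
      entryPoints:
        - websecure
      service: imgur
      tls:
        passthrough: true   # do not terminate TLS; forward based on SNI alone
  services:
    imgur:
      loadBalancer:
        servers:
          - address: "127.0.0.1:8443"   # port published by the Gluetun container (assumed)
```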
The passthrough: true is important. It means Traefik doesn’t terminate TLS; it just inspects the SNI header and forwards the connection.
Following the same pattern from my Docker with secrets post, I created a systemd service that runs the compose stack with Agenix-managed secrets:
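The actual unit isn't reproduced in this post, but a NixOS sketch of the pattern might look like this (service name, paths, and secret names are all assumptions):

```nix
{ config, pkgs, ... }:
{
  # VPN credentials live encrypted in the repo; Agenix decrypts them at activation
  age.secrets.imgur-proxy-env.file = ../secrets/imgur-proxy-env.age;

  systemd.services.imgur-proxy = {
    description = "Gluetun + Nginx Imgur proxy stack";
    after = [ "network-online.target" "docker.service" ];
    requires = [ "docker.service" ];
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      Restart = "on-failure";
      WorkingDirectory = "/etc/imgur-proxy";
      # --env-file injects the decrypted WireGuard key into the compose stack
      ExecStart = "${pkgs.docker}/bin/docker compose --env-file ${config.age.secrets.imgur-proxy-env.path} up";
      ExecStop = "${pkgs.docker}/bin/docker compose down";
    };
  };
}
```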
The VPN credentials are stored encrypted with Agenix, so my entire dotfiles repo stays public while keeping secrets safe.
Now when any device on my network requests an Imgur image, it works. My phone, my laptop, guest devices, everything. No VPN apps to install, no browser extensions, no manual configuration. Pi-hole intercepts the DNS, Traefik routes the connection, and Gluetun tunnels it through a non-UK exit point.
The latency increase is negligible for loading images, and it only affects Imgur traffic. Everything else still goes direct at full speed.
Is this overkill for viewing the occasional Imgur image? Probably. But it’s a clean solution that requires minimal ongoing maintenance, and it scratches the homelab itch. Plus I can finally see what those Minecraft shaders look like.
...
Read the original on blog.tymscar.com »
Sentence Transformers provide local, easy-to-use embedding models for capturing the semantic meaning of sentences and paragraphs.
This HackerNews dataset contains vector embeddings generated with the all-MiniLM-L6-v2 model.
An example Python script is provided below to demonstrate how to programmatically generate embedding vectors using the sentence_transformers Python package. The search embedding vector is then passed as an argument to the [cosineDistance()](/sql-reference/functions/distance-functions#cosineDistance) function in the SELECT query.

```python
from sentence_transformers import SentenceTransformer
import sys
import clickhouse_connect

print("Initializing...")

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
chclient = clickhouse_connect.get_client()  # ClickHouse credentials here

while True:
    # Take the search query from the user
    print("Enter a search query :")
    input_query = sys.stdin.readline()
    texts = [input_query]

    # Run the model and obtain the search vector
    print("Generating the embedding for ", input_query)
    embeddings = model.encode(texts)

    print("Querying ClickHouse...")
    params = {'v1': list(embeddings[0]), 'v2': 20}
    result = chclient.query(
        "SELECT id, title, text FROM hackernews "
        "ORDER BY cosineDistance(vector, %(v1)s) LIMIT %(v2)s",
        parameters=params)

    print("Results :")
    for row in result.result_rows:
        print(row[0], row[2][:100])
        print("-----")
```
An example of running the above Python script and its similarity-search results is shown below (only 100 characters from each of the top 20 posts are printed):

```
Initializing...
Enter a search query :
Are OLAP cubes useful
Generating the embedding for "Are OLAP cubes useful"
Querying ClickHouse...
Results :
27742647 smartmic: slt2021: OLAP Cube is not dead, as long as you use some form of:1. GROUP BY multiple fi
-----
27744260 georgewfraser: A data mart is a logical organization of data to help humans understand the schema. Wh
-----
27761434 mwexler: "We model data according to rigorous frameworks like Kimball or Inmon because we must r
-----
28401230 chotmat: erosenbe0: OLAP database is just a copy, replica, or archive of data with a schema designe
-----
22198879 Merick: +1 for Apache Kylin, it's a great project and awesome open source community. If anyone i
-----
27741776 crazydoggers: I always felt the value of an OLAP cube was uncovering questions you may not know to as
-----
22189480 shadowsun7: _Codemonkeyism: After maintaining an OLAP cube system for some years, I'm not that
-----
27742029 smartmic: gengstrand: My first exposure to OLAP was on a team developing a front end to Essbase that
-----
22364133 irfansharif: simo7: I'm wondering how this technology could work for OLAP cubes. An OLAP cube
-----
23292746 scoresmoke: When I was developing my pet project for Web analytics (
```
The example above demonstrated semantic search and document retrieval using ClickHouse.
A very simple but high potential generative AI example application is presented next.
The application performs the following steps:
Accepts a topic as input from the user
Generates an embedding vector for the topic by using the SentenceTransformers with model all-MiniLM-L6-v2
Retrieves highly relevant posts/comments using vector similarity search on the hackernews table
Uses LangChain and OpenAI gpt-3.5-turbo Chat API to summarize the content retrieved in step #3.
The posts/comments retrieved in step #3 are passed as context to the Chat API and are the key link in Generative AI.
An example from running the summarization application is first listed below, followed by the code for the summarization application. Running the application requires an OpenAI API key to be set in the environment variable OPENAI_API_KEY. The OpenAI API key can be obtained after registering at https://platform.openai.com.
This application demonstrates a Generative AI use-case that is applicable to multiple enterprise domains, such as customer sentiment analysis, technical support automation, mining user conversations, legal documents, medical records, meeting transcripts, financial statements, etc.

$ python3 summarize.py
Enter a search topic :
ClickHouse performance experiences
Generating the embedding for ––> ClickHouse performance experiences
Querying ClickHouse to retrieve relevant articles…
Initializing chatgpt-3.5-turbo model…
Summarizing search results retrieved from ClickHouse…
Summary from chatgpt-3.5:
The discussion focuses on comparing ClickHouse with various databases like TimescaleDB, Apache Spark,
AWS Redshift, and QuestDB, highlighting ClickHouse’s cost-efficient high performance and suitability
for analytical applications. Users praise ClickHouse for its simplicity, speed, and resource efficiency
in handling large-scale analytics workloads, although some challenges like DMLs and difficulty in backups
are mentioned. ClickHouse is recognized for its real-time aggregate computation capabilities and solid
engineering, with comparisons made to other databases like Druid and MemSQL. Overall, ClickHouse is seen
as a powerful tool for real-time data processing, analytics, and handling large volumes of data
efficiently, gaining popularity for its impressive performance and cost-effectiveness.
Code for the above application:

```python
print("Initializing...")

import sys
import json
import time
from sentence_transformers import SentenceTransformer
import clickhouse_connect
from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.summarize import load_summarize_chain
import textwrap
import tiktoken

def num_tokens_from_string(string: str, encoding_name: str) -> int:
    encoding = tiktoken.encoding_for_model(encoding_name)
    num_tokens = len(encoding.encode(string))
    return num_tokens

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
chclient = clickhouse_connect.get_client(compress=False)  # ClickHouse credentials here

while True:
    # Take the search topic from the user
    print("Enter a search topic :")
    input_query = sys.stdin.readline()
    texts = [input_query]

    # Run the model and obtain the search or reference vector
    print("Generating the embedding for --> ", input_query)
    embeddings = model.encode(texts)

    print("Querying ClickHouse...")
    params = {'v1': list(embeddings[0]), 'v2': 100}
    result = chclient.query(
        "SELECT id, title, text FROM hackernews "
        "ORDER BY cosineDistance(vector, %(v1)s) LIMIT %(v2)s",
        parameters=params)

    # Just join all the search results into one context document
    doc_results = ""
    for row in result.result_rows:
        doc_results = doc_results + "\n" + row[2]

    print("Initializing chatgpt-3.5-turbo model")
    model_name = "gpt-3.5-turbo"
    text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
        model_name=model_name
    )
    texts = text_splitter.split_text(doc_results)
    docs = [Document(page_content=t) for t in texts]
    llm = ChatOpenAI(temperature=0, model_name=model_name)
    prompt_template = """
    Write a concise summary of the following in not more than 10 sentences:
```
...
Read the original on clickhouse.com »
Toulouse, France, 28 November 2025 — Analysis of a recent event involving an A320 Family aircraft has revealed that intense solar radiation may corrupt data critical to the functioning of flight controls.
Airbus has consequently identified a significant number of A320 Family aircraft currently in-service which may be impacted.
Airbus has worked proactively with the aviation authorities to request immediate precautionary action from operators via an Alert Operators Transmission (AOT) in order to implement the available software and/or hardware protection, and ensure the fleet is safe to fly. This AOT will be reflected in an Emergency Airworthiness Directive from the European Union Aviation Safety Agency (EASA).
Airbus acknowledges these recommendations will lead to operational disruptions to passengers and customers. We apologise for the inconvenience caused and will work closely with operators, while keeping safety as our number one and overriding priority.
...
Read the original on www.airbus.com »
I don’t remember when I first started noticing that people I knew out in the world had lost their sense of erotic privacy, but I do remember the day it struck me as a phenomenon that had escaped my timeline and entered my real, fleshy life. It was last year, when I was having a conversation with a friend of mine, who, for the record, is five years younger than me (I’m 31). I told my friend about an erotic encounter I’d just experienced and very much delighted in, in which I had my hair brushed at the same time by two very beautiful women at the hair salon — one was teaching the other how to do it a certain way. When I finished my story, my friend looked at me, horrified.
“They had no idea you felt something sexual about them,” she said. “What if they found out? Lowkey, I hate to say this but: you took advantage of them.” I was shocked. I tried to explain — and it felt extremely absurd to explain — that this had happened in my body and in my thoughts, which were private to me and which nobody had the right to know about. But they did have the right, my friend argued. She demanded that I apologize to the women for sexualizing them. Offended at having been accused — in my view, in extremely bad faith — of being some kind of peep-show creep, I tried to argue that I’d simply responded in a physical way to an unexpected, direct, and involuntary stimulus. Back and forth, back and forth, we fought like this for a while. In fact, it ended the friendship.
There were other conversations, too, that suggested to me that conceptions of love and sex have changed fundamentally among people I know. Too many of my friends and acquaintances — of varying degrees of “onlineness,” from veteran discourse observers to casual browsers — seem to have internalized the internet’s tendency to reach for the least charitable interpretation of every glancing thought and, as a result, to have pathologized what I would characterize as the normal, internal vagaries of desire.
Hence, there was the friend who justified her predilection for being praised in bed as a “kink” inherited through the “trauma” of her father always harping on her because of her grades. There was the friend who felt entitled to posting screenshots of intimate conversations on Twitter after a messy breakup so that she could get a ruling on “who was the crazy one.” Then there was the friend who bitterly described a man he was dating as a “fuckboy” because he stood him up, claiming that their having enjoyed sex together beforehand was “emotionally manipulative.” When I dug a bit deeper, it turned out the man in question had just gotten out of a seven-year relationship and realized he wasn’t ready to be sexually intimate, and while he was rude to stand my friend up, it shocked me how quick my friend was to categorize his rightfully hurt feelings as something pathological or sinister in the other person, and that he did this in order to preemptively shield himself from being cast as the villain in what was a multi-party experience. This last friend I asked: “Who are you defending yourself against?” To which he answered, to my astonishment: “I don’t know. The world.”
I choose these examples from my personal life because they express sentiments that were once the kind of stuff I encountered only in the messy battlegrounds of Twitter, amid discussions about whether Sabrina Carpenter is being oversexualized, whether kinks are akin to a sexual orientation, whether a woman can truly consent in an age-gap relationship, and whether exposure to sex scenes in movies violates viewer consent. It is quite easy to dismiss these “discourse wars” as a “puritanism” afflicting the young, a reactionary current to be solved with a different, corrective discourse of pro-sex liberation, distributed via those same channels. If only it were so! To me, the reality goes deeper and is bleaker.
The fact is that our most intimate interactions with others are now governed by the expectation of surveillance and punishment from an online public. One can never be sure that this public or someone who could potentially expose us to it isn’t there, always secretly filming, posting, taking notes, ready to pounce the second one does something cringe or problematic (as defined by whom?). To claim that these matters are merely discursive in nature is to ignore the problem. Because love and sex are so intimate and vulnerable, the stakes of punishment are higher, and the fear of it penetrates deeper into the psyche and is harder to rationalize away than, say, fear of pushback from tweeting a divisive political opinion.
I should state at this point that this is not an essay about “cancel culture going too far,” a topic which can now be historicized as little more than a rhetorical cudgel wielded successfully by the right to wrest cultural power back from an ascendant progressive liberalism. This was especially true after the prominence of organized campaigns such as #MeToo. #MeToo was smeared by liberals and conservatives alike (united, as they always are, in misogyny) as being inherently punitive in nature, meant to punish men who’d fallen into a rough patch of bad behavior, or who, perhaps, might not have done anything at all (the falsely accused or the misinterpreted man became the real victim, in this view). #MeToo did make use of the call-out — the story shared in a spreadsheet anonymously or in a signed op-ed — but the call-outs had a purpose: to end a long-standing and long-permitted norm of sexual abuse within institutions. Underlying this was a discursive practice and a form of solidarity building in which people believed that sharing their stories of trauma en masse could bring about structural change. As someone who participated myself, I too believed in this theory and saw it as necessary, cathartic, and political, and far from vigilante justice.
But the pushback against #MeToo reveals a certain peril to storytelling as politics, not only in the retraumatization evident in the practice of revealing one’s most intimate harms before an infinite online audience, which could always include those listening in bad faith. But also, a discursive market opened up in which trauma became a kind of currency of authenticity, resulting in a doubled exploitation. This idea, while not very nice, lingers in the use of harm as an authoritative form of rhetorical defense. The problem here is not what is said, but how it is used. A friction has since emerged between an awareness of weaponization of harm and emotion and the continued need to express oneself as vulnerably as possible in order to come off as sincere. This friction is unresolved.
The organized goals of the #MeToo movement are missing from the new puritanism. I think that the prudish revulsion I’ve seen online and in my own life has as much to do with surveillance as with sex. Punishing strangers for their perceived perversion is a form of compensation for a process that is already completed: the erosion of erotic and emotional privacy through internet-driven surveillance practices, practices we have since turned inward on ourselves. In short, we have become our own panopticons.
On the rightmost side of the spectrum, punitive anti-erotic surveillance is very explicit and very real, especially for women. The Andrew Tates of the world and the practitioners of extreme forms of misogyny have no problem with using internet tools and social media websites for mass shaming and explicit harm. Covert filming of sex acts, AI deep fakes, extortion, and revenge porn are all realities one has to contend with when thinking about hooking up or going to public places such as nightclubs and gay bars. This is blackmail at its most explicit and extreme, meant to further solidify a link between sex and fear.
But that link between sex and fear is operating in more “benign” or common modes of internet practice. There is an online culture that thinks nothing of submitting screenshots, notes, videos, and photos with calls for collective judgement. When it became desirable and permissible to transform our own lives into content, it didn’t take long before a sense of entitlement emerged that extended that transformation to people we know and to strangers. My ex sent me this text, clearly she is the crazy one, right? Look at this dumb/funny/cringe Hinge profile! Look at this note some guy sent me, is this a red flag? Look at this random woman I photographed buying wine, coconut oil, and a long cucumber at the supermarket!
I think these kinds of posts sometimes amount to little more than common bullying, but they are on a continuum with a puritan discourse in which intimate questions, practices, and beliefs about queerness, sexuality, gender presentation, and desire are also subjected to days-long pile-ons. In both instances, the instinct to submit online strangers to viral discipline is given a faux-radical sheen. It’s a kind of casual blackmail that warns everyone to conform or be exposed; a way of saying if you don’t cave to my point of view, redefine yourself in my image of what sexuality is or should be, and (most importantly) apologize to me and the public, I will subject you to my large following and there will be hell to pay. Such unproductive and antisocial behavior is justified as a step toward liberation from predation, misogyny, or any number of other harms. But the punitive mindset we’ve developed towards relationships is indicative of an inability to imagine a future of gendered or sexual relations without subjugation. To couch that in the language of harm reduction and trauma delegitimizes both.
There are other ways the politics of surveillance have become a kind of funhouse mirror. It is seen as more and more normal to track one’s partner through Find My iPhone or an AirTag, even though the potential for abuse of this technology is staggering and obvious. There are all kinds of new products, such as a biometric ring that is allegedly able to tell you whether your partner is cheating, that expand this capability into more and more granular settings. That’s all before we get into the endless TikToks about “why I go through my partner’s text messages.” That men use these tactics and tools to control women is a known threat. What is astonishing is the lengths to which some women will go to use these same technologies, claiming that they are necessary to prevent harm — especially that caused by cheating, which is now seen as some kind of lifelong trauma or permanently damnable offense instead of one of the rather quotidian, if very painful, ways we hurt one another. Each of these surveillance practices operates from a feeling of entitlement and control over other people, their bodies, and what they do.
Pundits like to decree sexlessness as a Gen-Z problem, to argue no one is fucking because they are too on their phones. However, it is always too easy to blame the young. It was my generation that failed to instill the social norms necessary to prevent a situation where fear of strangers on the internet has successfully replaced the disciplinary apparatus more commonly held by religious or conservative doctrine. Even when, as in my experience in the salon, I am acting in the privacy of my own body, someone is always there watching, ready to interpret my actions, problematize them so as to share in the same sense of magical thinking, the same insecurities, and to be punished for not being insecure in the same way.
It’s only in retrospect that I’m able to realize the toll that constant, nagging interaction with my devices and the internet has taken on my thinking life and my sex life. I remember very viscerally when I’d just come out of the closet as bisexual in 2016. When I embarked on a journey to find the kind of lover I wanted to be, my only experience with the world of queerness was online through memes, articles, and others’ social media presentation of themselves and of politics. Queer sex was not something that could be discovered through sensation, through physical interaction, but was rather a catalog of specific acts and roles one was already expected to know. I was terrified of making some kind of mistake, of being the wrong kind of bisexual, of misrepresenting myself in an offensive way (could I use the term “soft butch” if I wasn’t a lesbian?), of being exposed somehow as a fraud. When the time came for me to have sex for the first time, what should have been a joyous occasion was instead burdened with a sense of being watched. I could not let the natural processes of erotic discovery take their course, so caught up was I in judging myself from the perspective of strangers to whom I owed nothing.
But it wasn’t just a matter of queerness, either. When I hooked up with men, I could only perceive of sex the same way, not as situational but as a set of prescribed acts and scenes, many of which I wanted to explore. However, this time I interrogated these urges as being sociogenic in nature and somehow harmful to me, when they were, in fact, private, and I did not, in reality, feel harmed. Because I wanted, at one point in my life, to be tied up and gagged, the disempowering nature of such a want necessitated trying to justify it against invisible accusations with some kind of traumatogenic and immutable quality. Maybe it was because I was raped in college. Maybe I was just inherently submissive. One of the great ironies in the history of sex is that pathologization used to be a way of controlling sexual desire. (All are familiar with the many myths that masturbation would turn one blind.) Now it is a way of exempting oneself, of relinquishing control of one’s actions so as to absolve them of scrutiny. My little bondage moment couldn’t be problematic if it couldn’t be helped. It couldn’t be subjected to interrogation if there was something I could point to to say “it’s beyond my control, don’t judge me!” One day, however, I came to an important revelation: The reality was much simpler. It was a passing phase, a desire that originated with a specific man and lost its charm after I moved on from him. There wasn’t some deterministic quality in myself that made me like this. My desire was not fixed in nature. My sexual qualities were transient and not inborn. What aroused me was wonderfully, entirely situational.
A situational eroticism is what is needed now, in our literalist times. It’s exhausting, how everything is so readily defined by types, acts, traumas, kinks, fetishes, pathology, and aesthetics. To me, our predilection for determinism is an expected psychological response to excessive surveillance. A situational eroticism decouples sensation from narrative and typology. It allows us to feel without excuse and to relate our feelings to our immediate embodied moment, grounded in a fundamental sense of personal privacy. While it is admirable to try and understand ourselves and important to protect ourselves from harm and investigate critically the ways in which what we want may put us at risk of that harm — or at risk of doing harm to others — sometimes desires just are, and they are not that way for long. Arousal is a matter of the self, which takes place within the body, a space no one can see into. It is often a mystery, a surprise, a discovery. It can happen at a small scale, say, the frisson of two sets of fingers in one’s hair at once. It is beautiful, unplanned and does not judge itself because it is an inert sensation, unimbued with premeditated meaning. This should liberate rather than frighten us. Maybe what it means doesn’t matter. Maybe we don’t have to justify it even to ourselves.
But in order to facilitate a return to situational eroticism, we need to kill the panopticon in our heads. That means first killing the panopticon we’ve built for others. There is no purpose in vindictive or thoughtless exposure. Not everything needs to be subjected to public opinion, not every anecdote is worth sharing, not every debate needs engagement, especially those debates which have no material basis to them, no ask, no funnel for all that energy. We need to stop confusing vigilantism with justice and posting with politics. That does not mean we stop the work that #MeToo started, but that revenge is a weapon best utilized collectively against the enemies of liberation. We need to protect the vulnerable from exploitative technologies and practices, repeatedly denounce their use, and work towards a world without sexual coercion, digital or otherwise.
On an individual level, we need to abandon or reshape our relationships with our phones and regain a sense of our own personal and mental privacy. It’s a matter of existential, metaphysical importance. Only when this decoupling from ourselves and the mediated performance of ourselves is complete, can we begin the process of returning to our own bodies out there, in the world, with no one watching or reading our thoughts except those we want to. The truth is, we are very afraid not of sex, but of exposure. Only when we are unafraid can we begin to let desire flourish. Only when we return to ourselves can we really know what we want.
Kate Wagner is the architecture critic at The Nation. Her award-winning cultural writing has been featured in magazines ranging from The Baffler to the New Republic.
...
Read the original on lux-magazine.com »
Molly is an independent Signal fork for Android with improved features:
Extra theme that follows your device palette
When you are gone for a set period of time
New and better features to come
...
Read the original on molly.im »
OpenAI is now internally testing ‘ads’ inside ChatGPT that could redefine the web economy.
Up until now, the ChatGPT experience has been completely ad-free.
While there are premium plans and models, ChatGPT doesn’t sell you products or show you ads. Google Search, on the other hand, has ads that influence your buying behaviour.
As spotted by Tibor on X, ChatGPT Android app 1.2025.329 beta includes new references to an “ads feature” with “bazaar content”, “search ad” and “search ads carousel.”
This move could disrupt the web economy, as what most people don’t understand is that GPT likely knows more about users than Google.
For example, OpenAI could create personalised ads on ChatGPT that promote products that you really want to buy. It might also sneak ads into search results, similar to Google Search ads.
The leak suggests that ads will initially be limited to the search experience only, but this may change in the future.
ChatGPT has roughly 800 million people using it every week, up from 100 million weekly users in November 2023 and about 300 million weekly users in late 2024.
An OpenAI-backed study estimated 700 million users sending 18 billion messages per week by July 2025, which lines up with this growth, and other analysts now peg traffic at around 5–6 billion visits per month.
GPT handles about 2.5 billion prompts a day, and India has become the single biggest user base, ahead of the US.
ChatGPT has everything it needs for ads to succeed. What do you think?
...
Read the original on www.bleepingcomputer.com »
Every couple of years somebody notices that large tech companies sometimes produce surprisingly sloppy code. If you haven’t worked at a big company, it might be hard to understand how this happens. Big tech companies pay well enough to attract many competent engineers. They move slowly enough that it looks like they’re able to take their time and do solid work. How does bad code happen?
I think the main reason is that big companies are full of engineers working outside their area of expertise. The average big tech employee stays for only a year or two. In fact, big tech compensation packages are typically designed to put a four-year cap on engineer tenure: after four years, the initial share grant is fully vested, causing engineers to take what can be a 50% pay cut. Companies do extend temporary yearly refreshers, but this obviously incentivizes engineers to go find another job where they don’t have to wonder each year whether they’re going to get the other half of their compensation.
If you count internal mobility, it’s even worse. The longest I have ever stayed on a single team or codebase was three years, near the start of my career. I expect to be re-orged at least every year, and often much more frequently.
However, the average tenure of a codebase in a big tech company is a lot longer than that. Many of the services I work on are a decade old or more, and have had many, many different owners over the years. That means many big tech engineers are constantly “figuring it out”. A pretty high percentage of code changes are made by “beginners”: people who have onboarded to the company, the codebase, or even the programming language in the past six months.
To some extent, this problem is mitigated by “old hands”: engineers who happen to have been in the orbit of a particular system for long enough to develop real expertise. These engineers can give deep code reviews and reliably catch obvious problems. But relying on “old hands” has two problems.
First, this process is entirely informal. Big tech companies make surprisingly little effort to develop long-term expertise in individual systems, and once they’ve got it they seem to barely care at all about retaining it. Often the engineers in question are moved to different services, and have to either keep up their “old hand” duties on an effectively volunteer basis, or abandon them and become a relative beginner on a brand new system.
Second, experienced engineers are always overloaded. It is a busy job being one of the few engineers who has deep expertise on a particular service. You don’t have enough time to personally review every software change, or to be actively involved in every decision-making process. Remember that you also have your own work to do: if you spend all your time reviewing changes and being involved in discussions, you’ll likely be punished by the company for not having enough individual output.
Putting all this together, what does the median productive engineer at a big tech company look like? They are usually:
* competent enough to pass the hiring bar and be able to do the work, but either
* working on a codebase or language that is largely new to them, or
* trying to stay on top of a flood of code changes while also juggling their own work.
They are almost certainly working to a deadline, or to a series of overlapping deadlines for different projects. In other words, they are trying to do their best in an environment that is not set up to produce quality code.
That’s how “obviously” bad code happens. For instance, a junior engineer picks up a ticket for an annoying bug in a codebase they’re barely familiar with. They spend a few days figuring it out and come up with a hacky solution. One of the more senior “old hands” (if they’re lucky) glances over it in a spare half-hour, vetoes it, and suggests something slightly better that would at least work. The junior engineer implements that as best they can and tests that it works, it gets briefly reviewed and shipped, and everyone involved immediately moves on to higher-priority work. Five years later somebody notices this and thinks “wow, that’s hacky - how did such bad code get written at such a big software company?”
I have written a lot about the internal tech company dynamics that contribute to this. Most directly, in Seeing like a software company I argue that big tech companies consistently prioritize internal legibility - the ability to see at a glance who’s working on what and to change it at will - over productivity. Big companies know that treating engineers as fungible and moving them around destroys their ability to develop long-term expertise in a single codebase. That’s a deliberate tradeoff. They’re giving up some amount of expertise and software quality in order to gain the ability to rapidly deploy skilled engineers onto whatever the problem-of-the-month is.
I don’t know if this is a good idea or a bad idea. It certainly seems to be working for the big tech companies, particularly now that “how fast can you pivot to something AI-related” is so important. But if you’re doing this, then of course you’re going to produce some genuinely bad code. That’s what happens when you ask engineers to rush out work on systems they’re unfamiliar with.
Individual engineers are entirely powerless to alter this dynamic. This is particularly true in 2025, when the balance of power has tilted away from engineers and towards tech company leadership. The most you can do as an individual engineer is to try and become an “old hand”: to develop expertise in at least one area, and to use it to block the worst changes and steer people towards at least minimally-sensible technical decisions. But even that is often swimming against the current of the organization, and if inexpertly done can cause you to get PIP-ed or worse.
I think a lot of this comes down to the distinction between pure and impure software engineering. To pure engineers - engineers working on self-contained technical projects, like a programming language - the only explanation for bad code is incompetence. But impure engineers operate more like plumbers or electricians. They’re working to deadlines on projects that are relatively new to them, and even if their technical fundamentals are impeccable, there’s always something about the particular setup of this situation that’s awkward or surprising. To impure engineers, bad code is inevitable. As long as the overall system works well enough, the project is a success.
At big tech companies, engineers don’t get to decide if they’re working on pure or impure engineering work. It’s not their codebase! If the company wants to move you from working on database infrastructure to building the new payments system, they’re fully entitled to do that. The fact that you might make some mistakes in an unfamiliar system - or that your old colleagues on the database infra team might suffer without your expertise - is a deliberate tradeoff being made by the company, not the engineer.
It’s fine to point out examples of bad code at big companies. If nothing else, it can be an effective way to get those specific examples fixed, since execs usually jump at the chance to turn bad PR into good PR. But I think it’s a mistake to attribute primary responsibility to the engineers at those companies. If you could wave a magic wand and make every engineer twice as strong, you would still have bad code, because almost nobody can come into a brand new codebase and quickly make changes with zero mistakes. The root cause is that most big company engineers are forced to do most of their work in unfamiliar codebases.
edit: this post got lots of comments on both Hacker News and lobste.rs.
It was surprising to me that many commenters found this point of view unpleasantly nihilistic. I consider myself fairly optimistic about my work. In fact, I meant this post as a rousing defence of big tech software engineers from their critics! Still, I found this response blog post to be an excellent articulation of the “this is too cynical” position, and will likely write a followup post about it soon. If you can’t wait, I wrote a bit on this topic at the start of 2025 in Is it cynical to do what your manager wants?.
Some Hacker News commenters had alternate theories for why bad code happens: lack of motivation, deliberately demoralizing engineers so they won’t unionize, or just purely optimizing for speed. I don’t find these compelling, based on my own experience. Many of my colleagues are highly motivated, and I just don’t believe any tech company is deliberately trying to make its engineers demoralized and unhappy.
A few readers disagreed with me about RSUs providing an incentive to leave, because their companies give stock refreshers. I’m not convinced. I get refreshers too, but if they’re not in the contract, I don’t think it matters - the company can decide at-will not to give you 50% of your comp by simply pausing the refreshers, which is itself an incentive to move jobs, since a new grant is locked in for four more years.
...
Read the original on www.seangoedecke.com »
When we launched Skald, we wanted it to not only be self-hostable, but also for one to be able to run it without sending any data to third-parties.
With LLMs getting better and better, privacy-sensitive organizations shouldn’t have to choose between being left behind by forgoing frontier models and abandoning their commitment to (or legal requirement for) data privacy.
So here’s what we did to support this use case and also some benchmarks comparing performance when using proprietary APIs vs self-hosted open-source tech.
A basic RAG usually has the following core components: an embedding model, a vector database, and an LLM.
And most times it also has these as well: a reranker and a document parser.
What that means is that when you’re looking to build a fully local RAG setup, you’ll need to replace the SaaS provider you’re using for each of those components with a local option.
Here’s a table with some examples of what we might use in a scenario where we can use third-party Cloud services and one where we can’t:
Do note that running something locally does not mean it needs to be open-source, as one could pay for a license to self-host proprietary software. But at Skald our goal was to use fully open-source tech, which is what I’ll be covering here.
The table above is far from covering every available option in either column, but it gives you an indication of what to research in order to pick a tool that works for you.
As with anything, what works for you will greatly depend on your use case. And you need to be prepared to run a few more services than you’re used to if you’ve just been calling APIs.
For our local stack, we went with the easiest setup that got things working for now (and it does work! see the writeup lower down), but we will be running benchmarks on all the other options to determine the best possible setup.
This is what we have today:
Vector DB: Postgres + pgvector. We already use Postgres and didn’t want to bundle another service into our stack, but this is controversial and we will be running benchmarks to make a better informed decision here. Note that pgvector will serve a lot of use cases well all the way up to hundreds of thousands of documents, though.
Vector embeddings: Users can configure this in Skald and we use Sentence Transformers (all-MiniLM-L6-v2) as our default (solid all-around performer for speed and retrieval, English-only). I also ran Skald with bge-m3 (larger, multi-language) and share the results later in this post.
LLM: We don’t even bundle a default with Skald and it’s up to the users to run and manage this. I tested our setup with GPT-OSS 20B on EC2 (results shown below).
Reranker: Users can also configure this in Skald, and the default is the Sentence Transformers cross encoder (solid, English-only). I’ve also used bge-reranker-v2-m3 and mmarco-mMiniLMv2-L12-H384-v1 which offer multi-lingual support.
Document parsing: There isn’t much of a question on this one. We’re using Docling. It’s great. We run it via docling-serve.
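To make the moving parts concrete, here is a minimal sketch of the retrieve-then-rerank flow at query time. The toy three-dimensional vectors and document names are illustrative stand-ins: in the real stack, embeddings would come from all-MiniLM-L6-v2 (384 dimensions), stage one would be a pgvector similarity query, and stage two would be a cross-encoder scoring (query, document) pairs.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "document store" (doc_id -> embedding); real embeddings would be
# 384-dimensional all-MiniLM-L6-v2 vectors stored in a pgvector column.
docs = {
    "funding": [0.9, 0.1, 0.0],
    "replay":  [0.1, 0.9, 0.0],
    "pricing": [0.5, 0.5, 0.1],
}

def retrieve(query_vec, top_k):
    # Stage 1: nearest-neighbour vector search (pgvector's job at scale).
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:top_k]

def rerank(query_vec, candidates, top_k):
    # Stage 2: a cross-encoder would jointly score each (query, doc) pair;
    # cosine is reused here purely as a stand-in scoring function.
    ranked = sorted(candidates, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:top_k]

query = [1.0, 0.0, 0.0]
candidates = retrieve(query, top_k=2)            # analogous to the vector-search topK
answer_ctx = rerank(query, candidates, top_k=1)  # analogous to the post-reranking topK
print(answer_ctx)  # → ['funding']
```

The two topK parameters mirror the 100/50 settings used in the benchmarks below: the first stage casts a wide net, and the reranker is what rescues precision afterwards.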
So the main goal here was first to get something working then ensure it worked well with our platform and could be easily deployed. From here we’ll be running extensive benchmarks and working with our clients to provide a solid setup that both performs well but is also not a nightmare to deploy and manage.
From that perspective, this was a great success.
Deploying a production instance of Skald with this whole stack took me 8 minutes, and that comes bundled with the vector database (well, Postgres), a reranking and embedding service, and Docling.
The only thing I needed to run separately was the LLM, which I did via llama.cpp.
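Because llama-server exposes an OpenAI-compatible HTTP API, the rest of the stack can talk to a local GPT-OSS 20B the same way it would talk to a hosted model. Here is a rough sketch of the request-assembly step; the endpoint URL, model name, and prompt wording are my own illustrative assumptions, not Skald’s actual code:

```python
# llama.cpp's llama-server speaks the OpenAI chat-completions protocol, so a
# plain HTTP POST is all the "LLM integration" a local RAG needs. The URL and
# model name below are assumptions for a default local setup.
LLAMA_SERVER = "http://localhost:8080/v1/chat/completions"

def build_rag_request(question, context_chunks, model="gpt-oss-20b"):
    """Assemble a chat-completion payload with retrieved chunks as context."""
    context = "\n\n".join(context_chunks)
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer using only the provided context.\n\n" + context},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # keep answers anchored to the retrieved context
    }

payload = build_rag_request(
    "How many funding rounds has PostHog raised?",
    ["PostHog has raised 7 funding rounds to date."],
)
# POSTing `payload` as JSON to LLAMA_SERVER would return the grounded answer.
```

Swapping the local endpoint for a hosted provider is then just a change of URL and model name, which is what makes benchmarking the two setups side by side straightforward.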
Having gotten this sorted, I imported all the content from the PostHog website [1] and set up a tiny dataset [2] of questions and expected answers inside of Skald, then used our Experiments feature to run the RAG over this dataset.
I explicitly kept the topK values really high (100 for the vector search and 50 for post-reranking), as I was mostly testing for accuracy and wanted to see the performance when questions required e.g. aggregating context over 15+ documents.
So without any further delay, here are the results of my not-at-all-scientific benchmark using the experimentation platform inside Skald.
This is our default Cloud setup. We use voyage-3-large and rerank-2.5 from Voyage AI as our embedding and reranking models respectively, and we default to Claude Sonnet 3.7 for responses (users can configure the model though).
Our LLM-as-a-Judge gave an average score of 9.45 to the responses, and I basically agree with the assessment. All answers were correct, with one missing a few extra bits of context.
With the control experiment done, I then moved on to a setup where I kept Voyage as the embeddings provider and reranker, and then used GPT-OSS 20B running on a llama.cpp server on a g5.2xlarge EC2 instance as the LLM.
The goal here was to see how well the open-source LLM model itself stacked up against a frontier model accessed via API.
And it did great!
We don’t yet support LLM-as-a-Judge on fully local deployments, so the only score we have here is mine. I scored the answers an average of 9.18 and they were all correct, with two of them just missing a few bits of information or highlighting less relevant information from the context.
Lastly, it was time for the moment of truth: running a fully local setup.
For this I ran two tests:
The most popular open-source models are all-MiniLM-L6-v2 for embeddings and ms-marco-MiniLM-L6-v2 as the reranker, so I used those for my first benchmark.
Here the average score was 7.10. Not bad, but definitely not great. However, when we dig into the results, we can get a better understanding of how this setup fails.
Basically, it got all point queries right: questions where the answer is somewhere in the mess of documents but can be found in one specific place.
Where it failed was:
* Non-English query: The embeddings model and the reranker are English-based, so my question in Portuguese obviously got no answer
* An ambiguous question with very little context (“what’s ch”)
* Aggregating information from multiple documents/chunks e.g. it only found 5 out of PostHog’s 7 funding rounds, and only a subset of the PostHog competitors that offer session replay (as mentioned in the source data)
In my view, this is good news. That means that the default options will go a long way and should give you very good performance if your use case is only doing point queries in English. The other great thing is that these models are also fast.
Now, if you need to handle ambiguity better, or handle questions in other languages, then this setup is simply not for you.
The next test I did used bge-m3 as the embeddings model and mmarco-mMiniLMv2-L12-H384-v1 as the reranker. The embeddings model is supposedly much better than the one used in the previous test and is also multi-lingual. The reranker on the other hand uses the same cross-encoder from the previous test as the base model but also adds multi-lingual support. The more standard option here would have been the much more popular bge-reranker-v2-m3 model, but I found it to be much slower. I intend to tweak my setup and test it again, however.
Anyway, onto the results! I scored it 8.63 on average, which is very good. There were no complete failures, and it handled the question in Portuguese well.
The mistakes it made were:
* This new setup also did not do the best job at aggregating information, missing 2 of PostHog’s funding rounds, and a couple of its session replay competitors
* It also answered a question correctly, but added incorrect additional context after it
So overall it performed quite well. Again, the main problem we saw was that the context needed for the response is scattered across multiple documents. There are various techniques to help with this and we’ll be trialling some soon! They haven’t been needed on the Cloud version, because better models save you from having to add complexity for minimal performance gains, but as we’re focused on building a really solid setup for local deploys, we’ll be looking into this more and more.
I hope this writeup has given you at least some insight into building a local RAG: it does work, it can serve a lot of use cases, and it should only get better as a) models improve and b) more open-source models become available across the board, both of which we seem to be trending towards.
As for us at Skald, we intend to polish this setup further in order to serve even more use cases really well, and we plan to publish more rigorous benchmarks of models across the open-source space soon, from LLMs to rerankers.
If you’re a company that needs to run AI tooling in air-gapped infrastructure, let’s chat — feel free to email me at yakko [at] useskald [dot] com.
Lastly, if you want to get involved, feel free to chat to us over on our GitHub repo (MIT-licensed) or catch us on Slack.
[1] I used the PostHog website here because the content is MIT-licensed (yes, wild) and readily available as markdown on GitHub, and, having worked there, I know a lot of the answers off the top of my head, making it a great dataset of ~2000 documents that I know well.
[2] The questions and answers dataset I used for the experiments was the following:
...
Read the original on blog.yakkomajuri.com »
(And Mac OS 8!)
Hey, guys!
Surely y’all know and have enjoyed Mac OS 9.2.2 booting and running beautifully on all four Mac mini G4 models for close to 8 years now. (Wow!)
Well, that was one massive revolution…
… But most of us did not think we would live to see the day when New World ROM machines, let alone the likes of the Mac mini G4, would NATIVELY boot System 7:
(Gotta love it trying to display 1 GB RAM capacity.)
Before your eyeballs leave your eyesockets completely, I ought to warn that there’s still much to be sorted out in this, especially sound, video and networking (the usual suspects). In other words, your mileage may vary, so keep expectations in check!
OK, so HOW in the WORLD is any of this possible?
It turned out “New World ROM” Macs had a cousin born out of the clone program (until the usual villain, Steve Jobs, came and killed it), which was an architecture called “CHRP” (pronounced “chirp”). It was the successor to PReP, but, unlike PReP, Mac OS was also going to be officially bootable on it. Close to no CHRP machines ever saw the light of day, thanks to Jobs’ return. Nonetheless, Apple internally developed Mac OS 7.6 ~ 8.0 for CHRP systems before it got axed. It’s just that they never released it, but the development was done regardless. In October 2025, it turned out someone had preserved some of these Mac OS versions, which were then acquired, preserved, and shared with the world. (Macintosh Garden link, archive.org link.)
Although CHRP was left to die, the so-called “New World ROM” Macs inherited much of its architecture and design. As you probably know, these Macs rely on an extra system file called “Mac OS ROM”, whereas “Old World ROM” Macs do not need it, and can use their own actual ROM to get Mac OS going. This meant any Mac OS version unaware of the concept of a Mac OS ROM file could not simply boot on a New World ROM Mac. People were able to boot Mac OS versions as low as 8.1, but not any lower, and even then only on the very first few New World ROM Macs, none of the later ones, which had higher and higher minimum OS versions.
But not anymore, as the following major events happened:
- The recent Mac OS 8.0 CHRP leaks provided an earlier ROM file that, it turns out, allows regular Mac OS 8.0 to boot, as well. Or, alternatively, the Mac OS ROM file that always worked with Mac OS 8.1 also worked on these Mac OS 8.0 CHRP releases. (Exact details are fuzzy in my memory by now, so someone else might want to correct me if I got something wrong.)
- The recent Mac OS 7.6 CHRP leak provided an additional System Enabler file, which could be exploited for loading Mac OS ROM files. I forget if that’s how it worked out-of-the-box, or if a bit of hacking to the System Enabler was required for that; however, what I do remember clearly is that, while the System Enabler was hardcoded so that no OS earlier than 7.6 could use it (an artificial restriction), the OS version check could be patched out of it, so that System 7.5.x (and potentially earlier) can also use it.
In other words, this file is the reason that earlier Mac OS versions can make use of the Mac OS ROM file, thus bringing Mac OS 7.6.1 and earlier potentially to ALL New World ROM Macs!
(Trivia tidbit: Apparently this enabler was also present in certain archives of the Mac OS 8.0 betas from when it was still known as “Mac OS 7.7”. Oops! This thing was right under our nose all this while!)
- Of course, as hinted at previously, a System Enabler _alone_ is NOT enough to boot System 7 and the like when even much newer systems that were already aware of the Mac OS ROM file could not boot. The newer the model of the New World ROM Mac, the less you could actually “go back”. The reason is simple: Mac OS ROM files, over time through its various versions, would get new features added, BUT also would remove older ones which were required by older OS versions. The solution? Using ELN’s great Mac OS ROM patching tools (plus other tools of his own), “Rairii” AKA “Wack0”, known for his amazing PPC Windows NT 3.51 / NT 4.0 project on PowerMacs and the Nintendo GC / Wii / Wii U, analyzed many of these Mac OS ROM files, and fixed + patched + stitched together new Mac OS ROM files that attempt to keep ALL the old features that were removed AND all the new features that were added. In other words, the ultimate Mac OS ROM file that boots everything and runs everything (roughly-speaking). He also is the one who figured out and hacked the System Enabler to also accept OSes earlier than Mac OS 7.6.
Keep in mind, however, that this effort essentially allows Macs that are already able to boot SOME version of Mac OS to ALSO boot older versions. But if a given machine cannot boot ANY Mac OS version, such as the two DLSD PowerBook G4s (15″, 17″), these patches cannot do anything about that: Their incompatibilities need to be addressed first and separately.
One more interesting thing to note about the similarity between CHRP systems and New World ROM Macs: If you check ANY “Mac OS ROM” file to see its TYPE and CREATOR codes, you will see they are “tbxi” and, you guessed it, “chrp”, respectively. I couldn’t believe “chrp” was in ALL the Mac OS ROM files all these years!
Where can I get ahold of this EPIC stuff?????
Rairii’s “super” ROMs are available on this GitHub repository, under releases. You may also fetch the patched System Enabler for Mac OS 7.6.1 and earlier from there, and place it in the System Folder. Make sure to download the files from the latest release there.
Note that he applied his patches to 3 different versions of the (US) ROMs:
- 10.2.1 with CPU Software 5.9: The “latest and greatest” Mac OS ROM file of all Mac OS. For reference, this is also the ROM version that the 1.628 GB max RAM Mac OS ROM we have was based on (thus going beyond the 1.5 GB limit), although do note that the RAM limit break patches are NOT included in this, at least not yet as of the time of writing.
- 2.5.1: A much earlier version of the ROM, but still new enough to support USB. See the GitHub page for details.
- 1.7.1: A very early ROM, which can be well-leveraged by very early New World ROM Macs. See the GitHub page for details.
Note you need ROM version 9.1 or higher to use ATA-6 AKA Ultra ATA/100 AKA Kauai drivers, which are essential on the likes of the Mac mini G4 and the MDD. Special notes for the Mac mini G4 are further down.
What is the COMPLETE list of Mac OS versions that now boot?
To be exact, this is the complete list of OSes I have attempted, all on the Mac mini G4 1.5GHz model, with the following results:
- System 6.0.8: No boot. You get a Happy Mac, followed by a blinking question mark in a floppy icon. (Note: Although this very attempt is UTTERLY insane for multiple technical reasons, it might not be AS impossible as one may think, as the 68k emulator resides within the Mac OS ROM file.)
- System 7.0: No boot. You get a Happy Mac, but then a warning window pops up saying System 7.0 cannot boot on this computer.
- System 7.1.2: No boot. You get a Happy Mac, but then a warning window pops up saying System 7.1 cannot boot on this computer.
- System 7.5: BOOTS AND IS STABLE. It requires you to hold shift to turn Extensions (and Control Panels / INITs) off, though, or to get rid of the “Mouse” Control Panel (and possibly more). The system is surprisingly stable! I tested the British version of this one, as Apple’s Mac OS Anthology discs did not include the US installers, for some very slacker-y reason.
- System 7.5.2: Boots, but very broken, close to nothing works. It could be because System 7.5.2 was always VERY machine-specific, and is apparently one of the most broken versions of Mac OS of ALL time, regardless. The machine-specific enablers, and other things, might be what is making it so unstable.
- System 7.5.3: BOOTS AND IS STABLE. It requires you to hold shift to turn Extensions (and Control Panels / INITs) off, though, or to get rid of the “Mouse” Control Panel (and possibly more). The system is surprisingly stable!
- Mac OS 7.6: BOOTS AND IS STABLE. Holding shift is not required here. What else can I say? It “works”.
- Mac OS 8.1: BOOTS AND IS STABLE. Holding shift is not required here, either. Behaves much the same as the others, except we now have HFS+ by default. Still, it did NOT like me having a 940 GB HFS+ partition, and prompted me to either eject it or format it. (To be fair, older OSes tried to do that, too, but Mac OS 8.1 was THE OS to _officially_ be able to handle HFS+ properly, so there are no excuses for it to fail here. Mac OS 9.2 ~ 9.2.2 all work perfectly with it.)
- Mac OS 8.5: No boot. Rather, it seems like it WOULD boot, but starting with Mac OS 8.5, Mac OS now always checks to see if the machine you are booting from is within a list of Apple-endorsed machine IDs for the given Mac OS version. In other words, Mac OS 8.5 does not know what the Mac mini G4 is, nor what a G4 Cube is (our Mac mini G4 ROM file makes the mini pretend to be the latter). It seems it should be possible to patch out the machine check. According to Rairii, this should be able to be patched out by disabling such a check on the “boot” resource in the Resource Fork of the System file, in ID 3 (also known as “boot3″). For Mac OS 8.6, it seems like this check happens at the end of boot3, wherever a check for machine ID 406 is located, in which after it’s detected, the code checks to see if the exact Mac model is whitelisted or not.
- Mac OS 8.5.1: No boot. All that applies to Mac OS 8.5 also applies to Mac OS 8.5.1.
- Mac OS 8.6: No boot. It crashes during the start screen, when the loading bar appears, but before the first extension gets to load. See the top-left corner of the picture for a glitchy visual artifact. Same happens if you try to boot with Extensions off.
- Mac OS 9.0.4: No boot. It crashes during the start screen, when the loading bar appears, but before the first extension gets to load. Same happens if you try to boot with Extensions off. Exact same symptoms as when trying to boot Mac OS 8.6 at least on this mini model, including the visual artifact on the top-left corner.
- Mac OS 9.1: No boot. It crashes during the start screen, when the loading bar appears, but before the first extension gets to load. Same happens if you try to boot with Extensions off. Exact same symptoms as when trying to boot Mac OS 8.6 and Mac OS 9.0.4 at least on this mini model, including the visual artifact on the top-left corner.
- Mac OS 9.2 ~ 9.2.2: BEST OS EVER, BOOTS AND RUNS BEAUTIFULLY. ’Nuff said.
Note that, although I describe many of these as “stable”, I mean you can use much of it normally (sound/video/networking aside) without it crashing or misbehaving, at least not too hard, but that is not to say everything works, because that is just not the case. For example, when present, avoid opening the Apple System Profiler, unless you want a massive crash as it struggles trying to profile and gather all the information about your system. Some other apps or Control Panels might either not work, or work up to a certain point, after which they might freeze, requiring you to Force Quit the Finder to keep on going. And so on.
As you can see, I did not yet try System 7.5.5, Mac OS 7.6.1 and Mac OS 8.0. That’s because they all are most likely working exactly as their neighbouring versions. But feel free to confirm.
Most non-mini systems should be able to boot Mac OS 8.6 ~ Mac OS 9.1 just fine. A “Mac OS 8.6 Enabler”, so to speak, by LightBulbFun, can be renamed as e.g. “Sawteeth” and put inside the System Folder for some machines that cannot boot Mac OS 8.6 normally, so that they can, then, boot it. It is actually a Mac OS ROM file, but can function as a complementary, helper file to aid the actual Mac OS ROM file in this case. If you’d like, check here for more info. I have attached “Sawteeth.bin” to this post for convenience. LightBulbFun first shared it on this post, specifically through this MEGA link.
Most non-mini systems should also be able to boot Mac OS 8.5 and 8.5.1, especially on G3s and earlier. Some G4 Macs might need to spoof the Mac model in Open Firmware (or some other Forth script added to ROM) to boot, though, or patch the check out like I mentioned for the mini earlier. The reason the mini doesn’t have the spoofing as an option is that any spoofing in OF would be overwritten by its own specialized Mac OS ROM, which spoofs a G4 Cube, which is clearly not in the whitelist of supported machines for Mac OS 8.5 and 8.5.1.
Also note that the mini behaves as reported above with Mac OS 8.6 with or without this “8.6 enabler” file (and with or without the System Enabler for Mac OS 7.6.1 and earlier, both of which don’t seem to get in the way of later, nor earlier, OSes).
Most importantly, I did not yet attempt to identify which are the latest versions of each Control Panel and Extension for each of these OSes. If I did, I’m sure it would help a lot, and perhaps address quite a number of these problems. The more people chime in on this effort, the better! Imagine if we had a proper “Mac mini G4 System 7.5.5” CD, then an “MDD Mac OS 8.5.1″ CD, then an “iBook G3 Mac OS 7.6.1” CD, and so on. Everyone with a G3 or G4 Mac can help by trying things out!
Namely, something akin to MacTron’s efforts highlighting the latest Extensions for Mac OS 9.2.2 and Mac OS 8.6 like this, but also for every other Mac OS version:
But how did you get the mini to boot? It requires its own special ROM!
Indeed it does! All credit goes to ELN and all of those who helped him on Mac OS 9 Lives!: you can simply use his tooling (which was also very useful for Rairii) to re-apply the Mac-mini-G4-specific ROM patches to Rairii’s latest 10.2.1 ROM, and voila! It works as well as you would hope it to!
You can even use the resulting ROM for Mac OS 9.2.2, as well, even though you don’t have to: Originally, the Mac mini G4 ROM as we see them in RossDarker’s Mac mini G4 CDs version 8 and 9 (AKA v8 and v9), as well as in all the previous versions, were based on the US ROM v9.6.1. I could not find an explanation as to why ROM v10.2.1 wasn’t used in the end, even when digging the old Mac mini G4 thread again that started it all. Perhaps because we already had a working ROM with v9.6.1 and did not want to risk breaking anything, or who knows. However, I have thoroughly tested Mac OS 9.2.2 with this new ROM combination (latest Rairii 10.2.1 + latest Mac mini G4 patches AKA v9 patches), and from what I could tell, everything behaves exactly the same as with the previous ROM we always used. Except now we have the ability to use the same ROM to also boot System 7.5 (I still can’t believe this, even though it is true).
(For the record, while the 9.6.1 ROM was also modified to spoof the Mac mini G4 model identifier as a G4 Cube, we also tried to spoof it as a QuickSilver 2002 at one point, but someone reported sound issues with that, and so it was quickly changed back to a G4 Cube and such a change never made it into one of RossDarker’s CDs. So just about everyone using Mac OS on the mini for all these years has had a ROM reporting to the OS as a G4 Cube, exclusively.)
To apply the Mac mini G4 patches, I used ELN’s tbxi and tbxi-patches to run his “macmini.py” script. You can follow the instructions on the tbxi-patches page; do not let them intimidate you even if you are not used to this kind of thing. It’s quick and easy, and the scripts are also very nicely commented throughout by ELN if you are curious about what they do and why.
In my case, first I tried using the latest Python 3.13.9 both from Windows 7 (bad idea due to resource fork loss) and macOS 10.14.6 Mojave, but neither worked: it seems like that version of Python was just too new. I then retried with Python 3.8.10 instead (which I chose thinking it might be more period-appropriate for the script’s age) on Mojave, which worked flawlessly. I didn’t try it, but perhaps an older Python version might work on PowerPC OS X, as well.
I used the Python installer from the official website, and I also used an “official” Git installer from here (thus avoiding any package manager headache… man, how I hate non-Mac-OS systems, including OS X, and package managers in general…)
If someone with plenty of Python knowledge and the willingness to put enough time into it wished to, both tbxi and tbxi-patches could, perhaps, be ported to MacPython 2.3.5, so that we could do all this patching from Mac OS 9.2.2 directly and natively, without leaving our main OS. That would also be awesome! (Of course, it helps that this tooling is also available on more recent systems, because then everyone gets to join in on the fun with all kinds of different backgrounds and setups.)
For convenience, I attached the final patched ROM to this post, so that anyone can go wild on their minis right away!
Why should I care when Mac OS 9.2.2 already boots, and runs better?
It is also my opinion that Mac OS 9.2.2 is the greatest OS, and the greatest Mac OS, ever, but not everything that is possible in earlier Mac OS versions is possible in Mac OS 9.2.2. For example, some software requires Mac OS 9.0.4 or earlier to work. A lot of software is System-7-exclusive.
Some people also just prefer the likes of System 7 for its even lighter memory footprint, lack of a mandated Appearance Manager, and the like. Mac OS 9.2.2 is already overkill-fast on the mini, and on most New World ROM Macs, but the likes of System 7.5 are just RIDICULOUSLY fast. Even more ridiculously so. I am still trying to come to terms with how indescribably fast using it on the mini was. Just when I thought there was no way to get “faster than instantaneous”, it got even faster; Mac OS 9.2.2 already felt instantaneous like no other system!
People might also have some other reason and/or a special attachment to an earlier OS version. Or maybe they want to explore older OS APIs and behaviors, perhaps even write a new application and see how it behaves on bare metal, not just on Mac OS 9 but also on System 7 and so on.
The value is in opening up the doors that give us, the users, more options that help us all out.
Final remarks
Above all, thank you to everyone who made this possible. I especially want to thank Rairii for engineering all these ROMs, Mac84 for archiving and sharing all the CHRP discs, ELN for engineering all the Mac mini G4 ROM compatibility scripts and creating all the ROM and other Mac OS tooling, and the Mac community at large for helping make all of this a reality. There are honestly many, many people we owe thanks to for this, one way or another, in ways both small and big.
I can’t wait to see what people will do with all these new Mac OS versions on their New World ROM systems over the course of time!
« Last Edit: November 27, 2025, 11:24:38 AM by Jubadub »
There is no Mac in OS X
For the record, I posted this on all these 3 places for greater reach:
- Here (where the Mac mini G4 ROM is);
- Macintosh Garden;
- System 7 Today.
These posts were also linked to from the following places, with discussions of their own:
- MacRumors PPC;
- 68kMLA.
I’m mentioning this here in case anyone wants to quickly jump to the discussions happening on either side of the aisle.
EDIT: For some reason I seem unable to make posts with attachments (they just erase my message and prompt me to start a new topic), so I’m trying to see what file upload service I should use for the Mac mini G4 ROM that boots System 7 (and the “Sawtooth.bin” file).
I’d rather not create a Garden page just for this just yet as this project is still in its infancy, so any Mac-friendly file upload service recommendations would be appreciated.
EDIT 2: Ah, I got it figured out: attachments cannot be “.bin” nor “.hqx”… This should change… I repackaged the contents as “.sit” for now and put them up in the first post, as intended.
« Last Edit: Today at 02:14:33 AM by Jubadub »
Quote from: Jubadub on November 27, 2025, 10:13:03 AMI have thoroughly tested Mac OS 9.2.2 with this new ROM combination (latest Rairii 10.2.1 + latest Mac mini G4 patches AKA v9 patches), and from what I could tell, everything behaves exactly the same as with the previous ROM we always used.
Long shot, but, is the mouse-freezing bug still present?
Quote from: n8blz on November 27, 2025, 03:28:43 PMQuote from: Jubadub on November 27, 2025, 10:13:03 AMI have thoroughly tested Mac OS 9.2.2 with this new ROM combination (latest Rairii 10.2.1 + latest Mac mini G4 patches AKA v9 patches), and from what I could tell, everything behaves exactly the same as with the previous ROM we always used.
Long shot, but, is the mouse-freezing bug still present?
You know… Now that you mention it, I don’t think I encountered it. But maybe I was just “lucky”: even with the previous ROM, it was not THAT common for me to encounter it (but at the same time, it wasn’t exactly very rare).
If you have a mini, are you able to check it on your end, as well? The more people we have trying this out, the more likely we are to truly find out.
I suspect the mouse glitch probably remains, since as far as we know it’s a shortcoming to be addressed in our patches for the mini rather than Apple’s own code, but… who knows?
« Reply #4 on: Yesterday at 04:38:32 AM »
@Jubadub, thank you for writing all this! I will need to re-read it several times to fully grasp what’s going on there.
This is EPIC!
If you’re not part of the solution, you’re part of the problem.
« Reply #5 on: Yesterday at 09:23:44 AM »
BRAVO Jubadub!
Had been wondering about your absence ’round here lately.
Now that’s evident, considering what you’ve been working on.
Extremely well done and VERY highly commendable.
Not everyone is capable of pulling all of that information together
...
Read the original on macos9lives.com »
Run Windows applications (including Microsoft 365 and Adobe Creative Cloud) on GNU/Linux with KDE Plasma, GNOME or XFCE, integrated seamlessly as if they were native to the OS.
* Creating shortcuts to selected Windows applications on the host GNU/Linux OS.
* Using FreeRDP as a backend to seamlessly render Windows applications alongside GNU/Linux applications.
* The GNU/Linux /home directory is accessible within Windows via the \\tsclient\home mount.
* Integration with Nautilus, allowing you to right-click files to open them with specific Windows applications based on the file MIME type.
* The official taskbar widget enables seamless administration of the Windows subsystem and offers an easy way to launch Windows applications.
* Microsoft Office links (e.g. ms-word://) from the host system are automatically opened in the Windows subsystem. (Note: You may need to use a User Agent Switcher browser extension and set the User-Agent to Windows, as the Office webapps typically hide the “Open in Desktop App” option for Linux users.)
WinApps supports ALL Windows applications. Support does not, however, extend to kernel-level anti-cheat systems (e.g. Riot Vanguard).
Scanning Windows for any community tested applications (list below).
Scanning Windows for any other .exe files listed within the Windows Registry.
Community tested applications benefit from high-resolution icons and pre-populated MIME types. This enables file managers to determine which Windows applications should open files based on file extensions. Icons for other detected applications are pulled from .exe files.
Contributing to the list of supported applications is encouraged through submission of pull requests! Please help us grow the WinApps community.
Please note that the provided list of community tested applications is community-driven. As such, some applications may not be tested and verified by the WinApps team.
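As an illustration of how extension-to-application resolution can work (the application table below is invented for the example; it is not WinApps’ real community list or its actual implementation), a file manager could map a file’s extension to a Windows application like this:

```python
# Hypothetical sketch: resolve a file extension to a Windows application,
# in the spirit of WinApps' pre-populated MIME types. The table is made up.
import os

APP_BY_EXTENSION = {
    ".docx": "Microsoft Word",
    ".xlsx": "Microsoft Excel",
    ".psd": "Adobe Photoshop",
}

def windows_app_for(path):
    """Return the Windows application a file manager might offer, or None."""
    ext = os.path.splitext(path)[1].lower()  # extension, case-insensitive
    return APP_BY_EXTENSION.get(ext)

print(windows_app_for("report.DOCX"))  # Microsoft Word
```

In practice WinApps registers these associations through desktop MIME-type entries so the file manager’s “Open With” menu handles the lookup, rather than through application code like this.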
Both Docker and Podman are recommended backends for running the Windows virtual machine, as they facilitate an automated Windows installation process. WinApps is also compatible with libvirt. While this method requires considerably more manual configuration, it also provides greater virtual machine customisation options. All three methods leverage the KVM hypervisor, ensuring excellent virtual machine performance. Ultimately, the choice of backend depends on your specific use case.
The following guides are available:
If you already have a Windows VM or server you wish to use with WinApps, you will still have to follow the final steps described in the libvirt documentation.
WinApps requires FreeRDP version 3 or later. If not available for your distribution through your package manager, you can install the Flatpak:
flatpak install flathub com.freerdp.FreeRDP
sudo flatpak override --filesystem=home com.freerdp.FreeRDP # To use `+home-drive`
However, if you have weird issues like #233 when running Flatpak, please compile FreeRDP from source according to this guide.
Create a configuration file at ~/.config/winapps/winapps.conf containing the following:
# WINAPPS CONFIGURATION FILE #
# INSTRUCTIONS
# - Leading and trailing whitespace are ignored.
# - Empty lines are ignored.
# - Lines starting with '#' are ignored.
# - All characters following a '#' are ignored.
# [WINDOWS USERNAME]
RDP_USER="MyWindowsUser"
# [WINDOWS PASSWORD]
# NOTES:
# - If using FreeRDP v3.9.0 or greater, you *have* to set a password
RDP_PASS="MyWindowsPassword"
# [WINDOWS DOMAIN]
# DEFAULT VALUE: '' (BLANK)
RDP_DOMAIN=""
# [WINDOWS IPV4 ADDRESS]
# NOTES:
# - If using 'libvirt', 'RDP_IP' will be determined by WinApps at runtime if left unspecified.
# DEFAULT VALUE:
# - 'docker': '127.0.0.1'
# - 'podman': '127.0.0.1'
# - 'libvirt': '' (BLANK)
RDP_IP="127.0.0.1"
# [VM NAME]
# NOTES:
# - Only applicable when using 'libvirt'
# - The libvirt VM name must match so that WinApps can determine the VM IP, start the VM, etc.
# DEFAULT VALUE: 'RDPWindows'
VM_NAME="RDPWindows"
# [WINAPPS BACKEND]
# DEFAULT VALUE: 'docker'
# VALID VALUES:
# - 'docker'
# - 'podman'
# - 'libvirt'
# - 'manual'
WAFLAVOR="docker"
# [DISPLAY SCALING FACTOR]
# NOTES:
# - If an unsupported value is specified, a warning will be displayed.
# - If an unsupported value is specified, WinApps will use the closest supported value.
# DEFAULT VALUE: '100'
# VALID VALUES:
# - '100'
# - '140'
# - '180'
RDP_SCALE="100"
# [MOUNTING REMOVABLE PATHS FOR FILES]
# NOTES:
# - By default, `udisks` (which you most likely have installed) uses /run/media for mounting removable devices.
#   This improves compatibility with most desktop environments (DEs).
# ATTENTION: The Filesystem Hierarchy Standard (FHS) recommends /media instead. Verify your system's configuration.
# - To manually mount devices, you may optionally use /mnt.
# REFERENCE: https://wiki.archlinux.org/title/Udisks#Mount_to_/media
REMOVABLE_MEDIA="/run/media"
# [ADDITIONAL FREERDP FLAGS & ARGUMENTS]
# NOTES:
# - You can try adding /network:lan to these flags in order to increase performance; however, some users have faced issues with this.
#   If this does not work, or if it does not work without the flag, you can try adding /nsc and /gfx.
# DEFAULT VALUE: '/cert:tofu /sound /microphone +home-drive'
# VALID VALUES: See https://github.com/awakecoding/FreeRDP-Manuals/blob/master/User/FreeRDP-User-Manual.markdown
RDP_FLAGS="/cert:tofu /sound /microphone +home-drive"
# [DEBUG WINAPPS]
# NOTES:
# - Creates and appends to ~/.local/share/winapps/winapps.log when running WinApps.
# DEFAULT VALUE: 'true'
# VALID VALUES:
# - 'true'
# - 'false'
DEBUG="true"
# [AUTOMATICALLY PAUSE WINDOWS]
# NOTES:
# - This is currently INCOMPATIBLE with 'manual'.
# DEFAULT VALUE: 'off'
# VALID VALUES:
# - 'on'
# - 'off'
AUTOPAUSE="off"
# [AUTOMATICALLY PAUSE WINDOWS TIMEOUT]
...
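The parsing rules stated at the top of the configuration file (leading and trailing whitespace trimmed, empty lines skipped, everything after a '#' ignored) can be sketched in a few lines of Python. This is purely illustrative and not WinApps' actual parser:

```python
# Minimal sketch of the winapps.conf parsing rules described above:
# comments stripped, whitespace trimmed, KEY="value" pairs collected.

def parse_conf(text):
    config = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop '#' comments, trim whitespace
        if not line:
            continue  # skip empty (or comment-only) lines
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

sample = '''
# [WINAPPS BACKEND]
WAFLAVOR="docker"
RDP_SCALE="100"   # display scaling
'''
print(parse_conf(sample))  # {'WAFLAVOR': 'docker', 'RDP_SCALE': '100'}
```

One consequence of the "everything after a '#' is ignored" rule is that values themselves cannot contain a literal '#', which is worth keeping in mind when choosing RDP_PASS.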
Read the original on github.com »