10 interesting stories served every morning and every evening.




1 443 shares, 18 trendiness

Imgur Geo-Blocked the UK, So I Geo-Unblocked My Entire Network

Imgur de­cided to block UK users. Honestly? I don’t re­ally care that much. I haven’t ac­tively browsed the site in years. But it used to be every­where. Back when Reddit em­bed­ded every­thing on Imgur, maybe fif­teen years ago, it was gen­uinely use­ful. Then Reddit built their own im­age host­ing, Discord did the same, and Imgur slowly faded into the back­ground.

Except it never fully disappeared. And since the block, I keep stumbling across Imgur links that just show “unavailable.” It’s mildly infuriating.

Here’s a con­crete ex­am­ple. I was play­ing Minecraft with some work col­leagues and wanted to try dif­fer­ent shaders. Most shader pages em­bed pre­view im­ages hosted on Imgur. So I’d click through shader af­ter shader, and every sin­gle pre­view was just gone. I could­n’t see what any of them looked like with­out the im­ages.

This kind of thing hap­pens con­stantly now. Old fo­rum posts, Reddit threads, doc­u­men­ta­tion pages, ran­dom pro­ject READMEs. Imgur links are still scat­tered across the in­ter­net, and in the UK, they’re all bro­ken.

The ob­vi­ous so­lu­tion is to use a VPN. Change your lo­ca­tion, prob­lem solved. But I have a few is­sues with that ap­proach.

First, I just up­graded to 2.5 Gbps in­ter­net and I don’t want to route all my traf­fic through a VPN and take the speed hit. I have this band­width for a rea­son.

Second, even if I in­stalled a VPN on my main ma­chine, what about my phone? My lap­top? My desk­top? Every de­vice would need the VPN run­ning, and I’d have to re­mem­ber to con­nect it be­fore brows­ing. It’s messy.

I wanted some­thing cleaner: a so­lu­tion that works for every de­vice on my net­work, au­to­mat­i­cally, with­out any client-side con­fig­u­ra­tion.

I al­ready run a home­lab with Traefik as my re­verse proxy, Pi-hole for DNS, and every­thing de­clar­a­tively con­fig­ured with NixOS. If you’ve read my pre­vi­ous post on Docker con­tain­ers with se­crets, you’ll recog­nise the pat­tern.

The idea was sim­ple: in­ter­cept all re­quests to i.imgur.com at the DNS level, route them through a VPN-connected con­tainer, and serve the im­ages back. Every de­vice on my net­work au­to­mat­i­cally uses Pi-hole for DNS via DHCP, so this would be com­pletely trans­par­ent.
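On the Pi-hole side, this amounts to a single local DNS override. A sketch, assuming the Traefik host lives at 192.168.1.10 (a hypothetical address, not the author’s actual one):

```ini
# /etc/dnsmasq.d/99-imgur.conf — Pi-hole custom dnsmasq config
# Answer i.imgur.com with the LAN address of the Traefik host
address=/i.imgur.com/192.168.1.10
```

Every DHCP client on the network then resolves i.imgur.com to Traefik instead of Imgur’s real servers, with no per-device changes.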

1. Traefik sees the SNI hostname and routes to Gluetun
2. Nginx (attached to Gluetun’s network) proxies to the real Imgur
3. The image comes back through the tunnel to the device

Why Nginx at all? Gluetun isn’t a reverse proxy. It’s a container that provides VPN connectivity to other containers attached to its network namespace. So I needed something inside Gluetun’s network to actually handle the proxying. Nginx was the simplest choice.

The Nginx con­fig is min­i­mal. It just does TCP passthrough with SNI:
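A minimal sketch of such a passthrough, using the stream module’s `ssl_preread` (the public resolver is my assumption, added so the upstream lookup isn’t answered by Pi-hole, which points back at Traefik):

```nginx
# nginx.conf — TCP passthrough using the stream module (sketch)
stream {
    # Resolve the upstream outside the local Pi-hole to avoid a loop
    resolver 1.1.1.1;

    # Map the SNI name to the real upstream
    map $ssl_preread_server_name $upstream {
        i.imgur.com i.imgur.com:443;
    }

    server {
        listen 443;
        ssl_preread on;        # read the SNI name without terminating TLS
        proxy_pass $upstream;  # forward the raw TLS connection
    }
}
```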

This lis­tens on port 443, reads the SNI header to con­firm the des­ti­na­tion, and passes the con­nec­tion through to the real i.imgur.com. The TLS hand­shake hap­pens end-to-end; Nginx never sees the de­crypted traf­fic.

The com­pose file runs two con­tain­ers. Gluetun han­dles the VPN con­nec­tion, and Nginx at­taches to Gluetun’s net­work:

The key detail is network_mode: "service:gluetun". This makes Nginx share Gluetun’s network stack, so all its traffic automatically goes through the VPN tunnel.
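A compose file along these lines might look as follows; the Gluetun environment variables, exit country, and network name are assumptions for illustration, not the author’s actual settings:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=custom          # placeholder provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=${WIREGUARD_PRIVATE_KEY}
      - SERVER_COUNTRIES=Netherlands         # any non-UK exit works
    networks:
      - proxy                                # network Traefik can reach

  nginx:
    image: nginx:stable
    network_mode: "service:gluetun"          # share Gluetun's network stack
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

networks:
  proxy:
    external: true
```

Note that only Gluetun declares networks; a container using `network_mode: "service:..."` cannot attach to networks of its own, which is exactly why Traefik dials the Gluetun container to reach Nginx.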

I’m not go­ing to men­tion which VPN provider I use. It’s one of the ma­jor ones with WireGuard sup­port, but hon­estly I’m not thrilled with it. Use what­ever you have.

The fi­nal piece is telling Traefik to route i.imgur.com traf­fic to the Gluetun con­tainer. This uses TCP rout­ing with TLS passthrough:

The passthrough: true is im­por­tant. It means Traefik does­n’t ter­mi­nate TLS; it just in­spects the SNI header and for­wards the con­nec­tion.
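In Traefik’s dynamic configuration, such a TCP router with passthrough could be sketched like this (the entry point and service names are hypothetical):

```yaml
# Traefik dynamic configuration (sketch)
tcp:
  routers:
    imgur:
      rule: "HostSNI(`i.imgur.com`)"
      entryPoints:
        - websecure
      service: imgur-vpn
      tls:
        passthrough: true        # forward raw TLS, don't terminate it
  services:
    imgur-vpn:
      loadBalancer:
        servers:
          - address: "gluetun:443"   # Nginx listening inside Gluetun's namespace
```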

Following the same pat­tern from my Docker with se­crets post, I cre­ated a sys­temd ser­vice that runs the com­pose stack with Agenix-managed se­crets:
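A NixOS sketch of that pattern might look like the following; the module layout, secret names, and paths are hypothetical, not the author’s actual config:

```nix
{ config, pkgs, ... }:
{
  # Encrypted WireGuard key, decrypted by Agenix at activation time
  age.secrets.wireguard-key.file = ../secrets/wireguard-key.age;

  systemd.services.imgur-proxy = {
    description = "Imgur VPN proxy stack";
    wantedBy = [ "multi-user.target" ];
    after = [ "docker.service" ];
    requires = [ "docker.service" ];
    script = ''
      # Hand the decrypted secret to compose via the environment
      export WIREGUARD_PRIVATE_KEY="$(cat ${config.age.secrets.wireguard-key.path})"
      ${pkgs.docker-compose}/bin/docker-compose \
        -f /srv/imgur-proxy/docker-compose.yml up
    '';
  };
}
```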

The VPN cre­den­tials are stored en­crypted with Agenix, so my en­tire dot­files repo stays pub­lic while keep­ing se­crets safe.

Now when any de­vice on my net­work re­quests an Imgur im­age, it works. My phone, my lap­top, guest de­vices, every­thing. No VPN apps to in­stall, no browser ex­ten­sions, no man­ual con­fig­u­ra­tion. Pi-hole in­ter­cepts the DNS, Traefik routes the con­nec­tion, and Gluetun tun­nels it through a non-UK exit point.

The la­tency in­crease is neg­li­gi­ble for load­ing im­ages, and it only af­fects Imgur traf­fic. Everything else still goes di­rect at full speed.

Is this overkill for view­ing the oc­ca­sional Imgur im­age? Probably. But it’s a clean so­lu­tion that re­quires min­i­mal on­go­ing main­te­nance, and it scratches the home­lab itch. Plus I can fi­nally see what those Minecraft shaders look like.

...

Read the original on blog.tymscar.com »

2 429 shares, 16 trendiness

Hacker News vector search dataset

Sentence Transformers provide local, easy-to-use embedding models for capturing the semantic meaning of sentences and paragraphs.

This HackerNews dataset contains vector embeddings generated from the all-MiniLM-L6-v2 model.

An example Python script is provided below to demonstrate how to programmatically generate embedding vectors using the sentence_transformers Python package. The search embedding vector is then passed as an argument to the cosineDistance() function in the SELECT query.

from sentence_transformers import SentenceTransformer
import sys
import clickhouse_connect

print("Initializing...")

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
chclient = clickhouse_connect.get_client()  # ClickHouse credentials here

while True:
    # Take the search query from the user
    print("Enter a search query :")
    input_query = sys.stdin.readline()
    texts = [input_query]

    # Run the model and obtain the search vector
    print("Generating the embedding for ", input_query)
    embeddings = model.encode(texts)

    print("Querying ClickHouse...")
    params = {'v1': list(embeddings[0]), 'v2': 20}
    result = chclient.query(
        "SELECT id, title, text FROM hackernews "
        "ORDER BY cosineDistance(vector, %(v1)s) LIMIT %(v2)s",
        parameters=params)

    print("Results :")
    for row in result.result_rows:
        print(row[0], row[2][:100])
        print("-----")

An example of running the above Python script and its similarity search results are shown below (only 100 characters from each of the top 20 posts are printed):

Initializing...

Enter a search query :

Are OLAP cubes use­ful

Generating the embedding for “Are OLAP cubes useful”

Querying ClickHouse…

Results :

27742647 smartmic: slt2021: OLAP Cube is not dead, as long as you use some form of:1. GROUP BY multiple fi
-----
27744260 georgewfraser: A data mart is a logical organization of data to help humans understand the schema. Wh
-----
27761434 mwexler: “We model data according to rigorous frameworks like Kimball or Inmon because we must r
-----
28401230 chotmat: erosenbe0: OLAP database is just a copy, replica, or archive of data with a schema designe
-----
22198879 Merick: +1 for Apache Kylin, it’s a great project and awesome open source community. If anyone i
-----
27741776 crazydoggers: I always felt the value of an OLAP cube was uncovering questions you may not know to as
-----
22189480 shadowsun7: _Codemonkeyism: After maintaining an OLAP cube system for some years, I’m not that
-----
27742029 smartmic: gengstrand: My first exposure to OLAP was on a team developing a front end to Essbase that
-----
22364133 irfansharif: simo7: I’m wondering how this technology could work for OLAP cubes. An OLAP cube
-----
23292746 scoresmoke: When I was developing my pet project for Web analytics (

The ex­am­ple above demon­strated se­man­tic search and doc­u­ment re­trieval us­ing ClickHouse.

A very sim­ple but high po­ten­tial gen­er­a­tive AI ex­am­ple ap­pli­ca­tion is pre­sented next.

The ap­pli­ca­tion per­forms the fol­low­ing steps:

Accepts a topic as in­put from the user

Generates an em­bed­ding vec­tor for the topic by us­ing the SentenceTransformers with model all-MiniLM-L6-v2

Retrieves highly rel­e­vant posts/​com­ments us­ing vec­tor sim­i­lar­ity search on the hack­ernews table

Uses LangChain and OpenAI gpt-3.5-turbo Chat API to sum­ma­rize the con­tent re­trieved in step #3.

The posts/​com­ments re­trieved in step #3 are passed as con­text to the Chat API and are the key link in Generative AI.

An example from running the summarization application is listed first below, followed by the code for the summarization application. Running the application requires an OpenAI API key to be set in the environment variable OPENAI_API_KEY. The OpenAI API key can be obtained after registering at https://platform.openai.com.

This application demonstrates a Generative AI use case that is applicable to multiple enterprise domains like customer sentiment analysis, technical support automation, mining user conversations, legal documents, medical records, meeting transcripts, financial statements, etc.

$ python3 summarize.py

Enter a search topic :

ClickHouse per­for­mance ex­pe­ri­ences

Generating the embedding for --> ClickHouse performance experiences

Querying ClickHouse to re­trieve rel­e­vant ar­ti­cles…

Initializing chat­gpt-3.5-turbo model…

Summarizing search re­sults re­trieved from ClickHouse…

Summary from chat­gpt-3.5:

The dis­cus­sion fo­cuses on com­par­ing ClickHouse with var­i­ous data­bases like TimescaleDB, Apache Spark,

AWS Redshift, and QuestDB, high­light­ing ClickHouse’s cost-ef­fi­cient high per­for­mance and suit­abil­ity

for an­a­lyt­i­cal ap­pli­ca­tions. Users praise ClickHouse for its sim­plic­ity, speed, and re­source ef­fi­ciency

in han­dling large-scale an­a­lyt­ics work­loads, al­though some chal­lenges like DMLs and dif­fi­culty in back­ups

are men­tioned. ClickHouse is rec­og­nized for its real-time ag­gre­gate com­pu­ta­tion ca­pa­bil­i­ties and solid

en­gi­neer­ing, with com­par­isons made to other data­bases like Druid and MemSQL. Overall, ClickHouse is seen

as a pow­er­ful tool for real-time data pro­cess­ing, an­a­lyt­ics, and han­dling large vol­umes of data

ef­fi­ciently, gain­ing pop­u­lar­ity for its im­pres­sive per­for­mance and cost-ef­fec­tive­ness.

Code for the above application:

print("Initializing...")

import sys
import json
import time
from sentence_transformers import SentenceTransformer
import clickhouse_connect
from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.summarize import load_summarize_chain
import textwrap
import tiktoken

def num_tokens_from_string(string: str, encoding_name: str) -> int:
    encoding = tiktoken.encoding_for_model(encoding_name)
    num_tokens = len(encoding.encode(string))
    return num_tokens

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
chclient = clickhouse_connect.get_client(compress=False)  # ClickHouse credentials here

while True:
    # Take the search query from the user
    print("Enter a search topic :")
    input_query = sys.stdin.readline()
    texts = [input_query]

    # Run the model and obtain the search or reference vector
    print("Generating the embedding for --> ", input_query)
    embeddings = model.encode(texts)

    print("Querying ClickHouse...")
    params = {'v1': list(embeddings[0]), 'v2': 100}
    result = chclient.query(
        "SELECT id, title, text FROM hackernews "
        "ORDER BY cosineDistance(vector, %(v1)s) LIMIT %(v2)s",
        parameters=params)

    # Just join all the search results
    doc_results = ""
    for row in result.result_rows:
        doc_results = doc_results + "\n" + row[2]

    print("Initializing chatgpt-3.5-turbo model")
    model_name = "gpt-3.5-turbo"

    text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
        model_name=model_name
    )
    texts = text_splitter.split_text(doc_results)
    docs = [Document(page_content=t) for t in texts]
    llm = ChatOpenAI(temperature=0, model_name=model_name)

    prompt_template = """
Write a concise summary of the following in not more than 10 sentences:

...

Read the original on clickhouse.com »

3 424 shares, 23 trendiness

Airbus update on A320 Family precautionary fleet action

Toulouse, France, 28 November 2025 — Analysis of a re­cent event in­volv­ing an A320 Family air­craft has re­vealed that in­tense so­lar ra­di­a­tion may cor­rupt data crit­i­cal to the func­tion­ing of flight con­trols.

Airbus has con­se­quently iden­ti­fied a sig­nif­i­cant num­ber of A320 Family air­craft cur­rently in-ser­vice which may be im­pacted.

Airbus has worked proac­tively with the avi­a­tion au­thor­i­ties to re­quest im­me­di­ate pre­cau­tion­ary ac­tion from op­er­a­tors via an Alert Operators Transmission (AOT) in or­der to im­ple­ment the avail­able soft­ware and/​or hard­ware pro­tec­tion, and en­sure the fleet is safe to fly. This AOT will be re­flected in an Emergency Airworthiness Directive from the European Union Aviation Safety Agency (EASA).

Airbus ac­knowl­edges these rec­om­men­da­tions will lead to op­er­a­tional dis­rup­tions to pas­sen­gers and cus­tomers. We apol­o­gise for the in­con­ve­nience caused and will work closely with op­er­a­tors, while keep­ing safety as our num­ber one and over­rid­ing pri­or­ity.

...

Read the original on www.airbus.com »

4 389 shares, 14 trendiness

Bringing Sexy Back

I don’t re­mem­ber when I first started notic­ing that peo­ple I knew out in the world had lost their sense of erotic pri­vacy, but I do re­mem­ber the day it struck me as a phe­nom­e­non that had es­caped my time­line and en­tered my real, fleshy life. It was last year, when I was hav­ing a con­ver­sa­tion with a friend of mine, who, for the record, is five years younger than me (I’m 31). I told my friend about an erotic en­counter I’d just ex­pe­ri­enced and very much de­lighted in, in which I had my hair brushed at the same time by two very beau­ti­ful women at the hair sa­lon — one was teach­ing the other how to do it a cer­tain way. When I fin­ished my story, my friend looked at me, hor­ri­fied.

“They had no idea you felt something sexual about them,” she said. “What if they found out? Lowkey, I hate to say this but: you took advantage of them.” I was shocked. I tried to explain — and it felt extremely absurd to explain — that this had happened in my body and in my thoughts, which were private to me and which nobody had the right to know about. But they did have the right, my friend argued. She demanded that I apologize to the women for sexualizing them. Offended at having been accused — in my view, in extremely bad faith — of being some kind of peep-show creep, I tried to argue that I’d simply responded in a physical way to an unexpected, direct, and involuntary stimulus. Back and forth, back and forth, we fought like this for a while. In fact, it ended the friendship.

There were other conversations, too, that suggested to me that conceptions of love and sex have changed fundamentally among people I know. Too many of my friends and acquaintances — of varying degrees of “onlineness,” from veteran discourse observers to casual browsers — seem to have internalized the internet’s tendency to reach for the least charitable interpretation of every glancing thought and, as a result, to have pathologized what I would characterize as the normal, internal vagaries of desire.

Hence, there was the friend who justified her predilection for being praised in bed as a “kink” inherited through the “trauma” of her father always harping on her because of her grades. There was the friend who felt entitled to posting screenshots of intimate conversations on Twitter after a messy breakup so that she could get a ruling on who was “the crazy one.” Then there was the friend who bitterly described a man he was dating as a “fuckboy” because he stood him up, claiming that their having enjoyed sex together beforehand was “emotionally manipulative.” When I dug a bit deeper, it turned out the man in question had just gotten out of a seven-year relationship and realized he wasn’t ready to be sexually intimate, and while he was rude to stand my friend up, it shocked me how quick my friend was to recast his own rightfully hurt feelings as something pathological or sinister in the other person, and that he did this in order to preemptively shield himself from being cast as the villain in what was a multi-party experience. This last friend I asked: “Who are you defending yourself against?” To which he answered, to my astonishment: “I don’t know. The world.”

I choose these examples from my personal life because they express sentiments that were once the kind of stuff I encountered only in the messy battlegrounds of Twitter, amid discussions about whether Sabrina Carpenter is being oversexualized, whether kinks are akin to a sexual orientation, whether a woman can truly consent in an age-gap relationship, and whether exposure to sex scenes in movies violates viewer consent. It is quite easy to dismiss these “discourse wars” as a “puritanism” afflicting the young, a reactionary current to be solved with a different, corrective discourse of pro-sex liberation, distributed via those same channels. If only it were so! To me, the reality goes deeper and is bleaker.

The fact is that our most in­ti­mate in­ter­ac­tions with oth­ers are now gov­erned by the ex­pec­ta­tion of sur­veil­lance and pun­ish­ment from an on­line pub­lic. One can never be sure that this pub­lic or some­one who could po­ten­tially ex­pose us to it is­n’t there, al­ways se­cretly film­ing, post­ing, tak­ing notes, ready to pounce the sec­ond one does some­thing cringe or prob­lem­atic (as de­fined by whom?). To claim that these mat­ters are merely dis­cur­sive in na­ture is to ig­nore the prob­lem. Because love and sex are so in­ti­mate and vul­ner­a­ble, the stakes of pun­ish­ment are higher, and the fear of it pen­e­trates deeper into the psy­che and is harder to ra­tio­nal­ize away than, say, fear of push­back from tweet­ing a di­vi­sive po­lit­i­cal opin­ion.

I should state at this point that this is not an essay about “cancel culture going too far,” a topic which can now be historicized as little more than a rhetorical cudgel wielded successfully by the right to wrest cultural power back from an ascendant progressive liberalism. This was especially true after the prominence of organized campaigns such as #MeToo. #MeToo was smeared by liberals and conservatives alike (united, as they always are, in misogyny) as being inherently punitive in nature, meant to punish men who’d fallen into a rough patch of bad behavior, or who, perhaps, might not have done anything at all (the falsely accused or the misinterpreted man became the real victim, in this view). #MeToo did make use of the call-out — the story shared in a spreadsheet anonymously or in a signed op-ed — but the call-outs had a purpose: to end a long-standing and long-permitted norm of sexual abuse within institutions. Underlying this was a discursive practice and a form of solidarity building in which people believed that sharing their stories of trauma en masse could bring about structural change. As someone who participated myself, I too believed in this theory and saw it as necessary, cathartic, and political, and far from vigilante justice.

But the push­back against #MeToo re­veals a cer­tain peril to sto­ry­telling as pol­i­tics, not only in the re­trauma­ti­za­tion ev­i­dent in the prac­tice of re­veal­ing one’s most in­ti­mate harms be­fore an in­fi­nite on­line au­di­ence, which could al­ways in­clude those lis­ten­ing in bad faith. But also, a dis­cur­sive mar­ket opened up in which trauma be­came a kind of cur­rency of au­then­tic­ity, re­sult­ing in a dou­bled ex­ploita­tion. This idea, while not very nice, lingers in the use of harm as an au­thor­i­ta­tive form of rhetor­i­cal de­fense. The prob­lem here is not what is said, but how it is used. A fric­tion has since emerged be­tween an aware­ness of weaponiza­tion of harm and emo­tion and the con­tin­ued need to ex­press one­self as vul­ner­a­bly as pos­si­ble in or­der to come off as sin­cere. This fric­tion is un­re­solved.

The or­ga­nized goals of the #MeToo move­ment are miss­ing from the new pu­ri­tanism. I think that the prud­ish re­vul­sion I’ve seen on­line and in my own life has as much to do with sur­veil­lance as with sex. Punishing strangers for their per­ceived per­ver­sion is a form of com­pen­sa­tion for a process that is al­ready com­pleted: the ero­sion of erotic and emo­tional pri­vacy through in­ter­net-dri­ven sur­veil­lance prac­tices, prac­tices we have since turned in­ward on our­selves. In short, we have be­come our own panop­ti­cons.

On the right­most side of the spec­trum, puni­tive anti-erotic sur­veil­lance is very ex­plicit and very real, es­pe­cially for women. The Andrew Tates of the world and the prac­ti­tion­ers of ex­treme forms of misog­yny have no prob­lem with us­ing in­ter­net tools and so­cial me­dia web­sites for mass sham­ing and ex­plicit harm. Covert film­ing of sex acts, AI deep fakes, ex­tor­tion, and re­venge porn are all re­al­i­ties one has to con­tend with when think­ing about hook­ing up or go­ing to pub­lic places such as night­clubs and gay bars. This is black­mail at its most ex­plicit and ex­treme, meant to fur­ther so­lid­ify a link be­tween sex and fear.

But that link between sex and fear is operating in more “benign” or common modes of internet practice. There is an online culture that thinks nothing of submitting screenshots, notes, videos, and photos with calls for collective judgement. When it became desirable and permissible to transform our own lives into content, it didn’t take long before a sense of entitlement emerged that extended that transformation to people we know and to strangers. My ex sent me this text, clearly she is the crazy one, right? Look at this dumb/funny/cringe Hinge profile! Look at this note some guy sent me, is this a red flag? Look at this random woman I photographed buying wine, coconut oil, and a long cucumber at the supermarket!

I think these kinds of posts some­times amount to lit­tle more than com­mon bul­ly­ing, but they are on a con­tin­uum with a pu­ri­tan dis­course in which in­ti­mate ques­tions, prac­tices, and be­liefs about queer­ness, sex­u­al­ity, gen­der pre­sen­ta­tion, and de­sire are also sub­jected to days-long piles-on. In both in­stances, the in­stinct to sub­mit on­line strangers to vi­ral dis­ci­pline is given a faux-rad­i­cal sheen. It’s a kind of ca­sual black­mail that warns every­one to con­form or be ex­posed; a way of say­ing if you don’t cave to my point of view, re­de­fine your­self in my im­age of what sex­u­al­ity is or should be, and (most im­por­tantly) apol­o­gize to me and the pub­lic, I will sub­ject you to my large fol­low­ing and there will be hell to pay. Such un­pro­duc­tive and an­ti­so­cial be­hav­ior is jus­ti­fied as a step to­ward lib­er­a­tion from pre­da­tion, misog­yny, or any num­ber of other harms. But the puni­tive mind­set we’ve de­vel­oped to­wards re­la­tion­ships is in­dica­tive of an in­abil­ity to imag­ine a fu­ture of gen­dered or sex­ual re­la­tions with­out sub­ju­ga­tion. To couch that in the lan­guage of harm re­duc­tion and trauma dele­git­imizes both.

There are other ways the politics of surveillance have become a kind of funhouse mirror. It is seen as more and more normal to track one’s partner through Find My iPhone or an AirTag, even though the potential for abuse of this technology is staggering and obvious. There are all kinds of new products, such as a biometric ring that is allegedly able to tell you whether your partner is cheating, that expand this capability into more and more granular settings. That’s all before we get into the endless TikToks about “why I go through my partner’s text messages.” That men use these tactics and tools to control women is a known threat. What is astonishing is the lengths to which some women will go to use these same technologies, claiming that they are necessary to prevent harm — especially that caused by cheating, which is now seen as some kind of lifelong trauma or permanently damnable offense instead of one of the rather quotidian, if very painful, ways we hurt one another. Each of these surveillance practices operates from a feeling of entitlement and control over other people, their bodies, and what they do.

Pundits like to de­cree sex­less­ness as a Gen-Z prob­lem, to ar­gue no one is fuck­ing be­cause they are too on their phones. However, it is al­ways too easy to blame the young. It was my gen­er­a­tion that failed to in­still the so­cial norms nec­es­sary to pre­vent a sit­u­a­tion where fear of strangers on the in­ter­net has suc­cess­fully re­placed the dis­ci­pli­nary ap­pa­ra­tus more com­monly held by re­li­gious or con­ser­v­a­tive doc­trine. Even when, as in my ex­pe­ri­ence in the sa­lon, I am act­ing in the pri­vacy of my own body, some­one is al­ways there watch­ing, ready to in­ter­pret my ac­tions, prob­lema­tize them so as to share in the same sense of mag­i­cal think­ing, the same in­se­cu­ri­ties, and to be pun­ished for not be­ing in­se­cure in the same way.

It’s only in retrospect that I’m able to realize the toll that constant, nagging interaction with my devices and the internet has taken on my thinking life and my sex life. I remember very viscerally when I’d just come out of the closet as bisexual in 2016. When I embarked on a journey to find the kind of lover I wanted to be, my only experience with the world of queerness was online through memes, articles, and others’ social media presentation of themselves and of politics. Queer sex was not something that could be discovered through sensation, through physical interaction, but was rather a catalog of specific acts and roles one was already expected to know. I was terrified of making some kind of mistake, of being the wrong kind of bisexual, of misrepresenting myself in an offensive way (could I use the term “soft butch” if I wasn’t a lesbian?), of being exposed somehow as a fraud. When the time came for me to have sex for the first time, what should have been a joyous occasion was instead burdened with a sense of being watched. I could not let the natural processes of erotic discovery take their course, so caught up was I in judging myself from the perspective of strangers to whom I owed nothing.

But it wasn’t just a matter of queerness, either. When I hooked up with men, I could only perceive of sex the same way, not as situational but as a set of prescribed acts and scenes, many of which I wanted to explore. However, this time I interrogated these urges as being sociogenic in nature and somehow harmful to me, when they were, in fact, private, and I did not, in reality, feel harmed. Because I wanted, at one point in my life, to be tied up and gagged, the disempowering nature of such a want necessitated trying to justify it against invisible accusations with some kind of traumatogenic and immutable quality. Maybe it was because I was raped in college. Maybe I was just inherently submissive. One of the great ironies in the history of sex is that pathologization used to be a way of controlling sexual desire. (All are familiar with the many myths that masturbation would turn one blind.) Now it is a way of exempting oneself, of relinquishing control of one’s actions so as to absolve them of scrutiny. My little bondage moment couldn’t be problematic if it couldn’t be helped. It couldn’t be subjected to interrogation if there was something I could point to to say “it’s beyond my control, don’t judge me!” One day, however, I came to an important revelation: The reality was much simpler. It was a passing phase, a desire that originated with a specific man and lost its charm after I moved on from him. There wasn’t some deterministic quality in myself that made me like this. My desire was not fixed in nature. My sexual qualities were transient and not inborn. What aroused me was wonderfully, entirely situational.

A sit­u­a­tional eroti­cism is what is needed now, in our lit­er­al­ist times. It’s ex­haust­ing, how every­thing is so read­ily de­fined by types, acts, trau­mas, kinks, fetishes, pathol­ogy, and aes­thet­ics. To me, our predilec­tion for de­ter­min­ism is an ex­pected psy­cho­log­i­cal re­sponse to ex­ces­sive sur­veil­lance. A sit­u­a­tional eroti­cism de­cou­ples sen­sa­tion from nar­ra­tive and ty­pol­ogy. It al­lows us to feel with­out ex­cuse and to re­late our feel­ings to our im­me­di­ate em­bod­ied mo­ment, grounded in a fun­da­men­tal sense of per­sonal pri­vacy. While it is ad­mirable to try and un­der­stand our­selves and im­por­tant to pro­tect our­selves from harm and in­ves­ti­gate crit­i­cally the ways in which what we want may put us at risk of that harm — or at risk of do­ing harm to oth­ers — some­times de­sires just are, and they are not that way for long. Arousal is a mat­ter of the self, which takes place within the body, a space no one can see into. It is of­ten a mys­tery, a sur­prise, a dis­cov­ery. It can hap­pen at a small scale, say, the fris­son of two sets of fin­gers in one’s hair at once. It is beau­ti­ful, un­planned and does not judge it­self be­cause it is an in­ert sen­sa­tion, unim­bued with pre­med­i­tated mean­ing. This should lib­er­ate rather than frighten us. Maybe what it means does­n’t mat­ter. Maybe we don’t have to jus­tify it even to our­selves.

But in or­der to fa­cil­i­tate a re­turn to sit­u­a­tional eroti­cism, we need to kill the panop­ti­con in our heads. That means first killing the panop­ti­con we’ve built for oth­ers. There is no pur­pose in vin­dic­tive or thought­less ex­po­sure. Not every­thing needs to be sub­jected to pub­lic opin­ion, not every anec­dote is worth shar­ing, not every de­bate needs en­gage­ment, es­pe­cially those de­bates which have no ma­te­r­ial ba­sis to them, no ask, no fun­nel for all that en­ergy. We need to stop con­fus­ing vig­i­lan­tism with jus­tice and post­ing with pol­i­tics. That does not mean we stop the work that #MeToo started, but that re­venge is a weapon best uti­lized col­lec­tively against the en­e­mies of lib­er­a­tion. We need to pro­tect the vul­ner­a­ble from ex­ploita­tive tech­nolo­gies and prac­tices, re­peat­edly de­nounce their use, and work to­wards a world with­out sex­ual co­er­cion, dig­i­tal or oth­er­wise.

On an in­di­vid­ual level, we need to aban­don or re­shape our re­la­tion­ships with our phones and re­gain a sense of our own per­sonal and men­tal pri­vacy. It’s a mat­ter of ex­is­ten­tial, meta­phys­i­cal im­por­tance. Only when this de­cou­pling from our­selves and the me­di­ated per­for­mance of our­selves is com­plete, can we be­gin the process of re­turn­ing to our own bod­ies out there, in the world, with no one watch­ing or read­ing our thoughts ex­cept those we want to. The truth is, we are very afraid not of sex, but of ex­po­sure. Only when we are un­afraid can we be­gin to let de­sire flour­ish. Only when we re­turn to our­selves can we re­ally know what we want.

Kate Wagner is the ar­chi­tec­ture critic at The Nation. Her award-win­ning cul­tural writ­ing has been fea­tured in mag­a­zines rang­ing from The Baffler to the New Republic.

...

Read the original on lux-magazine.com »

5 382 shares, 17 trendiness

Molly

Molly is an in­de­pen­dent Signal fork for Android with im­proved fea­tures:

Extra theme that fol­lows your de­vice palette

When you are gone for a set pe­riod of time

New and bet­ter fea­tures to come

...

Read the original on molly.im »

6 375 shares, 87 trendiness

Leak confirms OpenAI is preparing ads on ChatGPT for public roll out

OpenAI is now internally testing ‘ads’ inside ChatGPT, a move that could redefine the web economy.

Up until now, the ChatGPT experience has been completely ad-free.

While there are pre­mium plans and mod­els, you don’t see GPT sell you prod­ucts or show ads. On the other hand, Google Search has ads that in­flu­ence your buy­ing be­hav­iour.

As spotted by Tibor on X, ChatGPT Android app 1.2025.329 beta includes new references to an “ads feature” with “bazaar content”, “search ad” and “search ads carousel.”

This move could dis­rupt the web econ­omy, as what most peo­ple don’t un­der­stand is that GPT likely knows more about users than Google.

For example, OpenAI could create personalised ads on ChatGPT that promote products you really want to buy. It might also slip ads into search results, similar to Google Search ads.

The leak sug­gests that ads will ini­tially be lim­ited to the search ex­pe­ri­ence only, but this may change in the fu­ture.

ChatGPT has roughly 800 mil­lion peo­ple us­ing it every week, up from 100 mil­lion weekly users in November 2023 and about 300 mil­lion weekly users in late 2024.

An OpenAI-backed study es­ti­mated 700 mil­lion users send­ing 18 bil­lion mes­sages per week by July 2025, which lines up with this growth, and other an­a­lysts now peg traf­fic at around 5–6 bil­lion vis­its per month.

GPT handles about 2.5 bil­lion prompts a day, and India has be­come the sin­gle biggest user base, ahead of the US.

ChatGPT has every­thing it needs for ads to suc­ceed. What do you think?

...

Read the original on www.bleepingcomputer.com »

7 362 shares, 16 trendiness

How good engineers write bad code at big companies

Every cou­ple of years some­body no­tices that large tech com­pa­nies some­times pro­duce sur­pris­ingly sloppy code. If you haven’t worked at a big com­pany, it might be hard to un­der­stand how this hap­pens. Big tech com­pa­nies pay well enough to at­tract many com­pe­tent en­gi­neers. They move slowly enough that it looks like they’re able to take their time and do solid work. How does bad code hap­pen?

I think the main reason is that big companies are full of engineers working outside their area of expertise. The average big tech employee stays for only a year or two. In fact, big tech compensation packages are typically designed to put a four-year cap on engineer tenure: after four years, the initial share grant is fully vested, causing engineers to take what can be a 50% pay cut. Companies do extend temporary yearly refreshes, but this obviously incentivizes engineers to go find another job where they don’t have to wonder whether they’re going to get the other half of their compensation each year.

If you count in­ter­nal mo­bil­ity, it’s even worse. The longest I have ever stayed on a sin­gle team or code­base was three years, near the start of my ca­reer. I ex­pect to be re-orged at least every year, and of­ten much more fre­quently.

However, the average tenure of a codebase in a big tech company is a lot longer than that. Many of the services I work on are a decade old or more, and have had many, many different owners over the years. That means many big tech engineers are constantly “figuring it out”. A pretty high percentage of code changes are made by “beginners”: people who have onboarded to the company, the codebase, or even the programming language in the past six months.

To some extent, this problem is mitigated by “old hands”: engineers who happen to have been in the orbit of a particular system for long enough to develop real expertise. These engineers can give deep code reviews and reliably catch obvious problems. But relying on “old hands” has two problems.

First, this process is entirely informal. Big tech companies make surprisingly little effort to develop long-term expertise in individual systems, and once they’ve got it they seem to barely care at all about retaining it. Often the engineers in question are moved to different services, and have to either keep up their “old hand” duties on an effectively volunteer basis, or abandon them and become a relative beginner on a brand new system.

Second, ex­pe­ri­enced en­gi­neers are al­ways over­loaded. It is a busy job be­ing one of the few en­gi­neers who has deep ex­per­tise on a par­tic­u­lar ser­vice. You don’t have enough time to per­son­ally re­view every soft­ware change, or to be ac­tively in­volved in every de­ci­sion-mak­ing process. Remember that you also have your own work to do: if you spend all your time re­view­ing changes and be­ing in­volved in dis­cus­sions, you’ll likely be pun­ished by the com­pany for not hav­ing enough in­di­vid­ual out­put.

Putting all this to­gether, what does the me­dian pro­duc­tive en­gi­neer at a big tech com­pany look like? They are usu­ally:

* competent enough to pass the hiring bar and be able to do the work, but either

  * working on a codebase or language that is largely new to them, or

  * trying to stay on top of a flood of code changes while also juggling their own work.

They are al­most cer­tainly work­ing to a dead­line, or to a se­ries of over­lap­ping dead­lines for dif­fer­ent pro­jects. In other words, they are try­ing to do their best in an en­vi­ron­ment that is not set up to pro­duce qual­ity code.

That’s how “obviously” bad code happens. For instance, a junior engineer picks up a ticket for an annoying bug in a codebase they’re barely familiar with. They spend a few days figuring it out and come up with a hacky solution. One of the more senior “old hands” (if they’re lucky) glances over it in a spare half-hour, vetoes it, and suggests something slightly better that would at least work. The junior engineer implements that as best they can, tests that it works, it gets briefly reviewed and shipped, and everyone involved immediately moves on to higher-priority work. Five years later somebody notices this and thinks “wow, that’s hacky - how did such bad code get written at such a big software company?”

I have writ­ten a lot about the in­ter­nal tech com­pany dy­nam­ics that con­tribute to this. Most di­rectly, in Seeing like a soft­ware com­pany I ar­gue that big tech com­pa­nies con­sis­tently pri­or­i­tize in­ter­nal leg­i­bil­ity - the abil­ity to see at a glance who’s work­ing on what and to change it at will - over pro­duc­tiv­ity. Big com­pa­nies know that treat­ing en­gi­neers as fun­gi­ble and mov­ing them around de­stroys their abil­ity to de­velop long-term ex­per­tise in a sin­gle code­base. That’s a de­lib­er­ate trade­off. They’re giv­ing up some amount of ex­per­tise and soft­ware qual­ity in or­der to gain the abil­ity to rapidly de­ploy skilled en­gi­neers onto what­ever the prob­lem-of-the-month is.

I don’t know if this is a good idea or a bad idea. It cer­tainly seems to be work­ing for the big tech com­pa­nies, par­tic­u­larly now that how fast can you pivot to some­thing AI-related” is so im­por­tant. But if you’re do­ing this, then of course you’re go­ing to pro­duce some gen­uinely bad code. That’s what hap­pens when you ask en­gi­neers to rush out work on sys­tems they’re un­fa­mil­iar with.

Individual engineers are entirely powerless to alter this dynamic. This is particularly true in 2025, when the balance of power has tilted away from engineers and towards tech company leadership. The most you can do as an individual engineer is to try and become an “old hand”: to develop expertise in at least one area, and to use it to block the worst changes and steer people towards at least minimally-sensible technical decisions. But even that is often swimming against the current of the organization, and if inexpertly done can cause you to get PIP-ed or worse.

I think a lot of this comes down to the dis­tinc­tion be­tween pure and im­pure soft­ware en­gi­neer­ing. To pure en­gi­neers - en­gi­neers work­ing on self-con­tained tech­ni­cal pro­jects, like a pro­gram­ming lan­guage - the only ex­pla­na­tion for bad code is in­com­pe­tence. But im­pure en­gi­neers op­er­ate more like plumbers or elec­tri­cians. They’re work­ing to dead­lines on pro­jects that are rel­a­tively new to them, and even if their tech­ni­cal fun­da­men­tals are im­pec­ca­ble, there’s al­ways some­thing about the par­tic­u­lar setup of this sit­u­a­tion that’s awk­ward or sur­pris­ing. To im­pure en­gi­neers, bad code is in­evitable. As long as the over­all sys­tem works well enough, the pro­ject is a suc­cess.

At big tech com­pa­nies, en­gi­neers don’t get to de­cide if they’re work­ing on pure or im­pure en­gi­neer­ing work. It’s not their code­base! If the com­pany wants to move you from work­ing on data­base in­fra­struc­ture to build­ing the new pay­ments sys­tem, they’re fully en­ti­tled to do that. The fact that you might make some mis­takes in an un­fa­mil­iar sys­tem - or that your old col­leagues on the data­base in­fra team might suf­fer with­out your ex­per­tise - is a de­lib­er­ate trade­off be­ing made by the com­pany, not the en­gi­neer.

It’s fine to point out ex­am­ples of bad code at big com­pa­nies. If noth­ing else, it can be an ef­fec­tive way to get those spe­cific ex­am­ples fixed, since ex­ecs usu­ally jump at the chance to turn bad PR into good PR. But I think it’s a mis­take to at­tribute pri­mary re­spon­si­bil­ity to the en­gi­neers at those com­pa­nies. If you could wave a magic wand and make every en­gi­neer twice as strong, you would still have bad code, be­cause al­most no­body can come into a brand new code­base and quickly make changes with zero mis­takes. The root cause is that most big com­pany en­gi­neers are forced to do most of their work in un­fa­mil­iar code­bases.

edit: this post got lots of com­ments on both Hacker News and lob­ste.rs.

It was surprising to me that many commenters find this point of view unpleasantly nihilistic. I consider myself fairly optimistic about my work. In fact, I meant this post as a rousing defence of big tech software engineers from their critics! Still, I found this response blog post to be an excellent articulation of the “this is too cynical” position, and will likely write a followup post about it soon. If you can’t wait, I wrote a bit on this topic at the start of 2025 in Is it cynical to do what your manager wants?.

Some Hacker News com­menters had al­ter­nate the­o­ries for why bad code hap­pens: lack of mo­ti­va­tion, de­lib­er­ately de­mor­al­iz­ing en­gi­neers so they won’t union­ize, or just purely op­ti­miz­ing for speed. I don’t find these com­pelling, based on my own ex­pe­ri­ence. Many of my col­leagues are highly mo­ti­vated, and I just don’t be­lieve any tech com­pany is de­lib­er­ately try­ing to make its en­gi­neers de­mor­al­ized and un­happy.

A few readers disagreed with me about RSUs providing an incentive to leave, because their companies give stock refreshers. I don’t know about this. I get refreshers too, but if they’re not in the contract, I don’t think it matters: the company can decide at will to withhold 50% of your comp simply by pausing the refreshers, which is itself an incentive to move jobs and lock in another four years of guaranteed grants.

...

Read the original on www.seangoedecke.com »

8 346 shares, 14 trendiness

So you wanna build a local RAG?

When we launched Skald, we wanted it not only to be self-hostable, but also to be runnable without sending any data to third parties.

With LLMs getting better and better, privacy-sensitive organizations shouldn’t have to choose between being left behind by forgoing frontier models and abandoning their commitment to (or legal requirement for) data privacy.

So here’s what we did to sup­port this use case and also some bench­marks com­par­ing per­for­mance when us­ing pro­pri­etary APIs vs self-hosted open-source tech.

A ba­sic RAG usu­ally has the fol­low­ing core com­po­nents:

And most times it also has these as well:

What that means is that when you’re look­ing to build a fully lo­cal RAG setup, you’ll need to sub­sti­tute what­ever SaaS providers you’re us­ing for a lo­cal op­tion for each of those com­po­nents.

Here’s a table with some ex­am­ples of what we might use in a sce­nario where we can use third-party Cloud ser­vices and one where we can’t:

Do note that running something locally does not mean it needs to be open-source, as one could pay for a license to self-host proprietary software. But at Skald our goal was to use fully open-source tech, which is what I’ll be covering here.

The table above is far from cov­er­ing all avail­able op­tions on both columns, but ba­si­cally it gives you an in­di­ca­tion of what to re­search into in or­der to pick a tool that works for you.
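To make that concrete, here’s a rough sketch of the kinds of substitutions involved, expressed as a small mapping. The pairings are drawn from components mentioned elsewhere in this post; the “cloud” entries for the vector DB and document parsing are generic placeholders, not specific product recommendations.

```python
# Illustrative cloud -> local substitutions for each RAG component.
# Pairings are examples drawn from this post, not an exhaustive comparison;
# the "cloud" entries for vector_db and document_parsing are placeholders.
SUBSTITUTIONS = {
    "embeddings": {
        "cloud": "Voyage AI (voyage-3-large)",
        "local": "Sentence Transformers (all-MiniLM-L6-v2 or bge-m3)",
    },
    "reranker": {
        "cloud": "Voyage AI (rerank-2.5)",
        "local": "cross-encoder (ms-marco-MiniLM-L6-v2) or bge-reranker-v2-m3",
    },
    "llm": {
        "cloud": "Claude via API",
        "local": "GPT-OSS 20B served by llama.cpp",
    },
    "vector_db": {
        "cloud": "a managed vector database",
        "local": "self-hosted Postgres + pgvector",
    },
    "document_parsing": {
        "cloud": "a hosted parsing API",
        "local": "Docling via docling-serve",
    },
}

for component, options in SUBSTITUTIONS.items():
    print(f"{component}: {options['cloud']}  ->  {options['local']}")
```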

As with any­thing, what works for you will greatly de­pend on your use case. And you need to be pre­pared to run a few more ser­vices than you’re used to if you’ve just been call­ing APIs.

For our lo­cal stack, we went with the eas­i­est setup for now to get it work­ing (and it does! see writeup on this lower down) but will be run­ning bench­marks on all other op­tions to de­ter­mine the best pos­si­ble setup.

This is what we have to­day:

Vector DB: Postgres + pgvec­tor. We al­ready use Postgres and did­n’t want to bun­dle an­other ser­vice into our stack, but this is con­tro­ver­sial and we will be run­ning bench­marks to make a bet­ter in­formed de­ci­sion here. Note that pgvec­tor will serve a lot of use cases well all the way up to hun­dreds of thou­sands of doc­u­ments, though.
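For what it’s worth, the pgvector side needs very little: an extension, a vector column sized to your embedding model, and a nearest-neighbour query. A minimal sketch (table and column names here are made up; actually running these statements requires a Postgres instance with pgvector installed, driven through e.g. psycopg):

```python
# Sketch of the SQL a pgvector-backed chunk store needs.
# Table and column names are hypothetical.
CREATE_EXTENSION = "CREATE EXTENSION IF NOT EXISTS vector;"

# all-MiniLM-L6-v2 produces 384-dimensional embeddings, hence vector(384).
CREATE_TABLE = """
CREATE TABLE chunks (
    id bigserial PRIMARY KEY,
    document_id bigint,
    content text,
    embedding vector(384)
);
"""

# <=> is pgvector's cosine-distance operator; ordering by it ascending
# returns the nearest chunks first.
TOP_K_QUERY = """
SELECT id, content
FROM chunks
ORDER BY embedding <=> %(query_embedding)s
LIMIT %(k)s;
"""
```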

Vector em­bed­dings: Users can con­fig­ure this in Skald and we use Sentence Transformers (all-MiniLM-L6-v2) as our de­fault (solid all-around per­former for speed and re­trieval, English-only). I also ran Skald with bge-m3 (larger, multi-lan­guage) and share the re­sults later in this post.
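Under the hood, retrieval with either embedding model comes down to ranking chunks by vector similarity. As a dependency-free sketch, cosine similarity over plain Python lists looks like this (pgvector’s `<=>` operator computes the distance form of the same measure):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # 0.0
```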

LLM: We don’t even bun­dle a de­fault with Skald and it’s up to the users to run and man­age this. I tested our setup with GPT-OSS 20B on EC2 (results shown be­low).

Reranker: Users can also con­fig­ure this in Skald, and the de­fault is the Sentence Transformers cross en­coder (solid, English-only). I’ve also used bge-reranker-v2-m3 and mmarco-mMiniLMv2-L12-H384-v1 which of­fer multi-lin­gual sup­port.

Document pars­ing: There is­n’t much of a ques­tion on this one. We’re us­ing Docling. It’s great. We run it via do­cling-serve.

So the main goal here was first to get some­thing work­ing then en­sure it worked well with our plat­form and could be eas­ily de­ployed. From here we’ll be run­ning ex­ten­sive bench­marks and work­ing with our clients to pro­vide a solid setup that both per­forms well but is also not a night­mare to de­ploy and man­age.

From that per­spec­tive, this was a great suc­cess.

Deploying a pro­duc­tion in­stance of Skald with this whole stack took me 8 min­utes, and that comes bun­dled with the vec­tor data­base (well, Postgres), a rerank­ing and em­bed­ding ser­vice, and Docling.

The only thing I needed to run sep­a­rately was the LLM, which I did via llama.cpp.
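llama.cpp’s bundled server exposes an OpenAI-compatible chat endpoint, so wiring the LLM into the pipeline is mostly a matter of packing the retrieved chunks into a prompt. A minimal sketch, where the URL, prompt wording, and function names are my own assumptions rather than anything Skald-specific:

```python
import json
from urllib import request

# llama.cpp's built-in server exposes an OpenAI-compatible chat endpoint;
# the host/port here are assumptions for a default local instance.
LLAMA_SERVER = "http://localhost:8080/v1/chat/completions"

def build_payload(question: str, context_chunks: list[str]) -> dict:
    """Pack retrieved chunks and the user question into a chat request."""
    context = "\n\n".join(context_chunks)
    return {
        "messages": [
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        "temperature": 0.2,
    }

def ask(question: str, context_chunks: list[str]) -> str:
    """Send the request to the local llama.cpp server and return the answer text."""
    req = request.Request(
        LLAMA_SERVER,
        data=json.dumps(build_payload(question, context_chunks)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```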

Having got­ten this sorted, I im­ported all the con­tent from the PostHog web­site [1] and set up a tiny dataset [2] of ques­tions and ex­pected an­swers in­side of Skald, then used our Experiments fea­ture to run the RAG over this dataset.

I ex­plic­itly kept the topK val­ues re­ally high (100 for the vec­tor search and 50 for post-rerank­ing), as I was mostly test­ing for ac­cu­racy and wanted to see the per­for­mance when ques­tions re­quired e.g. ag­gre­gat­ing con­text over 15+ doc­u­ments.
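The retrieve-then-rerank flow those topK values describe can be sketched with stub scorers standing in for the embedding model and the cross-encoder, just to show the shape of the two stages:

```python
def retrieve_then_rerank(query, chunks, embed_score, rerank_score,
                         retrieve_k=100, rerank_k=50):
    """Two-stage retrieval: cheap vector scoring over everything,
    then a more expensive reranker over the survivors."""
    # Stage 1: rank all chunks by (cheap) embedding similarity, keep top retrieve_k.
    candidates = sorted(chunks, key=lambda c: embed_score(query, c),
                        reverse=True)[:retrieve_k]
    # Stage 2: re-order the candidates with the (expensive) reranker, keep top rerank_k.
    return sorted(candidates, key=lambda c: rerank_score(query, c),
                  reverse=True)[:rerank_k]

# Toy scorers standing in for the real embedding model and cross-encoder.
chunks = [f"chunk-{i}" for i in range(200)]
result = retrieve_then_rerank(
    "query", chunks,
    embed_score=lambda q, c: int(c.split("-")[1]) % 101,
    rerank_score=lambda q, c: -int(c.split("-")[1]),
)
print(len(result))  # 50
```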

So with­out any more de­lay, here are the re­sults of my not-very-sci­en­tific at all bench­mark us­ing the ex­per­i­men­ta­tion plat­form in­side of Skald.

This is our default Cloud setup. We use voyage-3-large and rerank-2.5 from Voyage AI as our embedding and reranking models respectively, and we default to Claude 3.7 Sonnet for responses (users can configure the model though).

Our LLM-as-a-Judge gave an av­er­age score of 9.45 to the re­sponses, and I ba­si­cally agree with the as­sess­ment. All an­swers were cor­rect, with one miss­ing a few ex­tra bits of con­text.

With the con­trol ex­per­i­ment done, I then moved on to a setup where I kept Voyage as the em­bed­dings provider and reranker, and then used GPT-OSS 20B run­ning on a llama.cpp server on a g5.2xlarge EC2 in­stance as the LLM.

The goal here was to see how well the open-source LLM model it­self stacked up against a fron­tier model ac­cessed via API.

And it did great!

We don’t yet sup­port LLM-as-a-Judge on fully lo­cal de­ploy­ments, so the only score we have here is mine. I scored the an­swers an av­er­age of 9.18 and they were all cor­rect, with two of them just miss­ing a few bits of in­for­ma­tion or high­light­ing less rel­e­vant in­for­ma­tion from the con­text.

Lastly, it was time for the mo­ment of truth: run­ning a fully lo­cal setup.

For this I ran two tests:

The most pop­u­lar open-source mod­els are all-MiniLM-L6-v2 for em­bed­dings and ms-marco-MiniLM-L6-v2 as the reranker, so I used those for my first bench­mark.

Here the av­er­age score was 7.10. Not bad, but def­i­nitely not great. However, when we dig into the re­sults, we can get a bet­ter un­der­stand­ing of how this setup fails.

Basically, it got all point queries right, which are ques­tions where the an­swer is some­where in the mess of doc­u­ments, but can be found from one spe­cific place.

Where it failed was:

* Non-English query: the embeddings model and the reranker are English-based, so my question in Portuguese obviously got no answer

* An am­bigu­ous ques­tion with very lit­tle con­text (“what’s ch”)

* Aggregating in­for­ma­tion from mul­ti­ple doc­u­ments/​chunks e.g. it only found 5 out of PostHog’s 7 fund­ing rounds, and only a sub­set of the PostHog com­peti­tors that of­fer ses­sion re­play (as men­tioned in the source data)

In my view, this is good news. That means that the de­fault op­tions will go a long way and should give you very good per­for­mance if your use case is only do­ing point queries in English. The other great thing is that these mod­els are also fast.

Now, if you need to han­dle am­bi­gu­ity bet­ter, or han­dle ques­tions in other lan­guages, then this setup is sim­ply not for you.

The next test I did used bge-m3 as the em­bed­dings model and mmarco-mMiniLMv2-L12-H384-v1 as the reranker. The em­bed­dings model is sup­pos­edly much bet­ter than the one used in the pre­vi­ous test and is also multi-lin­gual. The reranker on the other hand uses the same cross-en­coder from the pre­vi­ous test as the base model but also adds multi-lin­gual sup­port. The more stan­dard op­tion here would have been the much more pop­u­lar bge-reranker-v2-m3 model, but I found it to be much slower. I in­tend to tweak my setup and test it again, how­ever.

Anyway, onto the re­sults! I scored it 8.63 on av­er­age, which is very good. There were no com­plete fail­ures, and it han­dled the ques­tion in Portuguese well.

The mis­takes it made were:

* This new setup also did not do the best job at ag­gre­gat­ing in­for­ma­tion, miss­ing 2 of PostHog’s fund­ing rounds, and a cou­ple of its ses­sion re­play com­peti­tors

* It also an­swered a ques­tion cor­rectly, but added in­cor­rect ad­di­tional con­text af­ter it

So overall it performed quite well. Again, what we saw is that the main problem arises when the context needed for the response is scattered across multiple documents. There are various techniques to help with this and we’ll be trialing some soon! They haven’t been needed on the Cloud version because better models save you from having to add complexity for minimal performance gains, but as we’re focused on building a really solid setup for local deploys, we’ll be looking into this more and more.

I hope this writeup has given you at least some insight and context into building a local RAG. It does work, it can serve a lot of use cases, and the setup should only get better as a) models improve and b) more open-source models become available across the board, both of which we already seem to be trending towards.

As for us at Skald, we intend to polish this setup further in order to serve even more use cases really well, and to soon publish more legitimate benchmarks for models in the open-source space, from LLMs to rerankers.

If you’re a com­pany that needs to run AI tool­ing in air-gapped in­fra­struc­ture, let’s chat — feel free to email me at yakko [at] us­eskald [dot] com.

Lastly, if you want to get in­volved, feel free to chat to us over on our GitHub repo (MIT-licensed) or catch us on Slack.

[1] I used the PostHog web­site here be­cause the web­site con­tent is MIT-licensed (yes, wild) and read­ily-avail­able as mark­down on GitHub and hav­ing worked there I know a lot of an­swers off the top of my head mak­ing it a great dataset of ~2000 doc­u­ments that I know well.

[2] The ques­tions and an­swers dataset I used for the ex­per­i­ments was the fol­low­ing:

...

Read the original on blog.yakkomajuri.com »

9 261 shares, 18 trendiness

System 7 natively boots on the Mac mini G4!


There is no Mac in OS X

(And Mac OS 8!)

Hey, guys!

Surely y’all know and have en­joyed Mac OS 9.2.2 boot­ing and beau­ti­fully-run­ning on all four Mac mini G4 mod­els for close to 8 years now. (Wow!)

Well, that was one mas­sive rev­o­lu­tion…

… But most of us did not think we would live to see the day New World ROM machines, even more so the likes of the Mac mini G4, would NATIVELY boot System 7:

(Gotta love it try­ing to dis­play 1 GB RAM ca­pac­ity.)

Before your eye­balls leave your eye­sock­ets com­pletely, I ought to warn that there’s still much to be sorted out in this, es­pe­cially sound, video and net­work­ing (the usual sus­pects). In other words, your mileage may vary, so keep ex­pec­ta­tions in check!

OK, so HOW in the WORLD is any of this pos­si­ble?

It turned out New World ROM Macs had a cousin born out of the clone program (until the usual villain, Steve Jobs, came and killed it): an architecture called CHRP (pronounced “chirp”). It was the successor to PReP, but, unlike PReP, Mac OS was also going to be officially bootable on it. Close to no CHRP machines ever saw the light of day, thanks to Jobs’ return. Nonetheless, Apple internally developed Mac OS 7.6 ~ 8.0 for CHRP systems before it got axed. It’s just that they never released it, but the development was done regardless. In October 2025, it turned out someone had preserved some of these Mac OS versions, which were then acquired and shared with the world. (Macintosh Garden link, archive.org link.)

Although CHRP was left to die, the so-called New World ROM Macs in­her­ited much of its ar­chi­tec­ture and de­sign. As you prob­a­bly know, these Macs rely on an ex­tra sys­tem file called Mac OS ROM, whereas Old World ROM Macs do not need it, and can use their own ac­tual ROM to get Mac OS go­ing. This meant any Mac OS ver­sion un­aware of the con­cept of a Mac OS ROM file could not just sim­ply boot in a New World ROM Mac nor­mally. People were able to boot Mac OS ver­sions as low as 8.1, but not any lower, and that too only for the very first few New World ROM Macs, but none of the later ones, which in­creas­ingly had a higher and higher min­i­mum OS ver­sion.

But not any­more, as the fol­low­ing ma­jor events hap­pened:

- The re­cent Mac OS 8.0 CHRP leaks pro­vided an ear­lier ROM file that, it turns out, al­lows reg­u­lar Mac OS 8.0 to boot, as well. Or, al­ter­na­tively, the Mac OS ROM file that al­ways worked with Mac OS 8.1 also worked on these Mac OS 8.0 CHRP re­leases. (Exact de­tails are fuzzy in my mem­ory by now, so some­one else might want to cor­rect me if I got some­thing wrong.)

- The recent Mac OS 7.6 CHRP leak provided an additional System Enabler file, which could be exploited for loading Mac OS ROM files. I forget if that’s how it worked out-of-the-box, or if a bit of hacking to the System Enabler was required for that; however, what I do remember clearly is that, while the System Enabler was hardcoded so that artificially no OS earlier than 7.6 could use it, the OS version check could be patched out of it, so that System 7.5.x (and potentially earlier) can also use it.

In other words, this file is the rea­son that ear­lier Mac OS ver­sions can make use of the Mac OS ROM file, thus bring­ing Mac OS 7.6.1 and ear­lier po­ten­tially to ALL New World ROM Macs!

(Trivia tidbit: Apparently this enabler was also present in certain archives of the Mac OS 8.0 betas from when it was still known as “Mac OS 7.7”. Oops! This thing was right under our nose all this while!)

- Of course, as hinted at previously, a System Enabler _alone_ is NOT enough to boot System 7 and the like, when even much newer systems that were already aware of the Mac OS ROM file could not boot. The newer the model of the New World ROM Mac, the less you could actually “go back”. The reason is simple: Mac OS ROM files, over time through their various versions, would get new features added, BUT would also remove older ones which were required by older OS versions. The solution? Using ELN’s great Mac OS ROM patching tools (plus other tools of his own), “Rairii” AKA “Wack0”, known for his amazing PPC Windows NT 3.51 / NT 4.0 project on PowerMacs and the Nintendo GC / Wii / Wii U, analyzed many of these Mac OS ROM files, and fixed + patched + stitched together new Mac OS ROM files that attempt to keep ALL the old features that were removed AND all the new features that were added. In other words, the ultimate Mac OS ROM file that boots everything and runs everything (roughly speaking). He is also the one who figured out and hacked the System Enabler to also accept OSes earlier than Mac OS 7.6.

Keep in mind, how­ever, that this ef­fort es­sen­tially al­lows Macs that are al­ready able to boot SOME ver­sion of Mac OS to ALSO boot older ver­sions. But if a given ma­chine can­not boot ANY Mac OS ver­sion, such as the two DLSD PowerBook G4s (15″, 17″), these patches can­not do any­thing about that: Their in­com­pat­i­bil­i­ties need to be ad­dressed first and sep­a­rately.

One more interesting thing to note about the similarity between CHRP systems and New World ROM Macs: If you check ANY Mac OS ROM file to see its TYPE and CREATOR codes, you will see they are “tbxi” and, you guessed it, “chrp”, respectively. I couldn’t believe “chrp” was in ALL the Mac OS ROM files all these years!

Where can I get ahold of this EPIC stuff?????

Rairii’s “super” ROMs are available on this GitHub repository, under releases. You may also fetch the patched System Enabler for Mac OS 7.6.1 and earlier from there, and place it in the System Folder. Make sure to download the files from the latest release there.

Note that he ap­plied his patches to 3 dif­fer­ent ver­sions of the (US) ROMs:

- 10.2.1 with CPU Software 5.9: The “latest and greatest” Mac OS ROM file of all Mac OS. For reference, this is also the ROM version that the 1.628 GB max RAM Mac OS ROM we have was based on (thus going beyond the 1.5 GB limit), although do note that the RAM limit break patches are NOT included in this, at least not yet as of the time of writing.

- 2.5.1: A much ear­lier ver­sion of the ROM, but still new enough to sup­port USB. See the GitHub page for de­tails.

- 1.7.1: A very early ROM, which can be well-lever­aged by very early New World ROM Macs. See the GitHub page for de­tails.

Note you need ROM ver­sion 9.1 or higher to use ATA-6 AKA Ultra ATA/100 AKA Kauai dri­vers, which are es­sen­tial on the likes of the Mac mini G4 and the MDD. Special notes for the Mac mini G4 are fur­ther down.

What is the COMPLETE list of Mac OS ver­sions that now boot?

To be ex­act, this is the com­plete list of OSes I have at­tempted, all on the Mac mini G4 1.5GHz model, with the fol­low­ing re­sults:

- System 6.0.8: No boot. You get a Happy Mac, followed by a blinking question mark in a floppy icon. (Note: Although this very attempt is UTTERLY insane for multiple technical reasons, it might not be AS seemingly-impossible as one may think, as the 68k emulator resides within the Mac OS ROM file.)

- System 7.0: No boot. You get a Happy Mac, but then a warn­ing win­dow pops up say­ing System 7.0 can­not boot on this com­puter.

- System 7.1.2: No boot. You get a Happy Mac, but then a warn­ing win­dow pops up say­ing System 7.1 can­not boot on this com­puter.

- System 7.5: BOOTS AND IS STABLE. It requires you to hold shift to turn Extensions (and Control Panels / INITs) off, though, or to get rid of the “Mouse” Control Panel (and possibly more). The system is surprisingly stable! I tested the British version of this one, as Apple’s Mac OS Anthology discs did not include the US installers, for some very slacker-y reason.

- System 7.5.2: Boots, but very bro­ken, close to noth­ing works. It could be be­cause System 7.5.2 was al­ways VERY ma­chine-spe­cific, and is ap­par­ently one of the most bro­ken ver­sions of Mac OS of ALL time, re­gard­less. The ma­chine-spe­cific en­ablers, and other things, might be what is mak­ing it so un­sta­ble.

- System 7.5.3: BOOTS AND IS STABLE. It requires you to hold shift to turn Extensions (and Control Panels / INITs) off, though, or to get rid of the “Mouse” Control Panel (and possibly more). The system is surprisingly stable!

- Mac OS 7.6: BOOTS AND IS STABLE. Holding shift is not required here. What else can I say? “It works”.

- Mac OS 8.1: BOOTS AND IS STABLE. Holding shift is not re­quired here, ei­ther. Behaves much the same as the oth­ers, ex­cept we now have HFS+ by de­fault. Still, it did NOT like me hav­ing a 940 GB HFS+ par­ti­tion, and prompted me to ei­ther eject it or for­mat it. (To be fair, older OSes tried to do that, too, but Mac OS 8.1 was THE OS to _officially_ be able to han­dle HFS+ prop­erly, so there are no ex­cuses for it to fail here. Mac OS 9.2 ~ 9.2.2 all work per­fectly with it.)

- Mac OS 8.5: No boot. Rather, it seems like it WOULD boot, but starting with Mac OS 8.5, Mac OS now always checks to see if the machine you are booting from is within a list of Apple-endorsed machine IDs for the given Mac OS version. In other words, Mac OS 8.5 does not know what the Mac mini G4 is, nor what a G4 Cube is (our Mac mini G4 ROM file makes the mini pretend to be the latter). It seems it should be possible to patch out the machine check. According to Rairii, this should be doable by disabling such a check in the “boot” resource in the Resource Fork of the System file, in ID 3 (also known as “boot3”). For Mac OS 8.6, it seems like this check happens at the end of boot3, wherever a check for machine ID 406 is located; after it’s detected, the code checks to see if the exact Mac model is whitelisted or not.

- Mac OS 8.5.1: No boot. All that ap­plies to Mac OS 8.5 also ap­plies to Mac OS 8.5.1.

- Mac OS 8.6: No boot. It crashes dur­ing the start screen, when the load­ing bar ap­pears, but be­fore the first ex­ten­sion gets to load. See the top-left cor­ner of the pic­ture for a glitchy vi­sual ar­ti­fact. Same hap­pens if you try to boot with Extensions off.

- Mac OS 9.0.4: No boot. It crashes dur­ing the start screen, when the load­ing bar ap­pears, but be­fore the first ex­ten­sion gets to load. Same hap­pens if you try to boot with Extensions off. Exact same symp­toms as when try­ing to boot Mac OS 8.6 at least on this mini model, in­clud­ing the vi­sual ar­ti­fact on the top-left cor­ner.

- Mac OS 9.1: No boot. It crashes dur­ing the start screen, when the load­ing bar ap­pears, but be­fore the first ex­ten­sion gets to load. Same hap­pens if you try to boot with Extensions off. Exact same symp­toms as when try­ing to boot Mac OS 8.6 and Mac OS 9.0.4 at least on this mini model, in­clud­ing the vi­sual ar­ti­fact on the top-left cor­ner.

- Mac OS 9.2 ~ 9.2.2: BEST OS EVER, BOOTS AND RUNS BEAUTIFULLY. Nuff said.

Note that, although I describe many of these as "stable", I mean you can use much of the system normally (sound/video/networking aside) without it crashing or misbehaving too badly; that is not to say everything works, because that is just not the case. For example, when present, avoid opening the Apple System Profiler unless you want a massive crash as it struggles to profile and gather all the information about your system. Some other apps or Control Panels might either not work, or work up to a certain point and then freeze, requiring you to Force Quit the Finder to keep going. And so on.

As you can see, I did not yet try System 7.5.5, Mac OS 7.6.1 or Mac OS 8.0. That's because they most likely behave exactly like their neighbouring versions. But feel free to confirm.

Most non-mini systems should be able to boot Mac OS 8.6 ~ Mac OS 9.1 just fine. A "Mac OS 8.6 Enabler", so to speak, by LightBulbFun, can be renamed as e.g. "Sawteeth" and put inside the System Folder of some machines that cannot boot Mac OS 8.6 normally, so that they can then boot it. It is actually a Mac OS ROM file, but in this case it functions as a complementary helper file to aid the actual Mac OS ROM file. If you'd like, check here for more info. I have attached "Sawteeth.bin" to this post for convenience. LightBulbFun first shared it on this post, specifically through this MEGA link.

Most non-mini sys­tems should also be able to boot Mac OS 8.5 and 8.5.1, es­pe­cially on G3s and ear­lier. Some G4 Macs might need to spoof the Mac model in Open Firmware (or some other Forth script added to ROM) to boot, though, or patch the check out like I men­tioned for the mini ear­lier. The rea­son the mini does­n’t have the spoof­ing as an op­tion is that any spoof­ing in OF would be over­writ­ten by its own spe­cial­ized Mac OS ROM, which spoofs a G4 Cube, which is clearly not in the whitelist of sup­ported ma­chines for Mac OS 8.5 and 8.5.1.

Also note that the mini behaves as reported above with Mac OS 8.6 with or without this "8.6 enabler" file (and with or without the System Enabler for Mac OS 7.6.1 and earlier; neither seems to get in the way of later, or earlier, OSes).

Most importantly, I have not yet attempted to identify the latest versions of each Control Panel and Extension for each of these OSes. Doing so would surely help a lot, and perhaps address quite a number of these problems. The more people chime in on this effort, the better! Imagine if we had a proper "Mac mini G4 System 7.5.5" CD, then an "MDD Mac OS 8.5.1" CD, then an "iBook G3 Mac OS 7.6.1" CD, and so on. Everyone with a G3 or G4 Mac can help by trying things out!

Namely, some­thing akin to MacTron’s ef­forts high­light­ing the lat­est Extensions for Mac OS 9.2.2 and Mac OS 8.6 like this, but also for every other Mac OS ver­sion:

But how did you get the mini to boot? It re­quires its own spe­cial ROM!

Indeed it does! All credit goes to ELN and all of those who helped him on Mac OS 9 Lives!: you can sim­ply use his tool­ing (which was also very use­ful for Rairii) to re-ap­ply the Mac-mini-G4-specific ROM patches to Rairii’s lat­est 10.2.1 ROM, and voila! It works as well as you would hope it to!

You can even use the resulting ROM for Mac OS 9.2.2 as well, even though you don't have to. Originally, the Mac mini G4 ROMs as seen in RossDarker's Mac mini G4 CDs versions 8 and 9 (AKA v8 and v9), as well as in all the previous versions, were based on the US ROM v9.6.1. I could not find an explanation as to why ROM v10.2.1 wasn't used in the end, even when digging through the old Mac mini G4 thread that started it all. Perhaps because we already had a working ROM with v9.6.1 and did not want to risk breaking anything, or who knows. However, I have thoroughly tested Mac OS 9.2.2 with this new ROM combination (latest Rairii 10.2.1 + latest Mac mini G4 patches AKA v9 patches), and from what I could tell, everything behaves exactly the same as with the previous ROM we always used. Except now we have the ability to use the same ROM to also boot System 7 (I still can't believe this, even though it is true).

(For the record, while the 9.6.1 ROM was also modified to spoof the Mac mini G4 model identifier as a G4 Cube, we also tried spoofing it as a QuickSilver 2002 at one point, but someone reported sound issues with that, so it was quickly changed back to a G4 Cube, and that change never made it into any of RossDarker's CDs. So just about everyone using Mac OS on the mini all these years has had a ROM reporting to the OS as a G4 Cube, exclusively.)

To apply the Mac mini G4 patches, I used ELN's tbxi and tbxi-patches to run his "macmini.py" script. You can follow the instructions on the tbxi-patches page; don't let them intimidate you even if you are not used to this kind of thing. It's quick and easy, and the scripts are also very nicely commented by ELN, if you are curious about what each one is doing and why.

In my case, I first tried the latest Python, 3.13.9, both from Windows 7 (a bad idea due to resource fork loss) and from macOS 10.14.6 Mojave, but neither worked: it seems that version of Python was just too new. I then retried with Python 3.8.10 (which I chose thinking it might be more period-appropriate for the script's age) on Mojave, and it worked flawlessly. I didn't try it, but perhaps an older Python version might work on PowerPC OS X as well.
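On the resource fork issue: as a hedged sanity check (the function name is mine; the ..namedfork path is macOS-only), you can confirm whether a copied System file still carries its resource fork before feeding it to the scripts:

```python
import os

def rsrc_fork_size(path: str) -> int:
    """Size in bytes of a file's resource fork on macOS, or 0 if absent.

    macOS exposes the resource fork at <path>/..namedfork/rsrc. A zero
    result after copying a file through Windows or a non-Mac filesystem
    means the fork (which the patch scripts need) was destroyed.
    """
    try:
        return os.path.getsize(os.path.join(path, "..namedfork", "rsrc"))
    except OSError:
        return 0

# 0 here would mean the fork is gone (or we're not on macOS):
print(rsrc_fork_size("System"))
```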

I used the Python installer from the official website, and I also used an "official" Git installer from here (thus avoiding any package manager headache... man, how I hate non-Mac-OS systems, including OS X, and package managers in general...)

If someone with plenty of Python knowledge and the willingness to put in enough time wished to, both tbxi and tbxi-patches could perhaps be ported to MacPython 2.3.5, so that we could do all this patching directly and natively from Mac OS 9.2.2 without leaving our main OS. That would also be awesome! (Of course, it helps that the tooling is also available on more recent systems, because then everyone gets to join in on the fun, whatever their background and setup.)

For con­ve­nience, I at­tached the fi­nal patched ROM to this post, so that any­one can go wild on their minis right away!

Why should I care when Mac OS 9.2.2 already boots and runs better?

It is also my opinion that Mac OS 9.2.2 is the greatest OS (and Mac OS) ever, but not everything that is possible in earlier Mac OS versions is possible in Mac OS 9.2.2. For example, some software requires Mac OS 9.0.4 or earlier to work. A lot of software is System-7-exclusive.

Some people also just prefer the likes of System 7 for its even lighter memory footprint, lack of a mandated Appearance Manager, and the like. Mac OS 9.2.2 is already overkill-fast on the mini, and on most New World ROM Macs, but the likes of System 7.5 are just RIDICULOUSLY fast. Even more ridiculously so. I am still trying to come to terms with how indescribably fast using it on the mini was. Things got even faster when I thought there was no way to get faster than "instantaneous", as Mac OS 9.2.2 already felt instantaneous like no other system!

People might also have some other reason and/or a special attachment to an earlier OS version. Or maybe they want to explore older OS APIs and behaviours, perhaps even write a new application and see how it behaves on bare metal, not just on Mac OS 9 but also on System 7, etc.

The value is in opening doors: giving us, the users, more options helps us all out.

Final re­marks

Above all, thank you to everyone who made this possible. I want to give special thanks to Rairii for engineering all these ROMs, Mac84 for archiving and sharing all the CHRP discs, ELN for engineering all the Mac mini G4 ROM compatibility scripts and creating all the ROM and other Mac OS tooling, and to the Mac community at large for helping make all of this a reality. There are honestly many, many people we owe thanks to over this, one way or another, in ways both small and big.

I can’t wait to see what peo­ple will do with all these new Mac OS ver­sions on their New World ROM sys­tems over the course of time!

« Last Edit: November 27, 2025, 11:24:38 AM by Jubadub »

There is no Mac in OS X

For the record, I posted this in all three of these places for greater reach:

- Here (where the Mac mini G4 ROM is);

- Macintosh Garden;

- System 7 Today.

These posts were also linked to from the fol­low­ing places, with dis­cus­sions of their own:

- MacRumors PPC;

- 68kMLA.

I'm mentioning this here in case anyone wants to quickly jump to the discussions happening on any side of the aisle.

EDIT: For some reason I seem unable to make posts with attachments (they just erase my message and prompt me to start a new topic), so I'm trying to see what file upload service I should use for the Mac mini G4 ROM that boots System 7 (and the "Sawtooth.bin" file).

I'd rather not create a Garden page for this just yet, as this project is still in its infancy, so any Mac-friendly file upload service recommendations would be appreciated.

EDIT 2: Ah, I got it figured out: attachments cannot be ".bin" nor ".hqx"... This should change... I repackaged the contents as ".sit" for now and put them up in the first post, as intended.

« Last Edit: Today at 02:14:33 AM by Jubadub »

Quote from: Jubadub on November 27, 2025, 10:13:03 AM

I have thoroughly tested Mac OS 9.2.2 with this new ROM combination (latest Rairii 10.2.1 + latest Mac mini G4 patches AKA v9 patches), and from what I could tell, everything behaves exactly the same as with the previous ROM we always used.

Long shot, but, is the mouse-freez­ing bug still pre­sent?


Quote from: n8blz on November 27, 2025, 03:28:43 PM

Quote from: Jubadub on November 27, 2025, 10:13:03 AM

I have thoroughly tested Mac OS 9.2.2 with this new ROM combination (latest Rairii 10.2.1 + latest Mac mini G4 patches AKA v9 patches), and from what I could tell, everything behaves exactly the same as with the previous ROM we always used.

Long shot, but, is the mouse-freez­ing bug still pre­sent?

You know... Now that you mention it, I don't think I encountered it. But maybe I was just "lucky": even with the previous ROM, it was not THAT common for me to encounter it (but at the same time, it wasn't exactly rare either).

If you have a mini, are you able to check it on your end, as well? The more peo­ple we have try­ing this out, the more likely we are to truly find out.

I suspect the mouse glitch probably remains, since as far as we know it's a shortcoming in our patches for the mini rather than in Apple's own code, but... who knows?

« Reply #4 on: Yesterday at 04:38:32 AM »

@Jubadub, thank you for writ­ing all this! I will need to re-read it sev­eral times to fully grasp what’s go­ing on there.

This is EPIC!

If you’re not part of the so­lu­tion, you’re part of the prob­lem.

« Reply #5 on: Yesterday at 09:23:44 AM »

BRAVO Jubadub!

Had been won­der­ing about your ab­sence round here lately.

Now the reason is evident, considering what you've been working on.

Extremely well done and VERY highly com­mend­able.

Not every­one is ca­pa­ble of pulling all of that in­for­ma­tion to­gether

...

Read the original on macos9lives.com »

10 252 shares, 21 trendiness

winapps-org/winapps: Run Windows apps such as Microsoft Office/Adobe in Linux (Ubuntu/Fedora) and GNOME/KDE as if they were a part of the native OS, including Nautilus integration. Hard fork of https://github.com/Fmstrat/winapps/

Run Windows applications (including Microsoft 365 and Adobe Creative Cloud) on GNU/Linux with KDE Plasma, GNOME or XFCE, integrated seamlessly as if they were native to the OS, by:

* Creating shortcuts to selected Windows applications on the host GNU/Linux OS.

* Using FreeRDP as a backend to seamlessly render Windows applications alongside GNU/Linux applications.

* The GNU/Linux /home di­rec­tory is ac­ces­si­ble within Windows via the \\tsclient\home mount.

* Integration with Nautilus, al­low­ing you to right-click files to open them with spe­cific Windows ap­pli­ca­tions based on the file MIME type.

* The of­fi­cial taskbar wid­get en­ables seam­less ad­min­is­tra­tion of the Windows sub­sys­tem and of­fers an easy way to launch Windows ap­pli­ca­tions.

* Microsoft Office links (e.g. ms-word://) from the host system are automatically opened in the Windows subsystem. (Note: You may need to use a User Agent Switcher browser extension and set the User-Agent to Windows, as the Office webapps typically hide the "Open in Desktop App" option for Linux users.)

WinApps supports ALL Windows applications. Support does not, however, extend to kernel-level anti-cheat systems (e.g. Riot Vanguard). WinApps works by:

* Scanning Windows for any community tested applications (list below).

* Scanning Windows for any other .exe files listed within the Windows Registry.

Community tested ap­pli­ca­tions ben­e­fit from high-res­o­lu­tion icons and pre-pop­u­lated MIME types. This en­ables file man­agers to de­ter­mine which Windows ap­pli­ca­tions should open files based on file ex­ten­sions. Icons for other de­tected ap­pli­ca­tions are pulled from .exe files.
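As a hypothetical illustration of what that pre-populated extension/MIME data enables (the table below is toy data, not WinApps' real application list), a file manager's routing decision boils down to a lookup like:

```python
import os

# Toy extension table standing in for WinApps' pre-populated MIME data:
EXT_TO_APP = {
    ".docx": ("Microsoft Word",
              "application/vnd.openxmlformats-officedocument.wordprocessingml.document"),
    ".psd": ("Adobe Photoshop", "image/vnd.adobe.photoshop"),
}

def app_for(filename: str):
    """Return the Windows application a file manager would route to."""
    _, ext = os.path.splitext(filename.lower())
    entry = EXT_TO_APP.get(ext)
    return entry[0] if entry else None

print(app_for("flyer.psd"))  # Adobe Photoshop
```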

Contributing to the list of sup­ported ap­pli­ca­tions is en­cour­aged through sub­mis­sion of pull re­quests! Please help us grow the WinApps com­mu­nity.

Please note that the pro­vided list of com­mu­nity tested ap­pli­ca­tions is com­mu­nity-dri­ven. As such, some ap­pli­ca­tions may not be tested and ver­i­fied by the WinApps team.

Both Docker and Podman are rec­om­mended back­ends for run­ning the Windows vir­tual ma­chine, as they fa­cil­i­tate an au­to­mated Windows in­stal­la­tion process. WinApps is also com­pat­i­ble with lib­virt. While this method re­quires con­sid­er­ably more man­ual con­fig­u­ra­tion, it also pro­vides greater vir­tual ma­chine cus­tomi­sa­tion op­tions. All three meth­ods lever­age the KVM hy­per­vi­sor, en­sur­ing ex­cel­lent vir­tual ma­chine per­for­mance. Ultimately, the choice of back­end de­pends on your spe­cific use case.

The fol­low­ing guides are avail­able:

If you al­ready have a Windows VM or server you wish to use with WinApps, you will still have to fol­low the fi­nal steps de­scribed in the lib­virt doc­u­men­ta­tion.

WinApps re­quires FreeRDP ver­sion 3 or later. If not avail­able for your dis­tri­b­u­tion through your pack­age man­ager, you can in­stall the Flatpak:

flatpak install flathub com.freerdp.FreeRDP

sudo flatpak override --filesystem=home com.freerdp.FreeRDP # To use `+home-drive`

However, if you have weird is­sues like #233 when run­ning Flatpak, please com­pile FreeRDP from source ac­cord­ing to this guide.

Create a con­fig­u­ra­tion file at ~/.config/winapps/winapps.conf con­tain­ing the fol­low­ing:

# WINAPPS CONFIGURATION FILE #

# INSTRUCTIONS
# - Leading and trailing whitespace are ignored.
# - Empty lines are ignored.
# - Lines starting with '#' are ignored.
# - All characters following a '#' are ignored.

# [WINDOWS USERNAME]
RDP_USER="MyWindowsUser"

# [WINDOWS PASSWORD]
# NOTES:
# - If using FreeRDP v3.9.0 or greater, you *have* to set a password
RDP_PASS="MyWindowsPassword"

# [WINDOWS DOMAIN]
# DEFAULT VALUE: '' (BLANK)
RDP_DOMAIN=""

# [WINDOWS IPV4 ADDRESS]
# NOTES:
# - If using 'libvirt', 'RDP_IP' will be determined by WinApps at runtime if left unspecified.
# DEFAULT VALUE:
# - 'docker': '127.0.0.1'
# - 'podman': '127.0.0.1'
# - 'libvirt': '' (BLANK)
RDP_IP="127.0.0.1"

# [VM NAME]
# NOTES:
# - Only applicable when using 'libvirt'
# - The libvirt VM name must match so that WinApps can determine the VM IP, start the VM, etc.
# DEFAULT VALUE: 'RDPWindows'
VM_NAME="RDPWindows"

# [WINAPPS BACKEND]
# DEFAULT VALUE: 'docker'
# VALID VALUES:
# - 'docker'
# - 'podman'
# - 'libvirt'
# - 'manual'
WAFLAVOR="docker"

# [DISPLAY SCALING FACTOR]
# NOTES:
# - If an unsupported value is specified, a warning will be displayed.
# - If an unsupported value is specified, WinApps will use the closest supported value.
# DEFAULT VALUE: '100'
# VALID VALUES:
# - '100'
# - '140'
# - '180'
RDP_SCALE="100"

# [MOUNTING REMOVABLE PATHS FOR FILES]
# NOTES:
# - By default, `udisks` (which you most likely have installed) uses /run/media for mounting removable devices.
#   This improves compatibility with most desktop environments (DEs).
# ATTENTION: The Filesystem Hierarchy Standard (FHS) recommends /media instead. Verify your system's configuration.
# - To manually mount devices, you may optionally use /mnt.
# REFERENCE: https://wiki.archlinux.org/title/Udisks#Mount_to_/media
REMOVABLE_MEDIA="/run/media"

# [ADDITIONAL FREERDP FLAGS & ARGUMENTS]
# NOTES:
# - You can try adding /network:lan to these flags in order to increase performance; however, some users have faced issues with this.
#   If this does not work, or does not work without the flag, you can try adding /nsc and /gfx.
# DEFAULT VALUE: '/cert:tofu /sound /microphone +home-drive'
# VALID VALUES: See https://github.com/awakecoding/FreeRDP-Manuals/blob/master/User/FreeRDP-User-Manual.markdown
RDP_FLAGS="/cert:tofu /sound /microphone +home-drive"

# [DEBUG WINAPPS]
# NOTES:
# - Creates and appends to ~/.local/share/winapps/winapps.log when running WinApps.
# DEFAULT VALUE: 'true'
# VALID VALUES:
# - 'true'
# - 'false'
DEBUG="true"

# [AUTOMATICALLY PAUSE WINDOWS]
# NOTES:
# - This is currently INCOMPATIBLE with 'manual'.
# DEFAULT VALUE: 'off'
# VALID VALUES:
# - 'on'
# - 'off'
AUTOPAUSE="off"

# [AUTOMATICALLY PAUSE WINDOWS TIMEOUT]

...
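The INSTRUCTIONS block at the top of the file pins down the parsing rules. Purely as an illustrative sketch (WinApps' own shell scripts do the real parsing), those rules amount to:

```python
def parse_winapps_conf(text: str) -> dict:
    """Apply the stated rules: trim whitespace, skip blank lines,
    and ignore everything from a '#' onwards."""
    conf = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments, trim whitespace
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip().strip('"')
    return conf

sample = '''
# a comment
RDP_USER="MyWindowsUser"
WAFLAVOR="docker"  # trailing comment
'''
print(parse_winapps_conf(sample))  # {'RDP_USER': 'MyWindowsUser', 'WAFLAVOR': 'docker'}
```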

Read the original on github.com »
