10 interesting stories served every morning and every evening.




1 1,524 shares, 82 trendiness

microgpt

This is a brief guide to my new art project microgpt, a single file of 200 lines of pure Python with no dependencies that trains and inferences a GPT. This file contains the full algorithmic content of what is needed: dataset of documents, tokenizer, autograd engine, a GPT-2-like neural network architecture, the Adam optimizer, training loop, and inference loop. Everything else is just efficiency. I cannot simplify this any further. This script is the culmination of multiple projects (micrograd, makemore, nanogpt, etc.) and a decade-long obsession to simplify LLMs to their bare essentials, and I think it is beautiful 🥹. It even breaks perfectly across 3 columns:

Where to find it:

This GitHub gist has the full source code: microgpt.py

It's also available on this web page: https://karpathy.ai/microgpt.html

Also available as a Google Colab notebook

The following is my guide stepping an interested reader through the code.

The fuel of large language models is a stream of text data, optionally separated into a set of documents. In production-grade applications, each document would be an internet web page, but for microgpt we use a simpler example of 32,000 names, one per line:

# Let there be an input dataset `docs`: list[str] of documents (e.g. a dataset of names)
import os, random  # imports needed by this excerpt
if not os.path.exists('input.txt'):
    import urllib.request
    names_url = 'https://raw.githubusercontent.com/karpathy/makemore/refs/heads/master/names.txt'
    urllib.request.urlretrieve(names_url, 'input.txt')
docs = [l.strip() for l in open('input.txt').read().strip().split('\n') if l.strip()] # list[str] of documents
random.shuffle(docs)
print(f"num docs: {len(docs)}")

The dataset looks like this. Each name is a document:

The goal of the model is to learn the patterns in the data and then generate similar new documents that share the statistical patterns within. As a preview, by the end of the script our model will generate ("hallucinate"!) new, plausible-sounding names. Skipping ahead, we'll get:

It doesn't look like much, but from the perspective of a model like ChatGPT, your conversation with it is just a funny looking "document". When you initialize the document with your prompt, the model's response from its perspective is just a statistical document completion.

Under the hood, neural networks work with numbers, not characters, so we need a way to convert text into a sequence of integer token ids and back. Production tokenizers like tiktoken (used by GPT-4) operate on chunks of characters for efficiency, but the simplest possible tokenizer just assigns one integer to each unique character in the dataset:

# Let there be a Tokenizer to translate strings to discrete symbols and back
uchars = sorted(set(''.join(docs))) # unique characters in the dataset become token ids 0..n-1
BOS = len(uchars) # token id for the special Beginning of Sequence (BOS) token
vocab_size = len(uchars) + 1 # total number of unique tokens, +1 is for BOS
print(f"vocab size: {vocab_size}")

In the code above, we collect all unique characters across the dataset (which are just the lowercase letters a-z), sort them, and each letter gets an id by its index. Note that the integer values themselves have no meaning at all; each token is just a separate discrete symbol. Instead of 0, 1, 2 they might as well be different emoji. In addition, we create one more special token called BOS (Beginning of Sequence), which acts as a delimiter: it tells the model "a new document starts/ends here". Later during training, each document gets wrapped with BOS on both sides: [BOS, e, m, m, a, BOS]. The model learns that BOS initiates a new name, and that another BOS ends it. Therefore, we have a final vocabulary of 27 (26 possible lowercase characters a-z, plus 1 for the BOS token).
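To make the scheme concrete, here is a tiny roundtrip sketch on a toy dataset. The `encode`/`decode` helper names and the three example names are illustrative, not part of microgpt.py:

```python
# Minimal character-level tokenizer sketch (helper names are illustrative).
docs = ["emma", "olivia", "ava"]  # toy stand-in for the real names dataset
uchars = sorted(set("".join(docs)))           # unique characters become token ids 0..n-1
BOS = len(uchars)                             # special Beginning of Sequence token id
stoi = {ch: i for i, ch in enumerate(uchars)} # char -> token id
itos = {i: ch for ch, i in stoi.items()}      # token id -> char

def encode(doc):
    # wrap the document with BOS on both sides, as done during training
    return [BOS] + [stoi[ch] for ch in doc] + [BOS]

def decode(ids):
    return "".join(itos[i] for i in ids if i != BOS)

ids = encode("emma")
print(ids)                # first and last entries are the BOS id
print(decode(ids))        # "emma"
```

The roundtrip property (decode(encode(x)) == x) is what makes the token ids a faithful, if arbitrary, relabeling of the text.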

Training a neural network requires gradients: for each parameter in the model, we need to know "if I nudge this number up a little, does the loss go up or down, and by how much?". The computation graph has many inputs (the model parameters and the input tokens) but funnels down to a single scalar output: the loss (we'll define exactly what the loss is below). Backpropagation starts at that single output and works backwards through the graph, computing the gradient of the loss with respect to every input. It relies on the chain rule from calculus. In production, libraries like PyTorch handle this automatically. Here, we implement it from scratch in a single class called Value:

import math  # needed by this excerpt

class Value:
    __slots__ = ('data', 'grad', '_children', '_local_grads')
    def __init__(self, data, children=(), local_grads=()):
        self.data = data # scalar value of this node calculated during forward pass
        self.grad = 0 # derivative of the loss w.r.t. this node, calculated in backward pass
        self._children = children # children of this node in the computation graph
        self._local_grads = local_grads # local derivative of this node w.r.t. its children
    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), (1, 1))
    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other), (other.data, self.data))
    def __pow__(self, other): return Value(self.data**other, (self,), (other * self.data**(other-1),))
    def log(self): return Value(math.log(self.data), (self,), (1/self.data,))
    def exp(self): return Value(math.exp(self.data), (self,), (math.exp(self.data),))
    def relu(self): return Value(max(0, self.data), (self,), (float(self.data > 0),))
    def __neg__(self): return self * -1
    def __radd__(self, other): return self + other
    def __sub__(self, other): return self + (-other)
    def __rsub__(self, other): return other + (-self)
    def __rmul__(self, other): return self * other
    def __truediv__(self, other): return self * other**-1
    def __rtruediv__(self, other): return other * self**-1
    def backward(self):
        topo = []
        visited = set()
        def build_topo(v):
            if v not in visited:
                visited.add(v)
                for child in v._children:
                    build_topo(child)
                topo.append(v)
        build_topo(self)
        self.grad = 1
        for v in reversed(topo):
            for child, local_grad in zip(v._children, v._local_grads):
                child.grad += local_grad * v.grad

I realize that this is the most mathematically and algorithmically intense part, and I have a 2.5 hour video on it: the micrograd video. Briefly, a Value wraps a single scalar number (.data) and tracks how it was computed. Think of each operation as a little lego block: it takes some inputs, produces an output (the forward pass), and it knows how its output would change with respect to each of its inputs (the local gradient). That's all the information autograd needs from each block. Everything else is just the chain rule, stringing the blocks together.

Every time you do math with Value objects (add, multiply, etc.), the result is a new Value that remembers its inputs (_children) and the local derivative of that operation (_local_grads). For example, __mul__ records that \(\frac{\partial(a \cdot b)}{\partial a} = b\) and \(\frac{\partial(a \cdot b)}{\partial b} = a\). The full set of lego blocks:

The backward() method walks this graph in reverse topological order (starting from the loss, ending at the parameters), applying the chain rule at each step. If the loss is \(L\) and a node \(v\) has a child \(c\) with local gradient \(\frac{\partial v}{\partial c}\), then:

\[\frac{\partial L}{\partial c} \mathrel{+}= \frac{\partial v}{\partial c} \cdot \frac{\partial L}{\partial v}\]

This looks a bit scary if you're not comfortable with your calculus, but it is literally just multiplying two numbers in an intuitive way. One way to see it: "If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 x 4 = 8 times as fast as the man." The chain rule is the same idea: you multiply the rates of change along the path.

We kick things off by setting self.grad = 1 at the loss node, because \(\frac{\partial L}{\partial L} = 1\): the loss's rate of change with respect to itself is trivially 1. From there, the chain rule just multiplies local gradients along every path back to the parameters.

Note the += (accumulation, not assignment). When a value is used in multiple places in the graph (i.e. the graph branches), gradients flow back along each branch independently and must be summed. This is a consequence of the multivariable chain rule: if \(c\) contributes to \(L\) through multiple paths, the total derivative is the sum of contributions from each path.

After backward() completes, every Value in the graph has a .grad containing \(\frac{\partial L}{\partial v}\), which tells us how the final loss would change if we nudged that value.

Here's a concrete example. Note that a is used twice (the graph branches), so its gradient is the sum of both paths:

a = Value(2.0)
b = Value(3.0)
c = a * b # c = 6.0
L = c + a # L = 8.0
L.backward()
print(a.grad) # 4.0 (dL/da = b + 1 = 3 + 1, via both paths)
print(b.grad) # 2.0 (dL/db = a = 2)

This is ex­actly what PyTorch’s .backward() gives you:
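A minimal PyTorch equivalent of the same two-line graph, for comparison (a sketch, assuming PyTorch is installed; this snippet is not part of microgpt.py):

```python
import torch

# Same graph as the Value example: a is used twice, so gradients accumulate.
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)
L = a * b + a  # L = 8.0
L.backward()
print(a.grad)  # tensor(4.) -- dL/da = b + 1
print(b.grad)  # tensor(2.) -- dL/db = a
```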

This is the same algorithm that PyTorch's loss.backward() runs, just on scalars instead of tensors (arrays of scalars): algorithmically identical, significantly smaller and simpler, but of course a lot less efficient.

Let's spell out what .backward() gives us above. Autograd calculated that if L = a*b + a, and a=2 and b=3, then a.grad = 4.0 is telling us about the local influence of a on L. If you wiggle the input a, in what direction does L change? Here, the derivative of L w.r.t. a is 4.0, meaning that if we increase a by a tiny amount (say 0.001), L would increase by about 4x that (0.004). Similarly, b.grad = 2.0 means the same nudge to b would increase L by about 2x that (0.002). In other words, these gradients tell us the direction (positive or negative depending on the sign) and the steepness (the magnitude) of the influence of each individual input on the final output (the loss). This then allows us to iteratively nudge the parameters of our neural network to lower the loss, and hence improve its predictions.
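This "wiggle" interpretation is easy to check numerically with a finite difference; a quick sketch on plain floats, independent of the Value class:

```python
# Verify a.grad = 4.0 and b.grad = 2.0 numerically: nudge each input by a
# tiny epsilon and measure how much L = a*b + a moves.
def L(a, b):
    return a * b + a

eps = 1e-3
base = L(2.0, 3.0)                         # 8.0
dL_da = (L(2.0 + eps, 3.0) - base) / eps   # ~4.0, matches a.grad
dL_db = (L(2.0, 3.0 + eps) - base) / eps   # ~2.0, matches b.grad
print(dL_da, dL_db)
```

The finite-difference estimates agree with autograd because L is linear in each input here; for curvier functions they agree only up to O(eps).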

The parameters are the knowledge of the model. They are a large collection of floating point numbers (wrapped in Value for autograd) that start out random and are iteratively optimized during training. The exact role of each parameter will make more sense once we define the model architecture below, but for now we just need to initialize them:

n_embd = 16 # embedding dimension
n_head = 4 # number of attention heads
n_layer = 1 # number of layers
block_size = 16 # maximum sequence length
head_dim = n_embd // n_head # dimension of each head
matrix = lambda nout, nin, std=0.08: [[Value(random.gauss(0, std)) for _ in range(nin)] for _ in range(nout)]
state_dict = {'wte': matrix(vocab_size, n_embd), 'wpe': matrix(block_size, n_embd), 'lm_head': matrix(vocab_size, n_embd)}
for i in range(n_layer):
    state_dict[f'layer{i}.attn_wq'] = matrix(n_embd, n_embd)
    state_dict[f'layer{i}.attn_wk'] = matrix(n_embd, n_embd)
    state_dict[f'layer{i}.attn_wv'] = matrix(n_embd, n_embd)
    state_dict[f'layer{i}.attn_wo'] = matrix(n_embd, n_embd)
    state_dict[f'layer{i}.mlp_fc1'] = matrix(4 * n_embd, n_embd)
    state_dict[f'layer{i}.mlp_fc2'] = matrix(n_embd, 4 * n_embd)
params = [p for mat in state_dict.values() for row in mat for p in row]
print(f"num params: {len(params)}")
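As a sanity check, the parameter count implied by these settings can be tallied by hand; a back-of-the-envelope sketch, assuming vocab_size = 27 as derived above:

```python
# Tally the parameter count implied by the settings above (vocab_size = 27).
vocab_size, n_embd, n_layer, block_size = 27, 16, 1, 16
wte = vocab_size * n_embd       # 432: token embeddings
wpe = block_size * n_embd       # 256: position embeddings
lm_head = vocab_size * n_embd   # 432: output projection
attn = 4 * n_embd * n_embd      # 1024: wq, wk, wv, wo
mlp = 2 * 4 * n_embd * n_embd   # 2048: fc1 (64x16) + fc2 (16x64)
total = wte + wpe + lm_head + n_layer * (attn + mlp)
print(total)  # 4192
```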

...

Read the original on karpathy.github.io »

2 493 shares, 36 trendiness

Switch to Claude without starting over


Bring your preferences and context from other AI providers to Claude. With one copy-paste, Claude updates its memory and picks up right where you left off. Memory is available on all paid plans.

Import what matters in under a minute. You've spent months teaching another AI how you work. That context shouldn't disappear because you want to try something new. Claude can import what matters, so your first conversation feels like your hundredth. Copy and paste the provided prompt into a chat with any AI provider. It's written specifically to help you get all of your context in one chat. Then copy and paste the results into Claude's memory settings. That's it! Claude will update its memory and you're good to go.

Memory that understands how you work: Claude learns your preferences across conversations, keeps project context separate so nothing bleeds together, and lets you see and edit everything it remembers.

Your AI should know you from day one. Start your Pro plan, import your memory when you're ready, and see for yourself.

...

Read the original on claude.com »

3 451 shares, 71 trendiness

Ghostty Docs

Ghostty is a fast, feature-rich, and cross-platform terminal emulator that uses platform-native UI and GPU acceleration.

Install Ghostty and run!

Zero configuration required to get up and running.

Ready-to-run binaries for macOS. Packages or build from source for Linux.

...

Read the original on ghostty.org »

4 415 shares, 13 trendiness

Iran's Ayatollah Ali Khamenei is killed in Israeli strike, ending 36-year iron rule

Iran's supreme leader, Ayatollah Ali Khamenei, was killed in Israeli attacks, with U.S. support, on Saturday. He was 86 years old.

His death was confirmed by President Trump, who joined Israeli leaders in calling for the overthrow of Khamenei's authoritarian regime as the U.S. and Israel launched airstrikes across Iran. The Israeli military said its forces killed Khamenei. The Iranian government confirmed the supreme leader's death and announced 40 days of mourning.

During his 36-year rule, Khamenei was unwavering in his antipathy to the U.S. and Israel and to any efforts to reform and bring Iran into the 21st century.

Khamenei was born in July 1939 into a religious family in the Shia Muslim holy city of Mashhad in northeastern Iran and attended theological school. An outspoken opponent of the U.S.-backed Shah Mohammad Reza Pahlavi, Khamenei was arrested several times.

He was surrounded by other Iranian activists, including Ayatollah Ruhollah Khomeini, who became Iran's first supreme leader following the country's Islamic Revolution in the late 1970s.

Khamenei survived an assassination attempt in 1981 that cost him the use of his right arm. He served as Iran's president before succeeding Khomeini as supreme leader in 1989.

Alex Vatanka, a senior fellow at the Middle East Institute in Washington, D.C., says Khamenei was an unlikely candidate. Then a midlevel cleric, Khamenei lacked religious credentials, which left him feeling vulnerable, Vatanka says.

"He knew himself. He didn't have the prestige, the gravitas to be … the successor to the founder of the Islamic Republic, Ayatollah Khomeini," he says.

"He spent the first few years in power being very nervous," says Vatanka. "He really literally felt that somebody is going to, you know, take him down from the position of power."

But Khamenei was cunning and able to outwit other senior political figures in the Islamic Republic, according to Ali Vaez, director of the Iran Project at the International Crisis Group. He says that with the help of the formidable Islamic Revolutionary Guard Corps, Khamenei built up his power base to become the longest-serving leader in the Middle East.

"Ayatollah Khamenei was a man with strategic patience and was able to calculate a few steps ahead," he says. "That's why I think he managed — on the back of the Revolutionary Guards — to increasingly appropriate all the levers of power in his hands and sideline everyone else."

Khamenei's close ties to the Revolutionary Guards allowed Iran's military to develop a vast commercial empire in control of many parts of the economy, while ordinary Iranians struggled to get by.

Vaez says Khamenei also began to build up Iran's defensive policies, such as developing proxies like Hezbollah in Lebanon and Hamas in the Gaza Strip to deter a direct attack on Iranian soil.

"And then also becoming self-reliant in developing a viable conventional deterrence, which took the form of Iran's ballistic missile program," Vaez says.

As supreme leader, Khamenei also had the final word on anything to do with Iran's nuclear program.

Over time, Khamenei increasingly injected himself into politics. Such was the case in 2009, when he intervened in the presidential election to ensure that his favored candidate, the controversial conservative Mahmoud Ahmadinejad, won office.

Iranians took to the streets to protest what was widely seen as a fraudulent election. Khamenei brutally crushed those demonstrations, triggering both a backlash and more protest movements over the years.

Iran killed thousands of its citizens under Khamenei's rule, including more than 7,000 people killed during weeks of mass protests that started in late December 2025, according to the Human Rights Activists News Agency, a U.S.-based organization that closely tracks rights abuses in Iran.

"Khamenei had always supported and endorsed repressive government crackdown, recognizing that these protests were damaging to the stability and legitimacy of the state," says Sanam Vakil, an Iran expert at Chatham House, a London-based think tank.

But Khamenei was unconcerned about getting to the root of the protests, says the Middle East Institute's Vatanka, and remained stuck in an Islamic revolutionary mindset against the West.

"He on so many occasions refused point-blank to accept the basic reality that where he was in terms of his worldview was not where the rest of his people were," Vatanka says.

He adds that 75% of Iran's 90 million people were born after the revolution and have watched other countries in the region modernize and integrate with the international community.

"The 75% he should have catered to, listened to and address[ed] policies to satisfy their aspirations," he says. "He failed in that miserably."

The International Crisis Group's Vaez says after the Arab Spring uprisings in 2011, Khamenei did start worrying about the survival of his regime. Iran's economy was crumbling, due in large part to stringent Western sanctions, fueling more unrest.

In 2013, Khamenei agreed to secret negotiations with the U.S. about Iran's nuclear program, which eventually led to the 2015 Joint Comprehensive Plan of Action nuclear agreement. Vaez says Khamenei deeply distrusted the U.S. and was skeptical about the deal.

"His argument has always been that the U.S. is always looking for pretexts, for putting pressure on Iran," he says. "And if Iran concedes on the nuclear issue, then the U.S. would put pressure on Iran because of its missiles program or because of human rights violations or because of its regional policies."

President Trump's withdrawal from the nuclear deal during his first term in office gave some credence to Khamenei's cynicism. Analysts say Iran increased its nuclear enrichment after that to a point where it was close to being able to build a bomb.

In early 2025, when Trump reached out to Iran about a new deal, Khamenei dragged out negotiations until they began in mid-April.

But time ran out. In June, Israel made good on its threat to neutralize Iran's nuclear program, launching strikes on key facilities and killing scientists and generals. Iran retaliated, and the two sides exchanged several days of missile strikes.

On June 21, 2025, the U.S. launched major airstrikes on three of Iran's nuclear enrichment sites. Trump said the facilities had been "completely and totally obliterated," although there was debate among the White House and nuclear experts as to how seriously Iran's nuclear program had been set back.

Vakil, of Chatham House, says Khamenei underestimated what Israel and the U.S. would do.

"I think that Khamenei always assumed that he could play for time, and what he really didn't understand is that the world around Iran had very much changed," she says. "The world had tired of Khamenei and Iranian foot-dragging and antics … and so that was a miscalculation."

But it was Iran's use of proxy militias across the region that eventually led to Khamenei's downfall.

When Hamas — the Palestinian Islamist group backed by Iran — attacked Israel on Oct. 7, 2023, killing nearly 1,200 people and kidnapping 251 others, it triggered a cascade of events that ultimately led to Israel's attack on Iran.

The day after the 2023 Hamas-led attack, Iran-backed Hezbollah in Lebanon started firing rockets into Israel, triggering a conflict that led to the Shia militia's top brass being decimated — including top leader Hassan Nasrallah.

Israel and Iran traded direct airstrikes for the first time in 2024 as part of that conflict.

Israel's bombing of Iranian weapons shipments in Syria also helped weaken the regime of Syria's then-dictator, Bashar al-Assad, an important ally of Iran. Assad fell in December 2024 and fled to Russia in early January 2025.

By the time Khamenei died, his legacy was in tatters. Israel had hobbled two key proxies, Hamas and Hezbollah, and had wiped out Iran's air defenses. With U.S. help, it left Iran's nuclear program in shambles.

What remains is a robust ballistic missile program, the brainchild of Khamenei. It's unclear who will replace him to lead a now weakened and vulnerable Iran.

...

Read the original on www.npr.org »

5 368 shares, 45 trendiness

Ad-Supported AI Chat Demo — See Every Ad Type in Action

A satirical (but real!) demo of what AI chat could look like in an ad-supported future. Chat with an AI while experiencing every monetization pattern imaginable — banners, interstitials, sponsored responses, freemium gates, and more.

Join 2 million professionals who think faster, focus better, and accomplish more. AI-powered goal tracking, habit building, and memory enhancement. First 30 days FREE!
Think 10x Faster with AI. First Month FREE! 🧠
Did you know? The average person wastes $200/month on unused subscriptions. Let MoneyMind's AI find and cancel them for you!
Your AI assistant, proudly powered by the finest advertising money can buy 💸
⚠️ Warning: This AI may spontaneously recommend products at any time
🏷️ This conversation is proudly powered by BrainBoost Pro™ • Ad-supported free tier • Remove ads
Stressed by all these ads? 10 minutes of AI-guided meditation changes everything.
AI-curated meal prep kits delivered weekly. $30 off your first box!
🎨 Today's chat theme sponsored by BrainBoost Pro • Colors, fonts, and vibes curated by our advertising team

This tool is a satirical but fully functional demonstration of what AI chat assistants could look like if they were monetized through advertising — similar to how free apps, websites, and streaming services fund themselves today. As AI chat becomes mainstream, companies face a fundamental question: how do you make it free for users while covering the significant compute costs? Advertising is one obvious answer — and this demo shows every major ad pattern that could be applied to a chat interface.

We built this as an educational tool to help marketers, product managers, and developers understand the landscape of AI monetization, and to give users a glimpse of the future they might want to avoid (or embrace, depending on your perspective).

This demo covers the full spectrum of advertising patterns that could appear in an AI chat product.

This tool is educational and useful for a wide range of professionals thinking about the future of AI products.

Are the ads in this demo real? No — all brands and ads are completely fictional and created for this demo. BrainBoost Pro, QuickLearn Academy, ZenFocus, TaskMaster AI, ReadyMeal, and all other brands are made up. No actual advertising revenue is being generated.

Does this show what AI chat will actually look like? It shows one possible future. Some ad-supported AI products already exist and use several of these patterns. Others are speculative. The goal is to make these possibilities concrete and tangible so people can have informed conversations about what kind of AI future they want.

Is the AI actually working or is everything scripted? The AI is real — your messages are processed by a live language model and you get genuine responses. The ads are the scripted part. Some AI responses will include sponsored product mentions as part of the demonstration.

What happens to my chat data? Like all our free tools, conversations are logged to improve the service. We do not sell this data to advertisers — this is a demo, not an actual ad network.

How does the freemium gate work? After 5 free messages, you can either "watch an ad" (a simulated 5-second countdown) to unlock 5 more messages, or you can upgrade to our actual ad-free service. This mirrors how real freemium products work.

All of our tools are genuinely free — no ads, no paywalls, no sponsored responses. Just AI that works.

Build Your Own AI Chatbot — No Ads Required. Now that you've seen what ad-supported AI looks like, imagine giving your customers a clean, focused AI experience with zero interruptions. With 99helpers, you can deploy an AI chatbot trained on your content in minutes. No credit card required • Setup in minutes • No ads, ever

...

Read the original on 99helpers.com »

6 362 shares, 48 trendiness

AI Made Writing Code Easier. It Made Engineering Harder.

Yes, writing code is easier than ever.

AI assistants autocomplete your functions. Agents scaffold entire features. You can describe what you want in plain English and watch working code appear in seconds. The barrier to producing code has never been lower.

And yet, the day-to-day life of software engineers has gotten more complex, more demanding, and more exhausting than it was two years ago.

This is not a contradiction. It is the reality of what happens when an industry adopts a powerful new tool without pausing to consider the second-order effects on the people using it.

If you are a software engineer reading this and feeling like your job quietly became harder while everyone around you celebrates how easy everything is now, you are not imagining things. The job changed. The expectations changed. And nobody sent a memo.

There is a phenomenon happening right now that most engineers feel but struggle to articulate. The expected output of a software engineer in 2026 is dramatically higher than it was in 2023. Not because anyone held a meeting and announced new targets. Not because your manager sat you down and explained the new rules. The baseline just moved.

It moved because AI tools made certain tasks faster. And when tasks become faster, the assumption follows immediately: you should be doing more. Not in the future. Now.

A February 2026 study published in Harvard Business Review tracked 200 employees at a U.S. tech company over eight months. The researchers found something that will sound familiar to anyone living through this shift. Workers did not use AI to finish earlier and go home. They used it to do more. They took on broader tasks, worked at a faster pace, and extended their hours, often without anyone asking them to. The researchers described a self-reinforcing cycle: AI accelerated certain tasks, which raised expectations for speed. Higher speed made workers more reliant on AI. Increased reliance widened the scope of what workers attempted. And a wider scope further expanded the quantity and density of work.

The numbers tell the rest of the story. Eighty-three percent of workers in the study said AI increased their workload. Burnout was reported by 62 percent of associates and 61 percent of entry-level workers. Among C-suite leaders? Just 38 percent. The people doing the actual work are carrying the intensity. The people setting the expectations are not feeling it the same way.

This gap matters enormously. If leadership believes AI is making everything easier while engineers are drowning in a new kind of complexity, the result is a slow erosion of trust, morale, and eventually talent.

A separate survey of over 600 engineering professionals found that nearly two-thirds of engineers experience burnout despite their organizations using AI in development. Forty-three percent said leadership was out of touch with team challenges. Over a third reported that productivity had actually decreased over the past year, even as their companies invested more in AI tooling.

The baseline moved. The expectations rose. And for many engineers, no one acknowledged that the job they signed up for had fundamentally changed.

Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.

Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.

Now they are being told to stop.

Not explicitly, of course. Nobody walks into a standup and says "stop writing code." But the message is there, subtle and persistent. Use AI to write it faster. Let the agent handle the implementation. Focus on higher-level tasks. Your value is not in the code you write anymore, it is in how well you direct the systems that write it for you.

For early adopters, this feels exciting. It feels like evolution. For a significant portion of working engineers, it feels like being told that the thing they spent years mastering, the skill that defines their professional identity, is suddenly less important.

One engineer captured this shift perfectly in a widely shared essay, describing how AI transformed the engineering role from builder to reviewer. Every day felt like being a judge on an assembly line that never stops. You just keep stamping those pull requests. The production volume went up. The sense of craftsmanship went down.

This is not a minor adjustment. It is a fundamental shift in professional identity. Engineers who built their careers around deep technical skill are being asked to redefine what they do and who they are, essentially overnight, without any transition period, training, or acknowledgment that something significant was lost in the process.

Having led engineering teams for over two decades, I have seen technology shifts before. New frameworks, new languages, new methodologies. Engineers adapt. They always have. But this is different because it is not asking engineers to learn a new way of doing what they do. It is asking them to stop doing the thing that made them engineers in the first place and become something else entirely.

That is not an upgrade. That is a career identity crisis. And pretending it is not happening does not make it go away.

While en­gi­neers are be­ing asked to write less code, they are si­mul­ta­ne­ously be­ing asked to do more of every­thing else.

More prod­uct think­ing. More ar­chi­tec­tural de­ci­sion-mak­ing. More code re­view. More con­text switch­ing. More plan­ning. More test­ing over­sight. More de­ploy­ment aware­ness. More risk as­sess­ment.

The scope of what it means to be a “software engineer” expanded dramatically in the last two years, and it happened without a pause to catch up.

This is partly a di­rect con­se­quence of AI ac­cel­er­a­tion. When code gets pro­duced faster, the bot­tle­neck shifts. It moves from im­ple­men­ta­tion to every­thing sur­round­ing im­ple­men­ta­tion: re­quire­ments clar­ity, ar­chi­tec­ture de­ci­sions, in­te­gra­tion test­ing, de­ploy­ment strat­egy, mon­i­tor­ing, and main­te­nance. These were al­ways part of the en­gi­neer­ing life­cy­cle, but they were dis­trib­uted across roles. Product man­agers han­dled re­quire­ments. QA han­dled test­ing. DevOps han­dled de­ploy­ment. Senior ar­chi­tects han­dled sys­tem de­sign.

Now, with AI col­laps­ing the im­ple­men­ta­tion phase, or­ga­ni­za­tions are qui­etly re­dis­trib­ut­ing those re­spon­si­bil­i­ties to the en­gi­neers them­selves. The Harvard Business Review study doc­u­mented this ex­act pat­tern. Product man­agers be­gan writ­ing code. Engineers took on prod­uct work. Researchers started do­ing en­gi­neer­ing tasks. Roles that once had clear bound­aries blurred as work­ers used AI to han­dle jobs that pre­vi­ously sat out­side their re­mit.

The industry is openly talking about this as a positive development. Engineers should be “T-shaped” or “full-stack” in a broader sense. Nearly 45 percent of engineering roles now expect proficiency across multiple domains. AI tools augment generalists more effectively, making it easier for one person to handle multiple components of a system.

On pa­per, this sounds em­pow­er­ing. In prac­tice, it means that a mid-level back­end en­gi­neer is now ex­pected to un­der­stand prod­uct strat­egy, re­view AI-generated fron­tend code they did not write, think about de­ploy­ment in­fra­struc­ture, con­sider se­cu­rity im­pli­ca­tions of code they can­not fully trace, and main­tain a big-pic­ture ar­chi­tec­tural aware­ness that used to be some­one else’s job.

That is not em­pow­er­ment. That is scope creep with­out a cor­re­spond­ing in­crease in com­pen­sa­tion, au­thor­ity, or time.

From my ex­pe­ri­ence build­ing and scal­ing teams in fin­tech and high-traf­fic plat­forms, I can tell you that role ex­pan­sion with­out clear bound­aries al­ways leads to the same out­come: peo­ple try to do every­thing, noth­ing gets done with the depth it re­quires, and burnout fol­lows. The en­gi­neers who sur­vive are the ones who learn to say no, to pri­or­i­tize ruth­lessly, and to push back when the scope of their role qui­etly dou­bles with­out any­one ac­knowl­edg­ing it.

There is an irony at the cen­ter of the AI-assisted en­gi­neer­ing work­flow that no­body wants to talk about: re­view­ing AI-generated code is of­ten harder than writ­ing the code your­self.

When you write code, you carry the con­text of every de­ci­sion in your head. You know why you chose this data struc­ture, why you han­dled this edge case, why you struc­tured the mod­ule this way. The code is an ex­pres­sion of your think­ing, and re­view­ing it later is straight­for­ward be­cause the rea­son­ing is al­ready stored in your mem­ory.

When AI writes code, you in­herit the out­put with­out the rea­son­ing. You see the code, but you do not see the de­ci­sions. You do not know what trade­offs were made, what as­sump­tions were baked in, what edge cases were con­sid­ered or ig­nored. You are re­view­ing some­one else’s work, ex­cept that some­one is not a col­league you can ask ques­tions. It is a sta­tis­ti­cal model that pro­duces plau­si­ble-look­ing code with­out any un­der­stand­ing of your sys­tem’s spe­cific con­straints.

A sur­vey by Harness found that 67 per­cent of de­vel­op­ers re­ported spend­ing more time de­bug­ging AI-generated code, and 68 per­cent spent more time re­view­ing it than they did with hu­man-writ­ten code. This is not a fail­ure of the tools. It is a struc­tural prop­erty of the work­flow. Code re­view with­out shared con­text is in­her­ently more de­mand­ing than re­view­ing code you par­tic­i­pated in cre­at­ing.

Yet the ex­pec­ta­tion from man­age­ment is that AI should be mak­ing every­thing faster. So en­gi­neers find them­selves in a bind: they are pro­duc­ing more code than ever, but the qual­ity as­sur­ance bur­den has in­creased, the con­text-per-line-of-code has de­creased, and the cog­ni­tive load of main­tain­ing a sys­tem they only par­tially built is grow­ing with every sprint.

This is the su­per­vi­sion para­dox. The faster AI gen­er­ates code, the more hu­man at­ten­tion is re­quired to en­sure that code ac­tu­ally works in the con­text of a real sys­tem with real users and real busi­ness con­straints. The pro­duc­tion bot­tle­neck did not dis­ap­pear. It moved from writ­ing to un­der­stand­ing, and un­der­stand­ing is harder to speed up.

What makes all of this es­pe­cially dif­fi­cult is the self-re­in­forc­ing na­ture of the cy­cle.

AI makes cer­tain tasks faster. Faster tasks cre­ate the per­cep­tion of more avail­able ca­pac­ity. More per­ceived ca­pac­ity leads to more work be­ing as­signed. More work leads to more AI re­liance. More AI re­liance leads to more code that needs re­view, more con­text that needs to be main­tained, more sys­tems that need to be un­der­stood, and more cog­ni­tive load on en­gi­neers who are al­ready stretched thin.

The Harvard Business Review researchers described this as “workload creep.” Workers did not consciously decide to work harder. The expansion happened naturally, almost invisibly. Each individual step felt reasonable. In aggregate, it produced an unsustainable pace.

Before AI, there was a nat­ural ceil­ing on how much you could pro­duce in a day. That ceil­ing was set by think­ing speed, typ­ing speed, and the time it takes to look things up. It was frus­trat­ing some­times, but it was also a gov­er­nor. A nat­ural speed limit that pre­vented you from out­run­ning your own abil­ity to main­tain qual­ity.

AI re­moved the gov­er­nor. Now the only limit is your cog­ni­tive en­durance. And most peo­ple do not know their cog­ni­tive lim­its un­til they have al­ready blown past them.

This is where many en­gi­neers find them­selves right now. Shipping more code than any quar­ter in their ca­reer. Feeling more drained than any quar­ter in their ca­reer. The two facts are not un­re­lated.

The trap is that it looks like pro­duc­tiv­ity from the out­side. Metrics go up. Velocity charts look great. More fea­tures shipped. More pull re­quests merged. But un­der­neath the num­bers, qual­ity is qui­etly erod­ing, tech­ni­cal debt is ac­cu­mu­lat­ing faster than it can be ad­dressed, and the peo­ple do­ing the work are run­ning on fumes.

If the pic­ture is dif­fi­cult for ex­pe­ri­enced en­gi­neers, it is even harder for those start­ing their ca­reers.

Junior en­gi­neers have tra­di­tion­ally learned by do­ing the sim­pler, more task-ori­ented work. Fixing small bugs. Writing straight­for­ward fea­tures. Implementing well-de­fined tick­ets. This hands-on work built the foun­da­tional un­der­stand­ing that even­tu­ally al­lowed them to take on more com­plex chal­lenges.

AI is rapidly con­sum­ing that train­ing ground. If an agent can han­dle the rou­tine API hookup, the boil­er­plate mod­ule, the straight­for­ward CRUD end­point, what is left for a ju­nior en­gi­neer to learn from? The ex­pec­ta­tion is shift­ing to­ward need­ing to con­tribute at a higher level al­most from day one, with­out the grad­ual ramp-up that pre­vi­ous gen­er­a­tions of en­gi­neers re­lied on.

Entry-level hir­ing at the 15 largest tech firms fell 25 per­cent from 2023 to 2024. The HackerRank 2025 Developer Skills Report con­firmed that ex­pec­ta­tions are ris­ing faster than pro­duc­tiv­ity gains, and that early-ca­reer hir­ing re­mains slug­gish com­pared to se­nior-level roles. Companies are pri­or­i­tiz­ing ex­pe­ri­enced tal­ent, but the pipeline that pro­duces ex­pe­ri­enced tal­ent is be­ing qui­etly dis­man­tled.

This is a prob­lem that ex­tends be­yond in­di­vid­ual ca­reer con­cerns. If ju­nior en­gi­neers do not get the op­por­tu­nity to build foun­da­tional skills through hands-on work, the in­dus­try will even­tu­ally face a short­age of se­nior en­gi­neers who truly un­der­stand the sys­tems they over­see. You can­not su­per­vise what you never learned to build.

As I have writ­ten be­fore, code is for hu­mans to read. If the next gen­er­a­tion of en­gi­neers never de­vel­ops the flu­ency to read, un­der­stand, and rea­son about code at a deep level, no amount of AI tool­ing will com­pen­sate for that gap.

If you lead en­gi­neer­ing teams, the most im­por­tant thing you can do right now is ac­knowl­edge that this tran­si­tion is gen­uinely dif­fi­cult. Not the­o­ret­i­cally. Not ab­stractly. For the ac­tual peo­ple on your team.

The ca­reer they signed up for changed fast. The skills they were hired for are be­ing repo­si­tioned. The ex­pec­ta­tions they are work­ing un­der shifted with­out a clear an­nounce­ment. Acknowledging this re­al­ity is not a sign of weak­ness. It is a pre­req­ui­site for main­tain­ing a team that trusts you.

Start with em­pa­thy, but do not stop there.

Give your team real train­ing. Not a lunch-and-learn about prompt en­gi­neer­ing. Real in­vest­ment in the skills that the new en­gi­neer­ing land­scape ac­tu­ally re­quires: sys­tem de­sign, ar­chi­tec­tural think­ing, prod­uct rea­son­ing, se­cu­rity aware­ness, and the abil­ity to crit­i­cally eval­u­ate code they did not write. These are not triv­ial skills. They take time to de­velop, and your team needs struc­tured sup­port to build them.

Give them space to ex­per­i­ment with­out the pres­sure of im­me­di­ate pro­duc­tiv­ity gains. The en­gi­neers who will thrive in this en­vi­ron­ment are the ones who have room to fig­ure out how AI fits into their work­flow with­out be­ing pe­nal­ized for the learn­ing curve. Every ex­pe­ri­enced tech­nol­o­gist I know who has suc­cess­fully in­te­grated AI tools went through an ad­just­ment pe­riod where they were less pro­duc­tive be­fore they be­came more pro­duc­tive. That ad­just­ment pe­riod is nor­mal, and it needs to be pro­tected.

Set ex­plicit bound­aries around role scope. If you are ask­ing en­gi­neers to take on prod­uct think­ing, plan­ning, and risk as­sess­ment in ad­di­tion to their tech­ni­cal work, name it. Define it. Compensate for it. Do not let it hap­pen silently and then won­der why your team is burned out.

Rethink your met­rics. If your en­gi­neer­ing suc­cess met­rics are still cen­tered on ve­loc­ity, tick­ets closed, and lines of code, you are mea­sur­ing the wrong things in an AI-assisted world. System sta­bil­ity, code qual­ity, de­ci­sion qual­ity, cus­tomer out­comes, and team health are bet­ter in­di­ca­tors of whether your en­gi­neer­ing or­ga­ni­za­tion is ac­tu­ally pro­duc­ing value or just pro­duc­ing vol­ume.

Protect the ju­nior pipeline. If you have stopped hir­ing ju­nior en­gi­neers be­cause AI can han­dle en­try-level tasks, you are solv­ing a short-term ef­fi­ciency prob­lem by cre­at­ing a long-term tal­ent cri­sis. The se­nior en­gi­neers you rely on to­day were ju­nior en­gi­neers who learned by do­ing the work that AI is now con­sum­ing. That path still mat­ters.

And fi­nally, keep chal­leng­ing your team. I have never met a good en­gi­neer who did not love a good chal­lenge. The en­gi­neers on your team are not frag­ile. They are ca­pa­ble, in­tel­li­gent peo­ple who signed up for hard prob­lems. They can han­dle this tran­si­tion. Just make sure they are set up to meet it.

If you are an en­gi­neer nav­i­gat­ing this shift, here is what I would tell you based on two decades of watch­ing tech­nol­ogy cy­cles re­shape this pro­fes­sion.

First, do not abandon your fundamentals. The pressure to become an “AI-first” engineer is real, but the engineers who will be most valuable in five years are the ones who deeply understand the systems they work on. AI is a tool. Understanding architecture, debugging complex systems, reasoning about performance and security: these skills are not becoming less important. They are becoming more important because someone needs to be the adult in the room when AI-generated code breaks in production at 2 AM.

Second, learn to set bound­aries with the ac­cel­er­a­tion trap. Just be­cause you can pro­duce more does not mean you should. Sustainable pace mat­ters. The en­gi­neers who burn out try­ing to match the the­o­ret­i­cal max­i­mum out­put AI makes pos­si­ble are not the ones who build last­ing ca­reers. The ones who learn to work with AI de­lib­er­ately, choos­ing when to use it and when to think in­de­pen­dently, are the ones who will still be thriv­ing in this pro­fes­sion a decade from now.

Third, em­brace the parts of the ex­panded role that gen­uinely in­ter­est you. If the en­gi­neer­ing role now in­cludes more prod­uct think­ing, more ar­chi­tec­tural de­ci­sion-mak­ing, more cross-func­tional com­mu­ni­ca­tion, treat that as an op­por­tu­nity rather than an im­po­si­tion. These are skills that se­nior en­gi­neers and tech­ni­cal lead­ers need. You are be­ing given ac­cess to a broader set of ca­pa­bil­i­ties ear­lier in your ca­reer than any pre­vi­ous gen­er­a­tion of en­gi­neers. That is not a bur­den. It is a head start.

Fourth, talk about what you are ex­pe­ri­enc­ing. The iso­la­tion of feel­ing like you are the only one strug­gling with this tran­si­tion is one of the most dam­ag­ing as­pects of the cur­rent mo­ment. You are not the only one. The data con­firms it. Two-thirds of en­gi­neers re­port burnout. The ex­pec­ta­tion gap be­tween lead­er­ship and en­gi­neer­ing teams is well doc­u­mented. Talking openly about these chal­lenges, with your team, with your man­ager, with your broader net­work, is not com­plain­ing. It is pro­fes­sional hon­esty.

And fifth, re­mem­ber that this pro­fes­sion has sur­vived every pre­dic­tion of its demise. COBOL was sup­posed to elim­i­nate pro­gram­mers. Expert sys­tems were sup­posed to re­place them. Fourth-generation lan­guages, CASE tools, vi­sual pro­gram­ming, no-code plat­forms, out­sourc­ing. Every decade brings a new tech­nol­ogy that promises to make soft­ware en­gi­neers ob­so­lete, and every decade the de­mand for skilled en­gi­neers grows. AI will not be dif­fer­ent. The tools change. The fun­da­men­tals en­dure.

AI made writ­ing code eas­ier and made be­ing an en­gi­neer harder. Both of these things are true at the same time, and pre­tend­ing that only the first one mat­ters is how or­ga­ni­za­tions lose their best peo­ple.

The en­gi­neers who are strug­gling right now are not strug­gling be­cause they are bad at their jobs. They are strug­gling be­cause their jobs changed un­der­neath them while the in­dus­try cel­e­brated the part that got eas­ier and ig­nored the parts that got harder.

Expectations rose with­out an­nounce­ment. Roles ex­panded with­out bound­aries. Output de­mands in­creased with­out cor­re­spond­ing in­creases in sup­port, train­ing, or ac­knowl­edg­ment. And the en­gi­neers who raised con­cerns were told, im­plic­itly or ex­plic­itly, that they just needed to adapt faster.

That is not how you build a sus­tain­able en­gi­neer­ing cul­ture. That is how you build a burnout ma­chine.

The in­dus­try needs to name this para­dox hon­estly. AI is an in­cred­i­ble tool. It is also plac­ing enor­mous new de­mands on the peo­ple us­ing it. Both things can be true. Both things need to be ad­dressed.

The or­ga­ni­za­tions that get this right, that in­vest in their peo­ple along­side their tools, that ac­knowl­edge the hu­man cost of rapid tech­no­log­i­cal change while still push­ing for­ward, those are the or­ga­ni­za­tions that will at­tract and re­tain the best en­gi­neer­ing tal­ent in the years ahead.

The ones that do not will dis­cover some­thing that every tech­nol­ogy cy­cle even­tu­ally teaches: tools do not build prod­ucts. People do. And peo­ple have lim­its that no amount of AI can au­to­mate away.

If this res­onated with you, I would love to hear your per­spec­tive. What has changed most about your en­gi­neer­ing role in the last year? Drop me a mes­sage or con­nect with me on LinkedIn. I write reg­u­larly about the in­ter­sec­tion of AI, soft­ware en­gi­neer­ing, and lead­er­ship at ivan­turkovic.com. Follow along if you want hon­est, ex­pe­ri­ence-dri­ven per­spec­tives on how tech­nol­ogy is ac­tu­ally chang­ing this pro­fes­sion.

...

Read the original on www.ivanturkovic.com »

7 322 shares, 29 trendiness

Decision Trees

Let’s pre­tend we’re farm­ers with a new plot of land. Given only the Diameter and Height of a tree trunk, we must de­ter­mine if it’s an Apple, Cherry, or Oak tree. To do this, we’ll use a Decision Tree. Almost every tree with a Diameter ≥ 0.45 is an Oak tree! Thus, we can prob­a­bly as­sume that any other trees we find in that re­gion will also be one.

This first de­ci­sion node will act as our root node. We’ll draw a ver­ti­cal line at this Diameter and clas­sify every­thing above it as Oak (our first leaf node), and con­tinue to par­ti­tion our re­main­ing data on the left. We con­tinue along, hop­ing to split our plot of land in the most fa­vor­able man­ner. We see that cre­at­ing a new de­ci­sion node at Height ≤ 4.88 leads to a nice sec­tion of Cherry trees, so we par­ti­tion our data there.

Our Decision Tree updates accordingly, adding a new leaf node for Cherry.

And Some More

After this second split we're left with an area containing many Apple and some Cherry trees. No problem: a vertical division can be drawn to separate the Apple trees a bit better.

Once again, our Decision Tree updates accordingly.

And Yet Some More

The remaining region just needs a further horizontal division and boom - our job is done! We've obtained an optimal set of nested decisions.

That said, some re­gions still en­close a few mis­clas­si­fied points. Should we con­tinue split­ting, par­ti­tion­ing into smaller sec­tions?

Hmm… If we do, the re­sult­ing re­gions would start be­com­ing in­creas­ingly com­plex, and our tree would be­come un­rea­son­ably deep. Such a Decision Tree would learn too much from the noise of the train­ing ex­am­ples and not enough gen­er­al­iz­able rules.

Does this sound familiar? It is the well-known tradeoff that we explored in our explainer on The Bias Variance Tradeoff! In this case, going too deep results in a tree that overfits our data, so we'll stop here.

We’re done! We can sim­ply pass any new data point’s Height and Diameter val­ues through the newly cre­ated Decision Tree to clas­sify them as ei­ther an Apple, Cherry, or Oak tree!

Decision Trees are supervised machine learning algorithms used for both regression and classification problems. They're popular for their ease of interpretation and large range of applications. A Decision Tree consists of a series of sequential decisions, or decision nodes, on some dataset's features. The resulting flow-like structure is navigated via conditional control statements, or if-then rules, which split each decision node into two or more subnodes. Leaf nodes, also known as terminal nodes, represent the model's prediction outputs. To train a Decision Tree from data means to figure out the order in which the decisions should be assembled from the root to the leaves. New data may then be passed from the top down until reaching a leaf node, representing a prediction for that data point.
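The structure just described maps naturally onto a tiny Python sketch. This is my own illustration, not code from the explainer: the field names and the simplified two-level tree below are hypothetical, collapsing the walkthrough's later splits into single leaves.

```python
class Node:
    """A decision node tests one feature against a cutoff; a leaf stores a prediction."""
    def __init__(self, feature=None, threshold=None, left=None, right=None, prediction=None):
        self.feature = feature        # e.g. "diameter" or "height" (hypothetical names)
        self.threshold = threshold    # cutoff value for the if-then rule
        self.left = left              # subtree taken when feature <= threshold
        self.right = right            # subtree taken when feature > threshold
        self.prediction = prediction  # class label, set only at leaf nodes

def predict(node, sample):
    """Walk from the root down to a leaf, following the if-then rules."""
    while node.prediction is None:
        node = node.left if sample[node.feature] <= node.threshold else node.right
    return node.prediction

# The root split from the walkthrough: Diameter <= 0.45, everything above is Oak,
# and Height <= 4.88 carves out the Cherry region on the left branch.
tree = Node(feature="diameter", threshold=0.45,
            left=Node(feature="height", threshold=4.88,
                      left=Node(prediction="Cherry"),
                      right=Node(prediction="Apple")),
            right=Node(prediction="Oak"))

print(predict(tree, {"diameter": 0.7, "height": 3.0}))  # Oak
```

A real trained tree would be deeper (the walkthrough adds further splits), but the traversal logic is identical: follow the rules from the root until a leaf's prediction is reached.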

We just saw how a Decision Tree op­er­ates at a high-level: from the top down, it cre­ates a se­ries of se­quen­tial rules that split the data into well-sep­a­rated re­gions for clas­si­fi­ca­tion. But given the large num­ber of po­ten­tial op­tions, how ex­actly does the al­go­rithm de­ter­mine where to par­ti­tion the data? Before we learn how that works, we need to un­der­stand Entropy.

Entropy measures the amount of information of some variable or event. We'll make use of it to identify regions consisting of a large number of similar (pure) or dissimilar (impure) elements. Given a certain set of events that occur with probabilities p₁, p₂, …, pₙ, the total entropy H can be written as the negative sum of weighted probabilities:

H = −Σᵢ pᵢ log₂(pᵢ)

The quantity H has a number of interesting properties. H = 0 only if all but one of the pᵢ are zero, this one having the value of 1; thus the entropy vanishes only when there is no uncertainty in the outcome, meaning that the sample is completely unsurprising. H is maximum when all the pᵢ are equal: this is the most uncertain, or 'impure', situation. Any change towards the equalization of the probabilities increases H.

The entropy can be used to quantify the impurity of a collection of labeled data points: a node containing multiple classes is impure whereas a node including only one class is pure. (The original page includes an interactive bubble where you can add and remove points from two classes and watch the entropy change.) Pure samples have zero entropy whereas impure ones have larger entropy values. This is what entropy is doing for us: measuring how pure (or impure) a set of samples is. We'll use it in the algorithm to train Decision Trees by defining the Information Gain.
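The entropy of a labeled sample can be computed in a few lines of Python. This is a sketch of my own, not code from the explainer:

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy, -sum(p_i * log2(p_i)), over the class proportions of `labels`."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# A pure sample has zero entropy; an even two-class mix maximizes it at 1 bit.
print(entropy(["apple"] * 8) == 0.0)            # True: no uncertainty at all
print(entropy(["apple"] * 4 + ["cherry"] * 4))  # 1.0
```

Note that classes with probability zero simply never appear in the counts, which matches the convention 0 · log₂(0) = 0 used in the formula.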

With the intuition gained above, we can now describe the logic used to train Decision Trees. As the name implies, information gain measures the amount of information that we gain. It does so using entropy. The idea is to subtract the entropy of each possible partition after the split from the entropy of our data before the split. We then select the split that yields the largest reduction in entropy, or equivalently, the largest increase in information.

The core algorithm to calculate information gain is called ID3. It's a recursive procedure that starts from the root node of the tree and iterates top-down on all non-leaf branches in a greedy manner, calculating at each depth the difference in entropy before and after each candidate split:

IG(S, A) = H(S) − Σᵥ (|Sᵥ| / |S|) · H(Sᵥ)

Here S is the set of samples at the node, the Sᵥ are the subsets produced by splitting on feature and cutoff A, and H is the entropy defined earlier.

To be specific, the algorithm's steps are as follows:

1. Calculate the entropy associated to every feature of the data set.
2. Partition the data set into subsets using different features and cutoff values. For each, compute the information gain as the difference in entropy before and after the split. For the total entropy of all children nodes after the split, use the weighted average, taking into account how many of the samples end up on each child branch.
3. Identify the partition that leads to the maximum information gain. Create a decision node on that feature and split value.
4. When no further splits can be done on a subset, create a leaf node and label it with the most common class of the data points within it if doing classification, or with the average value if doing regression.
5. Recurse on all subsets. Recursion stops if after a split all elements in a child node are of the same type. Additional stopping conditions may be imposed, such as requiring a minimum number of samples per leaf to continue splitting, or finishing when the trained tree has reached a given maximum depth.

Of course, reading the steps of an algorithm isn't always the most intuitive thing. To make things easier to understand, let's revisit how information gain was used to determine the first decision node in our tree. Recall our first decision node split on Diameter ≤ 0.45. How did we choose this condition? It was the result of maximizing information gain.
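The split-selection step above can be sketched for a single numeric feature in plain Python. The function names and the toy data are mine, made up for illustration; the explainer's real dataset lives in its interactive charts.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, values, cutoff):
    """Entropy before the split minus the size-weighted entropy of the two children."""
    left = [y for y, v in zip(labels, values) if v <= cutoff]
    right = [y for y, v in zip(labels, values) if v > cutoff]
    n = len(labels)
    children = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - children

def best_split(labels, values):
    """Greedily pick the cutoff with maximum information gain (one feature)."""
    candidates = sorted(set(values))[:-1]  # splitting above the maximum separates nothing
    return max(candidates, key=lambda c: information_gain(labels, values, c))

# Toy data: trunk diameters and species labels (hypothetical numbers).
diam = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
species = ["Apple", "Apple", "Cherry", "Oak", "Oak", "Oak"]
print(best_split(species, diam))  # 0.4
```

On this toy data the best cutoff cleanly separates the Oaks from everything else, mirroring how the explainer's first decision node was chosen.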

Each of the pos­si­ble splits of the data on its two fea­tures (Diameter and Height) and cut­off val­ues yields a dif­fer­ent value of the in­for­ma­tion gain.

The line chart dis­plays the dif­fer­ent split val­ues for the Diameter fea­ture. Move the de­ci­sion bound­ary your­self to see how the data points in the top chart are as­signed to the left or right chil­dren nodes ac­cord­ingly. On the bot­tom you can see the cor­re­spond­ing en­tropy val­ues of both chil­dren nodes as well as the to­tal in­for­ma­tion gain.

The ID3 algorithm will select the split point with the largest information gain, shown as the peak of the black line in the bottom chart: 0.574 at Diameter = 0.45.

An alternative to the entropy for the construction of Decision Trees is the Gini impurity. This quantity is also a measure of information and can be seen as a variation of Shannon's entropy. Decision Trees trained using entropy or Gini impurity are comparable, and only in a few cases do results differ considerably. In the case of imbalanced data sets, entropy might be more prudent. Yet Gini might train faster as it does not make use of logarithms.
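The two impurity measures differ only in their formula, which a short comparison makes concrete. Again a sketch of mine, not the explainer's code:

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy, -sum(p_i * log2(p_i))."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity, 1 - sum(p_i^2). No logarithms, so slightly cheaper to evaluate."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

# A 3:1 mix of two classes: both measures agree it is impure but not maximally so.
sample = ["apple"] * 3 + ["cherry"]
print(round(entropy(sample), 3))  # 0.811
print(round(gini(sample), 3))     # 0.375
```

Both quantities vanish for a pure node and peak when the classes are evenly mixed, which is why trees trained with either criterion usually come out similar.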

Another Look At Our Decision Tree

Let's recap what we've learned so far. First, we saw how a Decision Tree classifies data by repeatedly partitioning the feature space into regions according to some conditional series of rules. Second, we learned about entropy, a popular metric used to measure the purity (or lack thereof) of a given sample of data. Third, we learned how Decision Trees use entropy in information gain and the ID3 algorithm to determine the exact conditional series of rules to select. Taken together, the three sections detail the typical Decision Tree algorithm.

To re­in­force con­cepts, let’s look at our Decision Tree from a slightly dif­fer­ent per­spec­tive.

The tree below maps exactly to the tree we showed in the How to Build a Decision Tree section above. However, instead of showing the partitioned feature space alongside our tree's structure, let's look at the partitioned data points and their corresponding entropy at each node itself:

From the top down, our sam­ple of data points to clas­sify shrinks as it gets par­ti­tioned to dif­fer­ent de­ci­sion and leaf nodes. In this man­ner, we could trace the full path taken by a train­ing data point if we so de­sired. Note also that not every leaf node is pure: as dis­cussed pre­vi­ously (and in the next sec­tion), we don’t want the struc­ture of our Decision Trees to be too deep, as such a model likely won’t gen­er­al­ize well to un­seen data.

Without question, Decision Trees have a lot of things going for them. They're simple models that are easy to interpret. They're fast to train and require minimal data preprocessing. And they handle outliers with ease. Yet they suffer from a major limitation, and that is their instability compared with other predictors. They can be extremely sensitive to small perturbations in the data: a minor change in the training examples can result in a drastic change in the structure of the Decision Tree. Check for yourself how small random Gaussian perturbations on just 5% of the training examples create a set of completely different Decision Trees:

Why Is This A Problem?

In their vanilla form, Decision Trees are unstable.

If left unchecked, the ID3 al­go­rithm to train Decision Trees will work end­lessly to min­i­mize en­tropy. It will con­tinue split­ting the data un­til all leaf nodes are com­pletely pure - that is, con­sist­ing of only one class. Such a process may yield very deep and com­plex Decision Trees. In ad­di­tion, we just saw that Decision Trees are sub­ject to high vari­ance when ex­posed to small per­tur­ba­tions of the train­ing data.

Both issues are undesirable, as they lead to predictors that fail to clearly distinguish between persistent and random patterns in the data, a problem known as overfitting. This is problematic because it means that our model won't perform well when exposed to new data. There are ways to prevent excessive growth of Decision Trees by pruning them: for instance, constraining their maximum depth, limiting the number of leaves that can be created, or setting a minimum number of items per leaf so that leaves with too few items are not allowed.
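Those pruning constraints correspond directly to hyperparameters in common libraries. Here is a sketch using scikit-learn, which is an assumption on my part: the explainer names the constraints but prescribes no library, and the dataset below is synthetic.

```python
# Sketch only: assumes scikit-learn is installed. The constraints shown
# (maximum depth, minimum leaf size) are the ones named in the text.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# A synthetic two-feature classification problem standing in for the tree dataset.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Unconstrained: keeps splitting until every leaf is pure.
unpruned = DecisionTreeClassifier(random_state=0).fit(X, y)

# Pruned: capped depth and a minimum leaf size prevent noise-chasing splits.
pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10,
                                random_state=0).fit(X, y)

print(unpruned.get_depth(), pruned.get_depth())  # the pruned tree is much shallower
```

The pruned tree typically generalizes better precisely because it stops before memorizing the random patterns in the training sample.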

As for the is­sue of high vari­ance? Well, un­for­tu­nately it’s an in­trin­sic char­ac­ter­is­tic when train­ing a sin­gle Decision Tree.

...

Read the original on mlu-explain.github.io »

8 294 shares, 12 trendiness

Block the “Upgrade to Tahoe” alerts and System Settings indicator

Although I have to have a machine running macOS Tahoe to support our customers, I personally don't like the look of Liquid Glass, nor do I like some of the functional changes Apple has made in macOS Tahoe.

So I have macOS Tahoe on my laptop, but I'm keeping my desktop Mac on macOS Sequoia for now. Which means I have the joy of seeing things like this wonderful notification on a regular basis.

Or I did, until I found a way to block them, at least in 90-day chunks. Now when I open System Settings → General → Software Update, I see this:

The secret? Using device management profiles, which let you enforce policies on Macs in your organization, even if that "organization" is one Mac on your desk. One of the available policies is the ability to block activities related to major macOS updates for up to 90 days at a time (the max the policy allows), which seems like exactly what I needed.

Not being anywhere near an expert on device profiles, I went looking to see what I could find, and stumbled on the Stop Tahoe Update project. The eventual goals of this project are quite impressive, but what they've done so far is exactly what I needed: a configuration profile that blocks Tahoe update activities for 90 days.

I first tried to get things working by following the Read Me, but it's missing some key steps. After some fumbling about, I managed to get it working by using these modified instructions:

Clone the repo and switch to its directory in Terminal; run the two commands as shown in the project's Read Me:

$ git clone https://github.com/travisvn/stop-tahoe-update.git
$ cd stop-tahoe-update

Set all the scripts to executable (not in the instructions):

$ chmod 755 ./scripts/*.sh

Create and insert two UUIDs into the profile (not in the instructions). To do this, use your favorite text editor to edit the deferral-90days.mobileconfig file in the profiles folder, and look for the two placeholder UUID lines. You need to replace that placeholder text with two distinct UUIDs; the easiest way is to generate two UUIDs in Terminal, then copy and paste each one in place of the placeholder text. Save the changes and quit the editor, unless you want to make the following optional change.

Optional step: I didn't want to defer normal updates, just the major OS update, so I changed the relevant section of the profile accordingly.
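
For reference, in Apple's MDM Restrictions payload (com.apple.applicationaccess) the major-update deferral is controlled by keys along these lines (a sketch based on Apple's documented key names, not necessarily the project's exact file):

```xml
<!-- Sketch: defer major macOS updates for the maximum 90 days.
     Key names are from Apple's MDM Restrictions payload; verify
     against the project's deferral-90days.mobileconfig. -->
<key>forceDelayedMajorSoftwareUpdates</key>
<true/>
<key>enforcedSoftwareUpdateMajorOSDeferredInstallDelay</key>
<integer>90</integer>
```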

This way, I'll still get notifications for updates other than the major OS update, in case Apple releases anything further for macOS Sequoia. Remember to save your changes, then quit the editor.

Run the script as described in the project's Read Me:

$ ./scripts/install-profile.sh profiles/deferral-90days.mobileconfig

When run, you'll see output in Terminal indicating that you're not done yet:

Installing profile: profiles/deferral-90days.mobileconfig
profiles tool no longer supports installs. Use System Settings Profiles to add configuration profiles.
Done. You may need to open System Settings → Privacy & Security → Profiles to approve.

You'll also get an onscreen alert saying basically the same thing. To finish the installation, open System Settings and click on the Profile Downloaded entry in the sidebar. This will take you to a screen showing the profile you just added. Double-click on that profile, and a dialog appears showing the settings, reflecting the changes I made to remove minor updates from the policy. Click the Install button, which will lead you to Yet Another Dialog; again click Install, and you'll finally be done. Quit and relaunch System Settings, and you should see a message like mine at the top of the Software Update panel.

As I've just done all this today, I'm not sure exactly what happens in 90 days. I imagine I may be notified that the policy has expired, or maybe I'll just see a macOS Tahoe update notification. Either way, you can reinstall the policy by running the install script again. Alternatively, and to make things much simpler, here's what I've done…

I copied my modified profile (the deferral-90days.mobileconfig file) to one of my utility folders, so I could remove the repo as I won't need it any more. Then I looked at the install script, which tries to install the profile using the profiles command, and if that fails, opens the profile to install it. In Sequoia, you can't use profiles to install a profile, so only the open part of the command is needed.

Once I figured out I only needed the open command, I added a simple alias in my shell configuration file:

# Reinstall the no-Tahoe 90-day policy

alias notahoe='open "/path/to/deferral-90days.mobileconfig"; sleep 2; open "x-apple.systempreferences:com.apple.preferences.configurationprofiles"'

Now I just have to type notahoe every 90 days, and the profile will be reinstalled, and System Settings will open to the profiles panel, where a few clicks will finish activating the installed profile. We'll see how that goes in April :).

I am so much happier now, not being interrupted with the Tahoe update notification, and not having the glaring red "1" on the System Settings icon.

...

Read the original on robservatory.com »

9 220 shares, 9 trendiness

MinIO Is Dead, Long Live MinIO

MinIO's open-source repo has been officially archived. No more maintenance. End of an era — but open source doesn't die that easily.

I created a MinIO fork, restored the admin console, rebuilt the binary distribution pipeline, and brought it back to life.

If you're running MinIO, swap minio/minio for pgsty/minio. Everything else stays the same. (CVEs fixed, and the console GUI is back)

On December 3, 2025, MinIO announced "maintenance mode" on GitHub. I wrote about it in MinIO Is Dead.

On February 12, 2026, MinIO updated the repo status from "maintenance mode" to "no longer maintained", then officially archived the repository. Read-only. No PRs, no issues, no contributions accepted. A project with 60k stars and over a billion Docker pulls became a digital tombstone.

If December was the clinical death, this February commit was the death certificate.

Today (Feb 14), a widely circulated article titled "How MinIO went from open source darling to cautionary tale" laid out the full timeline.

Percona founder Peter Zaitsev also raised concerns about open-source infrastructure sustainability on LinkedIn. The consensus in the international community is clear:

Looking back at the timeline over the past few years, this wasn't a sudden death. It was a slow, deliberate wind-down:

A company that raised $126M at a billion-dollar valuation spent five years methodically dismantling the open-source ecosystem it built.

Normally this is where the story ends — a collective sigh, and everyone moves on.

But I want to tell a different story. Not an obituary — a resurrection.

MinIO Inc. can archive a repo, but they can't archive the rights that the AGPL grants to the community.

Ironically, AGPL was MinIO's own choice. They switched from Apache 2.0 to AGPL to use it as leverage in their disputes with Nutanix and Weka — keeping the "open source" label while adding enforcement teeth. But open-source licenses cut both ways — the same license now guarantees the community's right to fork.

Once code is released under AGPL, the license is irrevocable. You can set a repo to read-only, but you can't claw back a granted license. That's the beauty of open-source licensing by design: a company can abandon a project, but it can't take the code with it.

So — MinIO is dead, but MinIO can live again.

That said, forking is the easy part. Anyone can click the Fork button. The real question isn't "can we fork it" but "can someone actually maintain it as a production component?"

I didn't set out to take this on. But after MinIO entered maintenance mode, I waited a couple of weeks for someone in the community to step up.

No one did. So I did it myself.

Some background: I maintain Pigsty — a batteries-included PostgreSQL distribution with 460+ extensions, cross-built for 14 Linux distros. I also maintain build pipelines for 290 PG extensions, several PG forks, and dozens of Go projects (Victoria, Prometheus, etc.), packaging across all major platforms. Adding one more to the pipeline was a piece of cake.

I'm not new to MinIO either. Back in 2018, we ran an internal MinIO fork at TanTan (back when it was still Apache 2.0), managing ~25 PB of data — one of the earliest and largest MinIO deployments in China at the time.

More importantly, MinIO is an optional module in Pigsty. Many users run it as the default backup repository for PostgreSQL in production. We did consider several alternatives, but none were a drop-in replacement for MinIO-based workflows.

We use MinIO ourselves, so keeping the supply chain alive was not optional — it had to be done.

As early as December 2025, when MinIO announced maintenance mode, I had already built CVE-patched binaries and switched to them.

As of today, three things.

This was the change that frustrated the community the most.

In May 2025, MinIO stripped the full admin console from the community edition, leaving behind a bare-bones object browser. User management, bucket policies, access control, lifecycle management — all gone overnight. Want them back? Pay for the enterprise edition. (~$100,000)

The ironic part: this didn't even require reverse engineering. You just revert the minio/console submodule to the previous version. They swapped a dependency version to replace the full console with a stripped-down one. The code was always there.

In October 2025, MinIO stopped distributing pre-built binaries and Docker images, leaving only source code. "Use go install to build it yourself" — that was their answer.

For the vast majority of users, the value of open-source software isn't just a copy of the source — supply-chain stability is what matters.

You need a stable artifact you can put in a Dockerfile, an Ansible playbook, or a CI/CD pipeline — not a requirement to install a Go compiler before every deployment.

If you're using Docker, just swap minio/minio for pgsty/minio.
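
As an illustration, a minimal Docker Compose service with the image swapped might look like this (a sketch; credentials, ports, and volume names are placeholders, and the server flags follow upstream MinIO's usual invocation):

```yaml
# Sketch: drop-in image swap from minio/minio to pgsty/minio.
services:
  minio:
    image: pgsty/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin         # placeholder credential
      MINIO_ROOT_PASSWORD: change-me-now  # placeholder credential
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # admin console (restored in the fork)
    volumes:
      - minio-data:/data
volumes:
  minio-data:
```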

For native Linux installs, grab RPM/DEB packages from the GitHub Release page. You can also use pig (the PG extension package manager) for easy installation, or configure the pigsty-infra APT/DNF repo to install from it:

curl https://repo.pigsty.io/pig | bash;

pig repo add infra -u; pig install minio

MinIO's official documentation was also at risk — links had started redirecting to their commercial product, AIStor.

We forked minio/docs, fixed broken links, restored the removed console documentation, and deployed it here.

The docs use the same CC Attribution 4.0 license as the original, with the necessary maintenance applied.

Some things worth stating up front to set expectations.

MinIO as an S3-compatible object store is already feature-complete. It's finished software. It doesn't need more bells and whistles — it needs a stable, reliable, continuously available build. (I already have PostgreSQL for those needs, so I don't need something like S3 Tables or S3 Vectors. A stable S3 core is all I need.)

What we're doing: making sure you can get a working, complete MinIO binary, with the admin console included and CVEs fixed.

RPM, DEB, Docker images — built automatically via CI/CD, drop-in compatible with your existing minio. We keep the existing minio naming and behavior where legally and technically feasible.

We run these builds ourselves and have been dogfooding them in production for three months. If something breaks, we detect it early and patch it quickly.

I build this primarily for Pigsty and our own usage, but I hope it helps others too.

If you run into issues, feel free to report them at pgsty/minio. I'll do my best to fix them — but please don't treat this as a commercial SLA.

Given that AI coding tools have made bug fixing dramatically cheaper, and that we're explicitly not adding any new features, I believe the maintenance workload is manageable. (And how often does a new bug really show up?)

Trademark Notice: MinIO® is a registered trademark of MinIO, Inc. This project (pgsty/minio) is an independently maintained community fork under the AGPL license. It has no affiliation with, endorsement by, or connection to MinIO, Inc. Use of "MinIO" in this post refers solely to the open-source software project itself and implies no commercial association.

AGPLv3 gives us clear rights to fork and distribute, but trademark law is a separate domain. We've marked this clearly everywhere as an independent community-maintained build.

If MinIO Inc. raises trademark concerns, we'll cooperate and rename (probably to something like silo or stow). Until then, we think descriptive use of the original name in an AGPL fork is reasonable — and renaming all the minio references doesn't serve users.

You might ask: can one person really maintain this?

It's 2026. Things are different now.

AI coding tools are changing the economics of open-source maintenance.

With tools like Claude Code and Codex, the cost of locating and fixing bugs in a complex Go project has dropped by more than an order of magnitude. What used to require a dedicated team maintaining a complex infra project can now be handled by one experienced engineer with an AI copilot.

Maintaining a MinIO build without adding new features is a manageable task. The key requirement is testing and validation, and we already have that scenario in production, which lets us verify compatibility, reliability, and security in practice.

Consider: Elon cut X/Twitter's engineering team down to ~30 people and the system still runs. Maintaining a MinIO fork without new features is considerably less daunting.

MinIO Inc. can archive a GitHub repo, but they can't archive the demand behind 60k stars, or the dependency graph behind a billion Docker pulls. That demand doesn't disappear — it just finds its way out.

HashiCorp's Terraform got forked into OpenTofu, and it's doing fine. MinIO's situation is actually more favorable:

AGPL is more permissive toward forks than the BSL, with no legal gray area for community forks.

A company can abandon a project, but open-source licenses are specifically designed so the code can't die.

Fork is the most powerful spell in open source. When a company decides to shut the door, the community only needs two words:

Disclaimer: This article was polished and translated from zh-CN by Claude.

...

Read the original on blog.vonng.com »

10 208 shares, 2 trendiness

iPhone Top Games & Apps

Can you figure it out?

...

Read the original on apps.apple.com »
