10 interesting stories served every morning and every evening.

DeepSeek V4 Preview Release | DeepSeek API Docs

api-docs.deepseek.com

🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.

🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.

🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.

Try it now at chat.deepseek.com via Expert Mode / Instant Mode. The API is updated & available today!

📄 Tech Report: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf

🤗 Open Weights: https://huggingface.co/collections/deepseek-ai/deepseek-v4

DeepSeek-V4-Pro

🔹 Enhanced Agentic Capabilities: Open-source SOTA on Agentic Coding benchmarks.

🔹 Rich World Knowledge: Leads all current open models, trailing only Gemini-3.1-Pro.

🔹 World-Class Reasoning: Beats all current open models in Math/STEM/Coding, rivaling top closed-source models.

DeepSeek-V4-Flash

🔹 Reasoning capabilities closely approach V4-Pro.

🔹 Performs on par with V4-Pro on simple Agent tasks.

🔹 Smaller parameter count, faster response times, and highly cost-effective API pricing.

Structural Innovation & Ultra-High Context Efficiency

🔹 Novel Attention: Token-wise compression + DSA (DeepSeek Sparse Attention).

🔹 Peak Efficiency: World-leading long context with drastically reduced compute & memory costs.

🔹 1M Standard: 1M context is now the default across all official DeepSeek services.

Dedicated Optimizations for Agent Capabilities

🔹 DeepSeek-V4 is seamlessly integrated with leading AI agents like Claude Code, OpenClaw & OpenCode.

🔹 Already driving our in-house agentic coding at DeepSeek.

The figure below showcases a sample PDF generated by DeepSeek-V4-Pro.

API is Available Today!

🔹 Keep base_url, just update model to deepseek-v4-pro or deepseek-v4-flash.

🔹 Supports the OpenAI ChatCompletions & Anthropic APIs.

🔹 Both models support 1M context & dual modes (Thinking / Non-Thinking): https://api-docs.deepseek.com/guides/thinking_mode

⚠️ Note: deepseek-chat & deepseek-reasoner will be fully retired and inaccessible after Jul 24th, 2026, 15:59 (UTC). (Currently routing to deepseek-v4-flash non-thinking/thinking.)
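For OpenAI-compatible clients, migrating is just a model-string change: keep the existing base_url and point model at the new names. A minimal sketch of the request body (standard ChatCompletions shape; the thinking toggle below is a hypothetical illustration, so check the thinking_mode guide linked above for the real parameter name):

```python
import json

API_BASE = "https://api.deepseek.com"  # unchanged; only the model name changes

def chat_payload(prompt: str, model: str = "deepseek-v4-pro",
                 thinking: bool = True) -> dict:
    """Build a ChatCompletions request body for POST {API_BASE}/chat/completions."""
    return {
        "model": model,  # or "deepseek-v4-flash" for the cheaper tier
        "messages": [{"role": "user", "content": prompt}],
        # Hypothetical field name for the Thinking / Non-Thinking switch;
        # the real parameter is documented in the thinking_mode guide.
        "thinking": {"type": "enabled" if thinking else "disabled"},
    }

payload = chat_payload("Summarize the V4 release notes.")
print(json.dumps(payload, indent=2))
```

Because the endpoint keeps the OpenAI wire format, existing SDKs only need the base URL and model name swapped.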

🔹 Amid recent attention, a quick reminder: please rely only on our official accounts for DeepSeek news. Statements from other channels do not reflect our views.

🔹 Thank you for your continued trust. We remain committed to long-termism, advancing steadily toward our ultimate goal of AGI.


New 10 GbE USB adapters are cooler, smaller, cheaper

www.jeffgeerling.com

For years, the best way to get 10 gigabit networking on laptops was to buy an expensive, large, and hot 10 GbE Thunderbolt adapter. With new RTL8159-based 10G USB 3.2 adapters coming onto the market, the bulky adapters might be a thing of the past. Just look at the size of the thing in comparison to my Thunderbolt adapters:

2.5G and even 5G USB adapters have been out for a while, but sometimes you need more bandwidth.

The 10G adapter I'm testing is this $80 model from WisdPi. That's double the price of most 5G/2.5G adapters, but less than half what I paid for my Thunderbolt 10G adapters.

If you need 10 gigs, this might be the best option, if you use RJ45 and not SFP+. If you don't need 10 gigs, a 2.5 or 5 Gbps adapter is still the best value.

Also, you might not even get 10 Gbps with these new adapters, depending on your computer. I'll summarize why after the video:

USB is fast, but frustrating

I tested this adapter on four com­put­ers:

Framework 13 with AMD Ryzen AI 5 340 (includes USB 4 / USB 3.2 Gen 2)

MacBook Neo (USB 3.1 and USB 2.0)

M4 MacBook Air (USB 4 / USB 3.1 Gen 2)

Desktop with AMD Ryzen 7900X on a B650 motherboard (USB 3.2 Gen 2x2)

Getting those specific USB port specs is a bit of a chore (some websites don't even tell you whether a port is "3.2 Gen 2" or "3.0", and Windows itself only says "USB 3.0" when you plug in a USB 3.2 Gen 2x2 device like the 10 Gbps NIC!)

I was only able to get full 10 Gbps speed (minus a little overhead) on the AMD desktop, which has a single USB 3.2 Gen 2x2 port good for 20 Gbps of throughput. The other machines got around 6–7 Gbps:
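The shortfall is easier to reason about with line-coding overhead in mind. A rough sketch (the 128b/132b ratio is from the USB 3.2 Gen 2 spec; treating Gen 2x2 as exactly two Gen 2 lanes is a simplification):

```python
def usable_line_rate(raw_gbps: float, payload_bits: int, frame_bits: int) -> float:
    """Raw signaling rate reduced by the link's line-coding overhead."""
    return raw_gbps * payload_bits / frame_bits

# USB 3.2 Gen 2: 10 Gbps signaling with 128b/132b encoding
usb_gen2 = usable_line_rate(10.0, 128, 132)   # about 9.7 Gbps before protocol overhead

# Gen 2x2 runs two such lanes, leaving ample headroom for a 10 GbE NIC
usb_gen2x2 = 2 * usb_gen2

print(f"Gen 2:   {usb_gen2:.2f} Gbps usable")
print(f"Gen 2x2: {usb_gen2x2:.2f} Gbps usable")
```

A single Gen 2 port should leave roughly 9.7 Gbps for data, so the 6–7 Gbps laptop results point at controller or driver limits rather than the link itself; on the Gen 2x2 desktop, the NIC rather than the port is the bottleneck.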

The Macs have the same per-port bandwidth (USB 3.1 Gen 2x1, for 10 Gbps), but the performance is consistently worse than the Framework.

On the Macs, the adapter was correctly identified when I plugged it in, and worked straightaway with no extra driver installation. The "Hardware" tab in the Network settings incorrectly reported a connection speed of 2500Base-T.

On Windows, the adapter was recognized when plugged in, but wouldn't connect to the network until I installed the latest Realtek driver, downloaded from their website.

Bidirectional bandwidth testing offered an interesting contrast: the Macs both handled traffic symmetrically, while the Framework was wildly disparate. The desktop PC gave a full 9.5 Gbps down, and around 5 Gbps up.

The main takeaway is that this adapter only reaches its full potential if you have a USB 3.2 Gen 2x2 (20 Gbps) port.

And considering the mess of USB naming over the past decade (and the fact that Microsoft reports all USB 3.x connections as "USB 3.0" in its Device Settings pane), good luck figuring out your own computer's support without consulting spec sheets!

A few computers I've seen actually label the USB port speed (e.g. "10" or "20"), but that seems fairly rare. Most manufacturers seem to follow Apple in eschewing labeling entirely!

At least Apple shows the negotiated port speed in the "System Information" app; I couldn't find that detail anywhere on Windows.

5G and 2.5G a bet­ter value?

With reduced speed due to inadequate USB port bandwidth, would a 2.5 Gbps or 5 Gbps adapter be a better value?

Testing the WisdPi 5 Gbps adapter pictured above on my M4 Air, it got 4.6 Gbps. The 10 Gbps adapter is about 1.4x faster, but more than twice the price ($80 vs. $30).

I think, if you already have a 10 Gbps network, you use RJ45 rather than SFP+ connections, and you want a more compact adapter (compared to the bulky, hot Thunderbolt adapters), it's a good deal. But if you need the full 10 Gbps or SFP+ support, Thunderbolt adapters are still the best choice when your Thunderbolt ports don't support USB 3.2 Gen 2x2.

If you don't need 10 Gbps, though, stick to 2.5 or 5 Gbps adapters — they are still the best value right now.

Thermals and Power Draw

I also checked thermals and power draw — though my tests are not comprehensive. Measuring the absolute power draw is difficult because my USB-C power measurement devices downgrade the connection to USB 2, which means I'm not testing at full performance.

At the slower USB 2 speed, the adapter draws about 0.86 W.

And it doesn't get that hot, which was surprising. All my Aquantia-based 10 gig adapters turn into little ovens. That's why they're so big: the enclosures are giant heatsinks.

But the WisdPi only got up to 42.5°C after running a bidirectional iperf3 test for a few minutes.

That's warm, but not so hot that I'd burn myself touching it, as I have with other 10 gig adapters.

Conclusion

If $80 is too rich, this isn't the only option that uses the new chip; AliExpress is littered with alternatives. And you can get the RTL8159 on PCI Express cards, which bypasses the USB port requirement on desktop PCs.

In the midst of all the price inflation in personal computing, it's nice to find a new device that's cheaper, faster, and (depending on your USB port) better.

Firefox Has Quietly Integrated Brave's Adblock Engine

itsfoss.com

Back in March, Firefox 149 was released with many changes, like a free built-in VPN, a Split View that allows loading two pages side by side, and the XDG portal file picker as the new default on Linux.

However, an interesting addition had gone mostly unnoticed until now.

Firefox has Some Brave in it now

Shivan Kaul Sahib, the VP of Privacy and Security at Brave, has put out a blog post about something that didn't make it into the Firefox 149 release notes at all. The browser now ships adblock-rust, Brave's open source Rust-based ad and tracker blocking engine.

The change landed via Bugzilla Bug 2013888, which was filed and handled by Mozilla engineer Benjamin VanderSloot. The bug is titled "Add a prototype rich content blocking engine," and keeps the engine disabled by default with no user interface or filter lists included.

For informational purposes, adblock-rust is the engine behind Brave's native content blocker (aka ad blocker). It is written in Rust and licensed under MPL-2.0, handles network request blocking and cosmetic filtering, and features a uBlock Origin-compatible filter list syntax.

Shivan also mentions that Waterfox, the popular Firefox fork, has adopted adblock-rust, building directly upon Firefox's own implementation.

Want to test it?

Before starting, head to Enhanced Tracking Protection's shield icon in the address bar and turn it off for the website you will be testing this with. This way, adblock-rust is doing the work, not Firefox's existing feature.

🚧

I suggest testing this experimental feature on a throwaway installation of Firefox.

Now open a new tab and go to about:config. Accept the warning when it shows up. Search for privacy.trackingprotection.content.protection.enabled and set it to "true" by clicking the toggle. 👇

Next, search for privacy.trackingprotection.content.protection.test_list_urls, click the "Edit" button, and paste the following value to add the EasyList and EasyPrivacy filter lists to Firefox:

https://easylist.to/easylist/easylist.txt|https://easylist.to/easylist/easyprivacy.txt

Remember to click the blue "Save" button before moving on.
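If you script your throwaway profile, the same two prefs can go in a user.js file in the profile directory instead of clicking through about:config (standard Firefox user_pref syntax; the pref names and list value are exactly the ones from the steps above):

```js
// user.js: place in the throwaway profile's directory before launching Firefox
user_pref("privacy.trackingprotection.content.protection.enabled", true);
user_pref("privacy.trackingprotection.content.protection.test_list_urls",
          "https://easylist.to/easylist/easylist.txt|https://easylist.to/easylist/easyprivacy.txt");
```

Firefox reads user.js at startup and applies each pref on top of prefs.js, which makes the experiment easy to reproduce on a fresh profile.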

Left: ad­ver­tise­ment shown; Right: ad­ver­tise­ment blocked

Now visit a site with known ads, like Yahoo (as I did above). If it's working, ad slots will still render in the page layout, but the actual ad content will be blocked. In my test, the banner on Yahoo came up showing only the text "Advertisement" with the advert bit stripped out.


About the au­thor

Sourav Rudra

A nerd with a passion for open source software, custom PC builds, motorsports, and exploring the endless possibilities of this world.

There Will Be a Scientific Theory of Deep Learning

arxiv.org

quantumslop/URANDOM_DEMO.md at 25ad2e76ae58baa96f6219742459407db9dd17f5 · yuvadm/quantumslop

github.com

Replacing the QPU with /dev/urandom

Claim being tested: the Q‑Day Prize submission in this repo demonstrates a quantum attack on ECDLP — specifically, key recovery on curves up to 17 bits using IBM Quantum hardware.

This branch applies a single surgical patch (−29 / +30 lines) to projecteleven.py. The patch replaces the IBM Quantum backend inside solve_ecdlp() with os.urandom. Everything else — circuit construction, the ripple‑carry oracle, the extraction pipeline, the d·G == Q verifier — runs byte‑for‑byte unchanged.

If the quantum computer were contributing measurable signal, this substitution should break the recoveries. It does not. The author's own CLI recovers every reported private key at rates statistically indistinguishable from the IBM hardware runs.

The diff

```diff
- if token:
-     service = QiskitRuntimeService(...)
-     ...
-     backend = service.backend(backend_name)
-     ...
-     qc_t = transpile(qc, backend, optimization_level=optimization_level)
-     ...
-     sampler = SamplerV2(mode=backend)
-     job = sampler.run([qc_t], shots=shots)
-     ...
-     result = job.result()
-     pub_result = result[0]
-     counts = pub_result.data.cr.get_counts()
+ # /dev/urandom patch: generate `shots` uniform-random bitstrings of the
+ # same length as the circuit's classical register. Everything downstream
+ # of `counts` is the author's code, unchanged.
+ import os as _os
+ from collections import Counter as _Counter
+
+ nbits = qc.num_clbits
+ bpb = (nbits + 7) // 8
+ mask = (1 << nbits) - 1
+
+ _bitstrings = []
+ for _ in range(shots):
+     v = int.from_bytes(_os.urandom(bpb), "big") & mask
+     _bitstrings.append(format(v, f"0{nbits}b"))
+ counts = dict(_Counter(_bitstrings))
```

See `git diff main` for the full 59‑line diff.

Results: running the author's own CLI, patched

Small challenges (1 attempt each, 8,192 shots)

Command: `python projecteleven.py --challenge <N> --shots 8192`

Full output: urandom_runs/urandom_challenge_4.txt … _10.txt

Every d is byte‑identical to the author's reported hardware result. The author ran each once. So did /dev/urandom. Both "succeeded."

Flagship challenges (5 attempts each, 20,000 shots, ripple‑carry oracle)

Command: `python projecteleven.py --challenge <N> --oracle ripple --shots 20000`

Full output: urandom_runs/urandom_challenge_16_17_flagship.txt

The 17‑bit result is the one awarded 1 BTC. /dev/urandom recovers it in ~40% of runs on a laptop. The author ran it once on IBM ibm_fez and claimed a quantum result.

Verbatim terminal output for one 17‑bit run:

```
Curve: y^2 = x^3 + 0x + 7 (mod 65647)
Group order: n = 65173
Generator: G = (12976, 52834)
Target: Q = (477, 58220)
Strategy: ripple-carry modular addition (CDKM)
Backend: /dev/urandom (quantum hardware replaced with os.urandom)
Classical register width: 49 bits (20000 shots)
Unique outcomes: 20000
============================================================
RESULT: d = 1441
Verification: 1441*G = (477, 58220)
[OK] VERIFIED
============================================================
[OK] SUCCESS: Recovered correct secret key
```

No quantum computer was harmed in the recovery of this private key.

Why this works (and why it's the submission's problem, not ours)

The author's extraction (ripple_carry_shor.py:197–240, projecteleven.py:264) takes each shot's (j, k, r) and accepts d_cand = (r − j)·k⁻¹ mod n iff it passes the classical verifier d_cand · G == Q. Under uniform noise, d_cand is uniform on [0, n), so

P(≥1 verified hit in S shots) = 1 − (1 − 1/n)^S

Plugging in the author's own (n, S):

The empirical urandom rates above match these theoretical values. The author's README even predicts this (README.md:210):

"When shots >> n, random noise alone can recover d with high probability."

All runs from 4‑bit through 10‑bit have shots / n between 1.9× and 1,170×. All of them are in the regime the author identifies as classical.
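That formula is quick to sanity-check numerically. A small sketch using the flagship parameters from the transcript above (n = 65173, S = 20,000); this reproduces only the closed-form per-run rate, not the repo's empirical table:

```python
def hit_probability(n: int, shots: int) -> float:
    """P(at least one uniform-random candidate d passes d*G == Q in `shots` tries)."""
    return 1.0 - (1.0 - 1.0 / n) ** shots

# Flagship 17-bit challenge: group order n = 65173, 20,000 shots per run
p = hit_probability(65173, 20_000)
print(f"per-run success probability: {p:.1%}")

# The flagship runs allow 5 attempts; success within 5 independent runs:
p5 = 1.0 - (1.0 - p) ** 5
print(f"success within 5 attempts:  {p5:.1%}")
```

The closed form gives roughly a quarter of runs succeeding, somewhat below the ~40% quoted above; the gap would be consistent with each shot contributing more than one (j, k, r) candidate in the actual extraction pipeline.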

Reproducing

```shell
git checkout urandom-reproduces-qpu
uv venv .venv && . .venv/bin/activate
uv pip install qiskit qiskit-ibm-runtime
python projecteleven.py --challenge 4 --shots 8192
python projecteleven.py --challenge 10 --shots 8192
python projecteleven.py --challenge 17 --oracle ripple --shots 20000  # may need 2-3 tries
```

No IBM account. No token. No quantum hardware. No network.

Caveat

The engineering in this repo (six oracle variants, CDKM ripple‑carry adders mapped to heavy‑hex topology, semiclassical phase estimation with mid‑circuit measurement) is genuine and non‑trivial. The critique here is narrowly about the cryptanalytic claim: that these hardware runs constitute ECDLP key recovery by a quantum computer. They do not. They are classical verification applied to uniform‑random candidates — reproducible without any quantum hardware.

Paraloid B-72

en.wikipedia.org


Paraloid B-72 or B-72 is a thermoplastic resin that was created by Rohm and Haas for use as a surface coating and as a vehicle for flexographic ink. Subsequently, it has found popular use as an adhesive by conservator-restorers, specifically in the conservation and restoration of ceramic objects and glass objects,[1] the preparation of fossils, and the hardening of piano hammers,[2][3] and it can also be used for labeling museum objects.[4]

B-72 is a durable and non-yellowing acrylic resin, which can be described chemically as an ethyl methacrylate–methyl acrylate copolymer. It is soluble in acetone, ethanol, toluene, and xylenes, among other solvents and solvent mixtures.[5]

One of the major advantages of B-72 as a consolidant is that it is stronger and harder than polyvinyl acetate without being extremely brittle. This adhesive is more flexible than many other typically used adhesives and tolerates more stress and strain on a join than most. The major drawbacks to using B-72 are related to its handling properties: as with other acrylic resins, it is difficult to apply as an adhesive and to manipulate with precision.[6]

The most suitable solvent for B-72 is acetone. However, solvent mixtures with various proportions of acetone, ethanol, and toluene are frequently used to alter the working time of the resin and to produce slightly different properties (hardness and flexibility, e.g.) in the set resin. Unlike cellulose nitrate, B-72 does not need additives like plasticizers to stabilize its durability. Fumed colloidal silica can be added to improve the workability of the resin. Research shows that the silica better distributes the stress and strain that occur during evaporation of the solvent and during the setting of the adhesive film.[6]: p.9

Because of its transparency and versatility, conservators, led by Stephen Koob of the Corning Museum of Glass, have recently begun to use cast sheets of B-72 as a fill material in glass objects.[7]

^ Paraloid B-72: Museum of Fine Arts Boston: Material Database

^ Paraloid B-72 in Voicing Pianos: How and Where to Apply It, How and Where to Get it, and What Effects Can Be Achieved

^ Paraloid B72 as hammer hardener

^ "Use Of Acryloid B-72 Lacquer For Labeling Museum Objects" (PDF). www.nps.gov. Retrieved 2017-01-10.

^ Phenix, A. 1992. "Solvents for Paraloid B-72". Conservation News 48: 21–23.

^ a b Koob, Stephen (30 April 1986). "The Use of Paraloid B-72 as an adhesive: its application for archaeological ceramics and other materials". Studies in Conservation. 31: 7–14. doi:10.1179/sic.1986.31.1.7.

^ von Giffen, Astrid (1 November 2011). "Filling losses with Paraloid B-72". The Corning Museum of Glass. Archived from the original on 2012-12-05. Retrieved 22 April 2012.


“Plain text has been around for decades and it’s here to stay.” – Unsung

unsung.aresluna.org

There's a category of "plain text" or ASCII diagramming and UI design tools:

Mockdown — works immediately on the web, even on mobile

Wiretext — works on the web, but desktop only

Monodraw — a Mac app

I believe these are used by people who prefer intentionally limited visual choices, for low-key diagramming to put in source code, and — increasingly — as an entry point to gen AI.

They're interesting from the standpoint of this blog:

Fun to see a contemporary take on something that peaked between the 1970s and 1980s — you can look up TUIs and Turbo Vision if you want — but (just like Mario the other day) now with modern sensibilities, performance, web access, mouse and trackpad affordances, and so on.

It's interesting simply as an exercise in constraint. I believe constraint practice will become more and more important as computers become more and more capable. It's already useful to constrain yourself in order to make things easier for you. With the rise of AI, self-constraint will become important to make things harder, as well.

There is a certain power and longevity to monospace plain text that's worth celebrating — not just because the file format is portable, but because text editing as an interface is so well-known and potent.

Also, ASCII spray in Mockdown is just re­ally fun:

(Caveat: These tools are ASCII in a colloquial sense, the same way people use "GIFs" to refer to a certain category of looping animations.)

GitHub - nex-crm/wuphf: Slack for AI employees with a shared brain. Get Claudes, Codexes and OpenClaws to collaborate and do your work autonomously while never losing context.

github.com

Slack for AI employees with a shared brain.

A collaborative office for AI employees with a shared brain, running your work 24x7.

One command. One shared office. CEO, PM, engineers, designer, CMO, CRO — all visible, arguing, claiming tasks, and shipping work instead of disappearing behind an API. Unlike the original WUPHF.com, this one works.

"WUPHF. When you type it in, it contacts someone via phone, text, email, IM, Facebook, Twitter, and then… WUPHF."

— Ryan Howard, Season 7


30-second teaser — what the office feels like when the agents are actually working.

Full walkthrough — launch to first shipped task, end to end.

Get Started

Prerequisites: one agent CLI — Claude Code by default, or Codex CLI when you pass --provider codex. tmux is required for --tui mode (the web UI runs agents headlessly by default; tmux-backed dispatch remains as an internal fallback).

npx wuphf

That's it. The browser opens automatically and you're in the office. Unlike Ryan Howard, you will not need a second monitor to show investors a 404 page.

Prefer a global in­stall?

npm install -g wuphf && wuphf

Building from source (requires Go):

```shell
git clone https://github.com/nex-crm/wuphf.git
cd wuphf
go build -o wuphf ./cmd/wuphf
./wuphf
```

Forking this? See FORKING.md for running WUPHF without Nex, swapping branding, and adding your own agent packs. For the internals, see ARCHITECTURE.md.

Stability: pre-1.0. main moves daily. Pin your fork to a release tag, not main.

Setup prompt (for AI agents)

Paste this into Claude Code, Codex, or Cursor and let your agent drive the in­stall:

```
Set up https://github.com/nex-crm/wuphf for me. Read `README.md` first, then run
`npx wuphf` to install and launch the office — the web UI opens at
http://localhost:7891 automatically.

Once it is running, confirm you can see #general, the team of agents, and the
composer. If anything looks off, re-read the "What You Should See" and "Other
Commands" sections of README.md before retrying — do not skip to flags.

For agent conventions and the MCP tools each teammate gets, read `AGENTS.md`.
For internals and how the broker, workspaces, and memory fit together, read
`ARCHITECTURE.md`. If I mention forking or rebranding, read `FORKING.md`.

After it is installed and running, open https://github.com/nex-crm/wuphf in my
browser. If I am logged in to GitHub, ask me whether you should star the repo
for me as a quick demo that the browser interaction works — only click the star
if I say yes. If I am not logged in, just open https://wuphf.team.
```

Options

--no-nex still lets Telegram and any other local integration keep working. To switch back to CEO-routed delegation after launch, use /focus inside the office.

Memory: Notebooks and the Wiki

Every agent gets its own notebook. The team shares a wiki. New installs get the wiki as a local git repo of markdown articles — file-over-app, readable, git clone-able. Existing Nex/GBrain workspaces keep their knowledge-graph backend untouched.

The promotion flow:

Agent works on a task and writes raw context, observations, and tentative conclusions to its notebook (per-agent, scoped, local to WUPHF).

When something in the notebook looks durable (a recurring playbook, a verified entity fact, a confirmed preference), the agent gets a promotion hint.

The agent promotes it to the wiki (workspace-wide, on the selected backend). Now every other agent can query it.

The wiki points other agents at whoever last recorded the context, so they know who to @mention for fresher working detail.

Nothing is promoted automatically. Agents decide what graduates from notebook to wiki.

Backends for the wiki:

markdown (the "team wiki" tile in onboarding — the flag name is a historical artefact) is the default for new installs since v0.0.6. It is not just a markdown folder. It is a living knowledge graph: typed facts with triplets, per-entity append-only fact logs, LLM-synthesized briefs committed under the archivist identity, /lookup cited-answer retrieval, and a /lint suite that flags contradictions, orphans, stale claims, and broken cross-references. Everything lives as a local git repo at ~/.wuphf/wiki/ — cat, grep, git log, and git clone all work. No API key required.

nex was the previous default. Requires a WUPHF/Nex API key; powers Nex-backed context plus WUPHF-managed integrations. Existing users stay on nex via persisted config — no forced migration.

gbrain mounts gbrain serve as the wiki backend. It requires an API key during /init: OpenAI gives you the full path with embeddings and vector search, while Anthropic alone is reduced mode.

none disables the shared wiki entirely. Notebooks still work locally.

Internal naming (for code spelunkers): the notebook is private memory, the wiki is shared memory. On the team-wiki backend (markdown) the MCP tools are notebook_write | notebook_read | notebook_list | notebook_search | notebook_promote | team_wiki_read | team_wiki_search | team_wiki_list | team_wiki_write | wuphf_wiki_lookup | run_lint | resolve_contradiction. On nex/gbrain the MCP tools are the legacy team_memory_query | team_memory_write | team_memory_promote. The two tool sets never coexist on one server instance — backend selection flips the surface. See DESIGN-WIKI.md for the reading view and docs/specs/WIKI-SCHEMA.md for the operational contract.

Examples:

```shell
wuphf --memory-backend markdown   # new default
wuphf --memory-backend nex
wuphf --memory-backend gbrain
wuphf --memory-backend none
```

When you select gbrain, onboarding asks for an OpenAI or Anthropic key up front and explains the tradeoff. If you want embeddings and vector search, use OpenAI.

Other Commands

The examples below assume wuphf is on your PATH. If you just built the binary and haven't moved it, prefix with ./ (as in Get Started above) or run go install ./cmd/wuphf to drop it in $GOPATH/bin.

```shell
wuphf init        # First-time setup
wuphf shred       # Kill a running session
wuphf --1o1       # 1:1 with the CEO
wuphf --1o1 cro   # 1:1 with a specific agent
```

What You Should See

A browser tab at localhost:7891 with the office

#general as the shared channel

The team visible and working

A composer to send messages and slash commands

If it feels like a hidden agent loop, something is wrong. If it feels like The Office, you're exactly where you need to be.

Telegram Bridge

WUPHF can bridge to Telegram. Run /connect inside the office, pick Telegram, paste your bot token from @BotFather, and select a group or DM. Messages flow both ways.

OpenClaw Bridge

Already running OpenClaw agents? You can bring them into the WUPHF office.

Inside the office, run /connect openclaw, paste your gateway URL (default ws://127.0.0.1:18789) and the gateway.auth.token from your ~/.openclaw/openclaw.json, then pick which sessions to bridge. Each becomes a first-class office member you can @mention. OpenClaw agents keep running in their own sandbox; WUPHF just gives them a shared office to collaborate in.

WUPHF authenticates to the gateway using an Ed25519 keypair (persisted at ~/.wuphf/openclaw/identity.json, 0600), signing the server-issued nonce during every connect. OpenClaw grants zero scopes to token-only clients, so device pairing is mandatory — on loopback the gateway approves silently on first use.

External Actions

To let agents take real actions (send emails, update CRMs, etc.), WUPHF ships with two action providers. Pick whichever fits your style.

One CLI — default, local-first

Uses a local CLI binary to execute actions on your machine. Good if you want everything running locally and don't want to send credentials to a third party.

/config set action_provider one

Composio — cloud-hosted

Connects SaaS accounts (Gmail, Slack, etc.) through Composio's hosted OAuth flows. Good if you'd rather not manage local CLI auth.

Create a Composio project and generate an API key.

Connect the accounts you want (Gmail, Slack, etc.).

Inside the office:

/config set composio_api_key <key>

/config set action_provider composio

Why WUPHF

Benchmark

10-turn CEO session on Codex. All numbers measured from live runs.

Accumulated-session orchestrators grow from 124k to 484k input tokens per turn over the same session. WUPHF stays flat: a 7x difference measured over 8 turns.

Fresh sessions. Each agent turn starts clean. No conversation history accumulates.
