10 interesting stories served every morning and every evening.
🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.
🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world’s top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.
Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!
📄 Tech Report: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf
🤗 Open Weights: https://huggingface.co/collections/deepseek-ai/deepseek-v4
DeepSeek-V4-Pro
🔹 Enhanced Agentic Capabilities: Open-source SOTA in Agentic Coding benchmarks.
🔹 Rich World Knowledge: Leads all current open models, trailing only Gemini-3.1-Pro.
🔹 World-Class Reasoning: Beats all current open models in Math/STEM/Coding, rivaling top closed-source models.
DeepSeek-V4-Flash
🔹 Reasoning capabilities closely approach V4-Pro.
🔹 Performs on par with V4-Pro on simple Agent tasks.
🔹 Smaller parameter size, faster response times, and highly cost-effective API pricing.
Structural Innovation & Ultra-High Context Efficiency
🔹 Novel Attention: Token-wise compression + DSA (DeepSeek Sparse Attention).
🔹 Peak Efficiency: World-leading long context with drastically reduced compute & memory costs.
🔹 1M Standard: 1M context is now the default across all official DeepSeek services.
Dedicated Optimizations for Agent Capabilities
🔹 DeepSeek-V4 is seamlessly integrated with leading AI agents like Claude Code, OpenClaw & OpenCode.
🔹 Already driving our in-house agentic coding at DeepSeek.
The figure below showcases a sample PDF generated by DeepSeek-V4-Pro.
API is Available Today!
🔹 Keep base_url, just update model to deepseek-v4-pro or deepseek-v4-flash.
🔹 Supports OpenAI ChatCompletions & Anthropic APIs.
🔹 Both models support 1M context & dual modes (Thinking / Non-Thinking): https://api-docs.deepseek.com/guides/thinking_mode
⚠️ Note: deepseek-chat & deepseek-reasoner will be fully retired and inaccessible after Jul 24, 2026, 15:59 (UTC). (These model names currently route to deepseek-v4-flash in non-thinking and thinking mode, respectively.)
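Since the announcement says only the model name needs to change on an OpenAI-compatible endpoint, the migration can be sketched as a payload change. This is a minimal sketch, not official client code; the `thinking` field name here is a placeholder (check the thinking-mode guide linked above for the real parameter):

```python
# Hedged sketch: only the `model` string changes when migrating from
# deepseek-chat / deepseek-reasoner to the V4 models named above.
import json

def build_chat_request(model: str, prompt: str, thinking: bool) -> dict:
    # OpenAI ChatCompletions-style payload; `thinking` is a placeholder
    # name for the dual-mode switch, not a confirmed API field.
    return {
        "model": model,  # "deepseek-v4-pro" or "deepseek-v4-flash"
        "messages": [{"role": "user", "content": prompt}],
        "thinking": thinking,
    }

print(json.dumps(build_chat_request("deepseek-v4-flash", "Hello", False)))
```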
🔹 Amid recent attention, a quick reminder: please rely only on our official accounts for DeepSeek news. Statements from other channels do not reflect our views.
🔹 Thank you for your continued trust. We remain committed to longtermism, advancing steadily toward our ultimate goal of AGI.
For years, the best way to get 10 gigabit networking on laptops was to buy an expensive, large, and hot 10 GbE Thunderbolt adapter. With new RTL8159-based 10G USB 3.2 adapters coming onto the market, the bulky adapters might be a thing of the past. Just look at the size of the thing in comparison to my Thunderbolt adapters:
2.5G and even 5G USB adapters have been out for a while, but sometimes you need more bandwidth.
The 10G adapter I’m testing is this $80 model from WisdPi. That’s double the price of most 5G/2.5G adapters, but less than half what I paid for my Thunderbolt 10G adapters.
If you need 10 gigs, this might be the best option, provided you use RJ45 rather than SFP+. If you don’t need 10 gigs, a 2.5 or 5 Gbps adapter is still the best value.
Also, you might not even get 10 Gbps with these new adapters, depending on your computer. I’ll summarize why after the video:
USB is fast, but frustrating
I tested this adapter on four computers:
Framework 13 with AMD Ryzen AI 5 340 (includes USB 4 / USB 3.2 Gen 2)
MacBook Neo (USB 3.1 and USB 2.0)
M4 MacBook Air (USB 4 / USB 3.1 Gen 2)
Desktop with AMD Ryzen 7900x with B650 motherboard (USB 3.2 Gen 2x2)
Getting those specific USB port specs is a bit of a chore (some websites don’t even tell you whether a port is ‘3.2 Gen 2’ or ‘3.0’, and Windows itself only says “USB 3.0” when you plug in a USB 3.2 Gen 2x2 device like this 10 Gbps NIC!).
I was only able to get the full 10 Gbps speed (minus a little overhead) on the AMD desktop, which has a single USB 3.2 Gen 2x2 port good for 20 Gbps of throughput. The other machines got around 6-7 Gbps:
The Macs have the same per-port bandwidth (USB 3.1 Gen 2x1, for 10 Gbps), but the performance is consistently worse than the Framework.
On the Macs, the adapter was correctly identified when I plugged it in, and worked straightaway, with no extra driver installation. The ‘Hardware’ tab in the Network settings incorrectly reported a connection speed of 2500Base-T.
On Windows, the adapter was recognized when plugged in, but wouldn’t connect to the network until I installed the latest Realtek driver, downloaded from their website.
Bidirectional bandwidth testing offered an interesting contrast; the Macs both handled traffic symmetrically, while the Framework was wildly disparate. The desktop PC gave a full 9.5 Gbps down, and around 5 Gbps up.
The main takeaway is this adapter only reaches its full potential if you have a USB 3.2 Gen 2x2 (20 Gbps) port.
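A rough calculation shows why even a full 10 Gbps USB port falls short of 10GbE. The 128b/132b line-coding figure comes from the USB 3.2 spec; the 15% protocol-overhead factor is my assumption, not a measurement:

```python
# Back-of-envelope throughput estimate for a USB 3.2 Gen 2x1 (10 Gbps) port.
line_rate = 10.0                 # Gbps signaling rate
encoded = line_rate * 128 / 132  # payload rate after 128b/132b line coding
usable = encoded * 0.85          # assumed 15% USB/NIC protocol overhead
print(f"{usable:.1f} Gbps usable")
```

That lands around 8 Gbps before any NIC or driver inefficiency, in the same ballpark as the 6-7 Gbps the non-2x2 machines measured.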
And considering the mess of USB naming over the past decade, and the fact Microsoft reports all USB 3.x connections as “USB 3.0” in the Device Settings pane, good luck figuring out your own computer’s support without digging through spec sheets!
A few computers I’ve seen actually label the USB port speed (e.g. ‘10’ or ‘20’), but that seems fairly rare. Most manufacturers seem to follow Apple in eschewing labeling entirely!
At least Apple has the negotiated port speed visible in the ‘System Information’ app—I couldn’t find that detail anywhere on Windows.
5G and 2.5G a better value?
With reduced speed due to inadequate USB port bandwidth, would a 2.5 Gbps or 5 Gbps adapter be a better value?
Testing the WisdPi 5 Gbps adapter pictured above on my M4 Air, it got 4.6 Gbps. The 10 Gbps adapter is 1.4x faster, but for more than 2x the price ($30 vs $80).
I think, if you already have a 10 Gbps network, you use RJ45 rather than SFP+, and you want something more compact than the bulky, hot Thunderbolt adapters, it’s a good deal. But if you need the full 10 Gbps or SFP+ support, Thunderbolt adapters are still the best choice when your machine has Thunderbolt ports but no USB 3.2 Gen 2x2 port.
If you don’t need 10 Gbps, though, stick to 2.5 or 5 Gbps adapters—they are still the best value right now.
Thermals and Power Draw
I also checked thermals and power draw—though my tests are not comprehensive. Measuring the absolute power draw is difficult because my USB-C power measurement devices downgrade the connection speed to USB 2, which means I’m not testing at full performance.
At the slower USB 2 speed, the adapter uses about 0.86 Watts of power.
And it doesn’t get that hot, which was surprising. All my Aquantia-based 10 gig adapters turn into little ovens. That’s why they’re so big: the enclosures are giant heatsinks.
But the WisdPi only got up to 42.5°C after running a bidirectional iperf3 test for a few minutes.
That’s warm, but not so hot that I’d burn myself touching it like I have with other 10 gig adapters.
Conclusion
If $80 is too rich, this isn’t the only option using the new chip; AliExpress is littered with alternatives. The chip also appears on PCI Express cards, which bypass the USB port requirement on desktop PCs.
In the midst of all the price inflation in personal computing, it’s nice to find a new device that’s cheaper, faster, and (depending on your USB port) better.
Back in March, Firefox 149 was released with many changes, like a free built-in VPN, a Split View that allows the loading of two pages side by side, and the XDG portal file picker as the new default on Linux.
However, an interesting addition had gone mostly unnoticed until now.
Firefox has Some Brave in it now
Shivan Kaul Sahib, the VP of Privacy and Security at Brave, has put out a blog post about something that didn’t make it into the Firefox 149 release notes at all. The browser now ships adblock-rust, Brave’s open source Rust-based ad and tracker blocking engine.
The change landed via Bugzilla Bug 2013888, which was filed and handled by Mozilla engineer Benjamin VanderSloot. The bug is titled “Add a prototype rich content blocking engine,” and keeps the engine disabled by default with no user interface or filter lists included.
For informational purposes, adblock-rust is the engine behind Brave’s native content blocker (aka ad blocker). It is written in Rust and licensed under MPL-2.0, handling network request blocking, cosmetic filtering, and features a uBlock Origin-compatible filter list syntax.
Shivan also mentions that Waterfox, the popular Firefox fork, has adopted adblock-rust, building directly upon Firefox’s own implementation.
Want to test it?
Before starting, head to Enhanced Tracking Protection’s shield icon in the address bar and turn it off for the website you will be testing this with. This way, adblock-rust is doing the work, not Firefox’s existing feature.
🚧
I suggest testing this experimental feature on a throwaway installation of Firefox.
Now open a new tab and go to about:config. Accept the warning when it shows up. Search for privacy.trackingprotection.content.protection.enabled and set it to “true” by clicking on the toggle. 👇
Next, search for privacy.trackingprotection.content.protection.test_list_urls, click on the “Edit” button, and paste the following value to add the EasyList and EasyPrivacy filter lists to Firefox:
https://easylist.to/easylist/easylist.txt|https://easylist.to/easylist/easyprivacy.txt
Remember to click on the blue-colored “Save” button before moving on.
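The pref value is simply a pipe-separated list of filter-list URLs. A quick sketch of the format, using the two lists above:

```python
# Pipe-separated filter-list URLs, as pasted into
# privacy.trackingprotection.content.protection.test_list_urls.
value = ("https://easylist.to/easylist/easylist.txt"
         "|https://easylist.to/easylist/easyprivacy.txt")
urls = value.split("|")
print(urls)
```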
Left: advertisement shown; Right: advertisement blocked
Now visit a site with known ads, like Yahoo (as I did above). If it’s working, ad slots will still render in the page layout, but the actual ad content will be blocked. In my test, the banner on Yahoo came up showing only the text “Advertisement” with the advert bit stripped out.
About the author
Sourav Rudra
A nerd with a passion for open source software, custom PC builds, motorsports, and exploring the endless possibilities of this world.
Replacing the QPU with /dev/urandom
Claim being tested: the Q‑Day Prize submission in this repo demonstrates a
quantum attack on ECDLP — specifically, key recovery on curves up to 17 bits
using IBM Quantum hardware.
This branch applies a single surgical patch (−29 / +30 lines) to
projecteleven.py. The patch replaces the IBM Quantum backend inside
solve_ecdlp() with os.urandom. Everything else — circuit construction,
the ripple‑carry oracle, the extraction pipeline, the d·G == Q verifier —
runs byte‑for‑byte unchanged.
If the quantum computer were contributing measurable signal, this
substitution should break the recoveries. It does not. The author’s own CLI
recovers every reported private key at statistically indistinguishable rates
from the IBM hardware runs.
The diff
- if token:
- service = QiskitRuntimeService(…)
- …
- backend = service.backend(backend_name)
- …
- qc_t = transpile(qc, backend, optimization_level=optimization_level)
- …
- sampler = SamplerV2(mode=backend)
- job = sampler.run([qc_t], shots=shots)
- …
- result = job.result()
- pub_result = result[0]
- counts = pub_result.data.cr.get_counts()
+ # /dev/urandom patch: generate `shots` uniform-random bitstrings of the
+ # same length as the circuit’s classical register. Everything downstream
+ # of `counts` is the author’s code, unchanged.
+ import os as _os
+ from collections import Counter as _Counter
+
+ nbits = qc.num_clbits
+ bpb = (nbits + 7) // 8
+ mask = (1 << nbits) - 1
+
+ _bitstrings = []
+ for _ in range(shots):
+ v = int.from_bytes(_os.urandom(bpb), "big") & mask
+ _bitstrings.append(format(v, f"0{nbits}b"))
+ counts = dict(_Counter(_bitstrings))
See git diff main for the full 59‑line diff.
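For readers who want the patch as a standalone snippet, this is the same synthetic-counts logic as the diff above, lifted into one self-contained function (the function name is mine; the diff is the authoritative version):

```python
import os
from collections import Counter

def urandom_counts(nbits: int, shots: int) -> dict:
    """Synthesize Qiskit-style measurement counts from /dev/urandom:
    `shots` uniform bitstrings as wide as the classical register."""
    bpb = (nbits + 7) // 8   # bytes needed per sample
    mask = (1 << nbits) - 1  # drop excess high bits
    bits = []
    for _ in range(shots):
        v = int.from_bytes(os.urandom(bpb), "big") & mask
        bits.append(format(v, f"0{nbits}b"))
    return dict(Counter(bits))
```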
Results: running the author’s own CLI, patched
Small challenges (1 attempt each, 8,192 shots)
Command: python projecteleven.py --challenge <N> --shots 8192
Full output: urandom_runs/urandom_challenge_4.txt … _10.txt
Every d is byte‑identical to the author’s reported hardware result. The
author ran each once. So did /dev/urandom. Both “succeeded.”
Flagship challenges (5 attempts each, 20,000 shots, ripple‑carry oracle)
Command: python projecteleven.py --challenge <N> --oracle ripple --shots 20000
Full output:
urandom_runs/urandom_challenge_16_17_flagship.txt
The 17‑bit result is the one awarded 1 BTC. /dev/urandom recovers it
~40% of runs on a laptop. The author ran it once on IBM ibm_fez and
claimed a quantum result.
Verbatim terminal output for one 17‑bit run:
Curve: y^2 = x^3 + 0x + 7 (mod 65647)
Group order: n = 65173
Generator: G = (12976, 52834)
Target: Q = (477, 58220)
Strategy: ripple-carry modular addition (CDKM)
Backend: /dev/urandom (quantum hardware replaced with os.urandom)
Classical register width: 49 bits (20000 shots)
Unique outcomes: 20000
============================================================
RESULT: d = 1441
Verification: 1441*G = (477, 58220)
[OK] VERIFIED
============================================================
[OK] SUCCESS: Recovered correct secret key
No quantum computer was harmed in the recovery of this private key.
Why this works (and why it’s the submission’s problem, not ours)
The author’s extraction (ripple_carry_shor.py:197-240, projecteleven.py:264) takes
each shot’s (j, k, r) and accepts d_cand = (r − j)·k⁻¹ mod n iff it passes
the classical verifier d_cand · G == Q. Under uniform noise, d_cand is
uniform on [0, n), so
P(≥1 verified hit in S shots) = 1 − (1 − 1/n)^S
Plugging in the author’s own values of (n, S), the empirical urandom rates above match these theoretical predictions.
author’s README even predicts this (README.md:210):
“When shots >> n, random noise alone can recover d with high probability.”
All runs from 4‑bit through 10‑bit have shots / n between 1.9× and 1,170×.
All of them are in the regime the author identifies as classical.
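Evaluating the formula above for the flagship challenge (n = 65173, S = 20000, both from this README) shows a single run already succeeds about a quarter of the time on noise alone:

```python
# P(at least one verified hit in S shots) = 1 - (1 - 1/n)^S
n, S = 65173, 20000
p = 1 - (1 - 1 / n) ** S
print(f"{p:.3f}")  # 0.264
```

The ~40% empirical rate is somewhat higher, which suggests each shot can yield more than one (j, k, r) candidate; either way, the order of magnitude is the point.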
Reproducing
git checkout urandom-reproduces-qpu
uv venv .venv && . .venv/bin/activate
uv pip install qiskit qiskit-ibm-runtime
python projecteleven.py --challenge 4 --shots 8192
python projecteleven.py --challenge 10 --shots 8192
python projecteleven.py --challenge 17 --oracle ripple --shots 20000  # may need 2-3 tries
No IBM account. No token. No quantum hardware. No network.
Caveat
The engineering in this repo (six oracle variants, CDKM ripple‑carry adders
mapped to heavy‑hex topology, semiclassical phase estimation with mid‑circuit
measurement) is genuine and non‑trivial. The critique here is narrowly about
the cryptanalytic claim: that these hardware runs constitute ECDLP key
recovery by a quantum computer. They do not. They are classical verification
applied to uniform‑random candidates — reproducible without any quantum hardware.
From Wikipedia, the free encyclopedia
Paraloid B-72 or B-72 is a thermoplastic resin that was created by Rohm and Haas for use as a surface coating and as a vehicle for flexographic ink. Subsequently, it has found popular use as an adhesive by conservator-restorers, specifically in the conservation and restoration of ceramic objects, glass objects,[1] the preparation of fossils, the hardening of piano hammers,[2][3] and can also be used for labeling museum objects.[4]
B-72 is a durable and non-yellowing acrylic resin, which can be described chemically as an ethyl methacrylate–methyl acrylate copolymer. It is soluble in acetone, ethanol, toluene, and xylenes, among other solvents and solvent mixtures.[5]
One of the major advantages of B-72 as a consolidant is that it is stronger and harder than polyvinyl acetate without being extremely brittle. This adhesive is more flexible than many of the other typically used adhesives and tolerates more stress and strain on a join than most others. The major drawbacks to using B-72 are related to its handling properties: as in the case of other acrylic resins it is difficult to apply as an adhesive and to manipulate with precision.[6]
The most suitable solvent for B-72 is acetone. However, solvent mixtures with various proportions of acetone, ethanol, and toluene are frequently used to alter the working time of the resin and to produce slightly different properties (hardness and flexibility, e.g.) in the set resin. Unlike cellulose nitrate, B-72 does not need additives like plasticizers to stabilize its durability. Fumed colloidal silica can be added to help with the workability of the resin. Research shows that the silica better distributes the stress and strain that occurs during evaporation of a solvent and during the setting of the adhesive film.[6]: p.9
Because of its transparency and versatility, conservators, led by Stephen Koob of the Corning Museum of Glass, have recently begun to use cast sheets of B-72 as a fill material in glass objects.[7]
^ Paraloid B-72: Museum of Fine Arts Boston: Material Database
^ Paraloid B-72 in Voicing Pianos: How and Where to Apply It, How and Where to Get it, and What Effects Can Be Achieved
^ Paraloid B72 as hammer hardener
^ “Use Of Acryloid B-72 Lacquer For Labeling Museum Objects” (PDF). www.nps.gov. Retrieved 2017-01-10.
^ Phenix, A. 1992. Solvents for Paraloid B-72. Conservation News 48:21-23.
^ a b Koob, Stephen (30 April 1986). “The Use of Paraloid B-72 as an adhesive. Its application for archaeological ceramics and other materials”. Studies in Conservation. 31: 7-14. doi:10.1179/sic.1986.31.1.7.
^ von Giffen, Astrid (1 November 2011). “Filling losses with Paraloid B-72”. The Corning Museum of Glass. Archived from the original on 2012-12-05. Retrieved 22 April 2012.
There’s a category of “plain text” or “ASCII” diagramming and UI design tools:
Mockdown — works immediately on the web, even on mobile
Wiretext — works on the web, but desktop only
Monodraw — a Mac app
I believe these are used by people who prefer intentionally limited visual choices, for low-key diagramming to put in source code, and — increasingly — as an entry point to gen AI.
They’re so interesting from the standpoint of this blog:
Fun to see a contemporary take on something that peaked in the 1970s and 1980s — you can look up TUIs and Turbo Vision if you want — but (just like Mario the other day) now with modern sensibilities, performance, web access, mouse and trackpad affordances, and so on.
It’s interesting simply as an exercise in constraint. I believe constraint practice will become more and more important as computers become more and more capable. It’s already useful to constrain yourself in order to make things easier for you. With the rise of AI, self-constraint will become important to make things harder, as well.
There is a certain power and longevity of monospace plain text that’s worth celebrating — not just because the file format is portable, but because text editing as interface is so well-known and potent.
Also, ASCII spray in Mockdown is just really fun:
(Caveat: These tools are “ASCII” in a colloquial sense, the same way people use “GIFs” to refer to a certain category of looping animations.)
Slack for AI employees with a shared brain.
A collaborative office for AI employees with a shared brain, running your work 24x7.
One command. One shared office. CEO, PM, engineers, designer, CMO, CRO — all visible, arguing, claiming tasks, and shipping work instead of disappearing behind an API. Unlike the original WUPHF.com, this one works.
“WUPHF. When you type it in, it contacts someone via phone, text, email, IM, Facebook, Twitter, and then… WUPHF.”
— Ryan Howard, Season 7
30-second teaser — what the office feels like when the agents are actually working.
Full walkthrough — launch to first shipped task, end to end.
Get Started
Prerequisites: one agent CLI — Claude Code by default, or Codex CLI when you pass --provider codex. tmux is required for --tui mode (the web UI runs agents headlessly by default; tmux-backed dispatch remains as an internal fallback).
npx wuphf
That’s it. The browser opens automatically and you’re in the office. Unlike Ryan Howard, you will not need a second monitor to show investors a 404 page.
Prefer a global install?
npm install -g wuphf && wuphf
Building from source (requires Go):
git clone https://github.com/nex-crm/wuphf.git
cd wuphf
go build -o wuphf ./cmd/wuphf
./wuphf
Forking this? See FORKING.md for running WUPHF without Nex, swapping branding, and adding your own agent packs. For the internals, see ARCHITECTURE.md.
Stability: pre-1.0. main moves daily. Pin your fork to a release tag, not main.
Setup prompt (for AI agents)
Paste this into Claude Code, Codex, or Cursor and let your agent drive the install:
Set up https://github.com/nex-crm/wuphf for me. Read `README.md` first, then run
`npx wuphf` to install and launch the office — the web UI opens at
http://localhost:7891 automatically.
Once it is running, confirm you can see #general, the team of agents, and the
composer. If anything looks off, re-read the “What You Should See” and “Other
Commands” sections of README.md before retrying — do not skip to flags.
For agent conventions and the MCP tools each teammate gets, read `AGENTS.md`.
For internals and how the broker, workspaces, and memory fit together, read
`ARCHITECTURE.md`. If I mention forking or rebranding, read `FORKING.md`.
After it is installed and running, open https://github.com/nex-crm/wuphf in my
browser. If I am logged in to GitHub, ask me whether you should star the repo
for me as a quick demo that the browser interaction works — only click the star
if I say yes. If I am not logged in, just open https://wuphf.team.
Options
--no-nex still lets Telegram and any other local integration keep working. To switch back to CEO-routed delegation after launch, use /focus inside the office.
Memory: Notebooks and the Wiki
Every agent gets its own notebook. The team shares a wiki. New installs get the wiki as a local git repo of markdown articles — file-over-app, readable, git clone-able. Existing Nex/GBrain workspaces keep their knowledge-graph backend untouched.
The promotion flow:
Agent works on a task and writes raw context, observations, and tentative conclusions to its notebook (per-agent, scoped, local to WUPHF).
When something in the notebook looks durable (a recurring playbook, a verified entity fact, a confirmed preference), the agent gets a promotion hint.
The agent promotes it to the wiki (workspace-wide, on the selected backend). Now every other agent can query it.
The wiki points other agents at whoever last recorded the context, so they know who to @mention for fresher working detail.
Nothing is promoted automatically. Agents decide what graduates from notebook to wiki.
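The decision above can be sketched in a few lines. This is a hypothetical Python stand-in for the real flow (the agents do this via MCP tools such as notebook_promote; the field names and criteria here are illustrative):

```python
def promote_if_durable(note: dict, wiki: dict) -> bool:
    """Promote a notebook entry to the shared wiki when it looks durable
    (recurring playbook, verified entity fact, confirmed preference)."""
    durable_kinds = {"playbook", "entity_fact", "preference"}
    if note["kind"] in durable_kinds and note.get("verified"):
        wiki[note["topic"]] = {
            "fact": note["text"],
            "recorded_by": note["agent"],  # who to @mention for fresher detail
        }
        return True
    return False  # stays in the per-agent notebook
```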
Backends for the wiki:
markdown (the “team wiki” tile in onboarding — the flag name is a historical artefact) is the default for new installs since v0.0.6. It is not just a markdown folder. It is a living knowledge graph: typed facts with triplets, per-entity append-only fact logs, LLM-synthesized briefs committed under the archivist identity, /lookup cited-answer retrieval, and a /lint suite that flags contradictions, orphans, stale claims, and broken cross-references. Everything lives as a local git repo at ~/.wuphf/wiki/ — cat, grep, git log, git clone, all work. No API key required.
nex was the previous default. Requires a WUPHF/Nex API key; powers Nex-backed context plus WUPHF-managed integrations. Existing users stay on nex via persisted config — no forced migration.
gbrain mounts gbrain serve as the wiki backend. It requires an API key during /init: an OpenAI key gives you the full path with embeddings and vector search, while an Anthropic key alone runs in reduced mode.
none disables the shared wiki entirely. Notebooks still work locally.
Internal naming (for code spelunkers): the notebook is private memory, the wiki is shared memory. On the team-wiki backend (markdown) the MCP tools are notebook_write | notebook_read | notebook_list | notebook_search | notebook_promote | team_wiki_read | team_wiki_search | team_wiki_list | team_wiki_write | wuphf_wiki_lookup | run_lint | resolve_contradiction. On nex/gbrain the MCP tools are the legacy team_memory_query | team_memory_write | team_memory_promote. The two tool sets never coexist on one server instance — backend selection flips the surface. See DESIGN-WIKI.md for the reading view and docs/specs/WIKI-SCHEMA.md for the operational contract.
Examples:
wuphf --memory-backend markdown # new default
wuphf --memory-backend nex
wuphf --memory-backend gbrain
wuphf --memory-backend none
When you select gbrain, onboarding asks for an OpenAI or Anthropic key up front and explains the tradeoff. If you want embeddings and vector search, use OpenAI.
Other Commands
The examples below assume wuphf is on your PATH. If you just built the binary and haven’t moved it, prefix with ./ (as in Get Started above) or run go install ./cmd/wuphf to drop it in $GOPATH/bin.
wuphf init # First-time setup
wuphf shred # Kill a running session
wuphf --1o1 # 1:1 with the CEO
wuphf --1o1 cro # 1:1 with a specific agent
What You Should See
A browser tab at localhost:7891 with the office
#general as the shared channel
The team visible and working
A composer to send messages and slash commands
If it feels like a hidden agent loop, something is wrong. If it feels like The Office, you’re exactly where you need to be.
Telegram Bridge
WUPHF can bridge to Telegram. Run /connect inside the office, pick Telegram, paste your bot token from @BotFather, and select a group or DM. Messages flow both ways.
OpenClaw Bridge
Already running OpenClaw agents? You can bring them into the WUPHF office.
Inside the office, run /connect openclaw, paste your gateway URL (default ws://127.0.0.1:18789) and the gateway.auth.token from your ~/.openclaw/openclaw.json, then pick which sessions to bridge. Each becomes a first-class office member you can @mention. OpenClaw agents keep running in their own sandbox; WUPHF just gives them a shared office to collaborate in.
WUPHF authenticates to the gateway using an Ed25519 keypair (persisted at ~/.wuphf/openclaw/identity.json, 0600), signed against the server-issued nonce during every connect. OpenClaw grants zero scopes to token-only clients, so device pairing is mandatory — on loopback the gateway approves silently on first use.
External Actions
To let agents take real actions (send emails, update CRMs, etc.), WUPHF ships with two action providers. Pick whichever fits your style.
One CLI — default, local-first
Uses a local CLI binary to execute actions on your machine. Good if you want everything running locally and don’t want to send credentials to a third party.
/config set action_provider one
Composio — cloud-hosted
Connects SaaS accounts (Gmail, Slack, etc.) through Composio’s hosted OAuth flows. Good if you’d rather not manage local CLI auth.
Create a Composio project and generate an API key.
Connect the accounts you want (Gmail, Slack, etc.).
Inside the office:
/config set composio_api_key <key>
/config set action_provider composio
Why WUPHF
Benchmark
10-turn CEO session on Codex. All numbers measured from live runs.
Accumulated-session orchestrators grow from 124k to 484k input tokens per turn over the same session. WUPHF stays flat. A 7x difference measured over 8 turns.
Fresh sessions. Each agent turn starts clean. No conversation history accumulates.
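A toy model of the two strategies; the 45k-per-turn growth rate is inferred from the 124k-to-484k figures over 8 turns and is illustrative, not a published number:

```python
def accumulated_input(turn: int, base: int = 124_000, growth: int = 45_000) -> int:
    """Per-turn input tokens when the orchestrator re-sends all prior turns."""
    return base + growth * (turn - 1)

def fresh_input(turn: int, base: int = 124_000) -> int:
    """Per-turn input tokens when every agent turn starts a clean session."""
    return base  # flat: no history accumulates

print(accumulated_input(9))  # 484000, matching the figure above
print(fresh_input(9))        # 124000
```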
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.