10 interesting stories served every morning and every evening.
🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.
🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world’s top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.
Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!
📄 Tech Report: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf
🤗 Open Weights: https://huggingface.co/collections/deepseek-ai/deepseek-v4
DeepSeek-V4-Pro
🔹 Enhanced Agentic Capabilities: Open-source SOTA in Agentic Coding benchmarks.
🔹 Rich World Knowledge: Leads all current open models, trailing only Gemini-3.1-Pro.
🔹 World-Class Reasoning: Beats all current open models in Math/STEM/Coding, rivaling top closed-source models.
DeepSeek-V4-Flash
🔹 Reasoning capabilities closely approach V4-Pro.
🔹 Performs on par with V4-Pro on simple Agent tasks.
🔹 Smaller parameter size, faster response times, and highly cost-effective API pricing.
Structural Innovation & Ultra-High Context Efficiency
🔹 Novel Attention: Token-wise compression + DSA (DeepSeek Sparse Attention).
🔹 Peak Efficiency: World-leading long context with drastically reduced compute & memory costs.
🔹 1M Standard: 1M context is now the default across all official DeepSeek services.
Dedicated Optimizations for Agent Capabilities
🔹 DeepSeek-V4 is seamlessly integrated with leading AI agents like Claude Code, OpenClaw & OpenCode.
🔹 Already driving our in-house agentic coding at DeepSeek.
The figure below showcases a sample PDF generated by DeepSeek-V4-Pro.
API is Available Today!
🔹 Keep base_url, just update model to deepseek-v4-pro or deepseek-v4-flash.
🔹 Supports OpenAI ChatCompletions & Anthropic APIs.
🔹 Both models support 1M context & dual modes (Thinking / Non-Thinking): https://api-docs.deepseek.com/guides/thinking_mode
⚠️ Note: deepseek-chat & deepseek-reasoner will be fully retired and inaccessible after Jul 24th, 2026, 15:59 (UTC Time). (Currently routing to deepseek-v4-flash non-thinking/thinking).
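The migration described above can be sketched without any network calls: you keep the existing base_url and change only the model string. A dependency-free sketch of the standard ChatCompletions request body (endpoint and auth handling omitted; field names follow the OpenAI schema, not anything specific to this release):

```python
# Sketch of the migration: only the "model" field changes; base_url stays
# the same. This builds a standard ChatCompletions request body without
# actually calling the API (endpoint/auth handling omitted).
import json

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style ChatCompletions request body."""
    return {
        "model": model,  # was "deepseek-chat" / "deepseek-reasoner"
        "messages": [{"role": "user", "content": prompt}],
    }

payload = chat_payload("deepseek-v4-flash", "Hello!")
print(json.dumps(payload, indent=2))
```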
🔹 Amid recent attention, a quick reminder: please rely only on our official accounts for DeepSeek news. Statements from other channels do not reflect our views.
🔹 Thank you for your continued trust. We remain committed to longtermism, advancing steadily toward our ultimate goal of AGI.
Prefix Note
As this post is gaining more attention than expected on HN, I want to make some things clear, although I thought they were obvious.
If you don’t read to the end, it sounds like a rant about Claude’s quality in general. But my concerns are focused on the support performance and token issues - fully aware of the challenges a company this size faces, and assuming the people at Anthropic are working hard to make things better. I am pointing at some “bad design decisions” they probably made. The “quality issues” are just the cherry on top of the cake.
After all, Claude Code is delivering and I use it to build stuff. Still, I experienced a degradation in quality: it just takes longer. I know this is a relative and highly subjective observation. That’s what the comment “The failure usually appears in front of the screen” in a later paragraph means: the agent is only as good as the operator and their instructions.
I have been coding for a couple of decades now, and I like to get my hands dirty. Three years ago I brought AI into my workflow. It started with code completion, and now I am at the point where I barely write code. For me, “software engineering” is not constituted by the simple act of writing code. It’s about conducting tools, being creative, understanding the problem, and delivering a solution. Still - and that is what the comment “While I was browsing the model’s thinking log - which I strongly suggest doing not only occasionally” in another paragraph means - I am escorting the agent while it’s working. I still have to figure out a concept, think about a data model, and verify its implementation. That’s what software engineering is about. It’s not n LOC…
Having said that - enjoy this post and stay happy, fellow developers.
First enthusiasm
A couple of weeks ago I subscribed to Claude Code, and during the first few weeks I had a really nice experience. It was fast, the token allowance was fair, and the quality was good.
I learned they had raised the token allowance for non-rush hours, and since they opposed some governmental rules, it felt good to support the right cause.
(づ  ̄ ³ ̄)づ
However… for about three weeks now my initial enthusiasm has been rapidly waning.
It began with an issue three weeks ago. I started working in the morning after about a ten-hour break; enough time for my tokens to refresh.
I sent two small questions to Claude Haiku. They were simple questions, not even related to the repository.
Suddenly, token usage spiked to 100%.
Have a nice break…
I contacted their “AI support bot”, which returned some default support nonsense and didn’t really understand the problem. So I asked for human support. A couple of days later, a reply arrived from what appeared to be a human support person. It began like this:
“Our systems are detecting your inquiry is regarding usage limits on your Pro or Max plan.”
Yeah, well — it’s the Pro plan. Seems like your systems weren’t actually queried; it was just a default intro and probably a default answer, because:
This was followed by an extensive answer, seemingly copied and pasted from their docs, explaining how daily and weekly limits work.
And it closed with the typically frustrating line that no customer likes to read at the end of an e-mail, the classic middle finger of customer support: we don’t care whether your problem is solved or not, we declared it closed.
“Note that further replies to this ticket may not be monitored. If your request is not regarding usage limits on your Pro or Max plan, or you need additional support, please visit our help page at”
Great! Sending an automated e-mail that does not address the actual problem and then closing the channel. Thanks for nothing, I guess? Or was I wrong? I asked Claude Haiku:
@Haiku:
See the customer’s request here and the response from the AI and later W***** - did they answer the concern/question of the customer?
(╯°_°)╯︵ ┻━┻
Declining quality
In the following days and weeks, the quality was far from satisfying my needs or matching my initial experience. While I used to be able to work on up to three projects at once, now the token limit was exhausted after two hours on a single project.
And the quality was degrading. I am fully aware this is quite subjective and that the quality of the agent is always heavily impacted by the operator. The failure usually appears in front of the screen. But hey, I also develop using GitHub’s Copilot and OpenAI’s Codex, and I run my own inference with OMLX and Continue using Qwen3.5-9B. I’m not the expert, and I’m lazy sometimes, but I probably know a thing or two.
Let me give you this wonderful example: yesterday I asked Claude Opus to refactor a project.
While I was browsing the model’s thinking log - which I strongly suggest doing not only occasionally - I found this:
Rather than editing every slider in JSX, I’ll add a generic initializer in ui-events.js that auto-injects value displays for all range inputs that lack one.
This is clearly bad practice. It’s a cheap workaround you wouldn’t expect even from a junior dev; it reads like someone who just doesn’t want to deliver a good result. My response:
“you can’t be serious — is this how you fix things? just WORKAROUNDS????”
At least Opus admitted:
“You’re right, that was lazy. Let me do it properly — add the labels directly in the JSX and wire them explicitly.”
Needless to say, this shortcut cost me around 50% of my five-hour token allowance.
(ง •̀_•́)ง
And even more…
Now this cache topic comes up - among others. At least they are talking about it openly. The problem: when you get back to work after some time, your conversation cache is gone and the model starts reading your codebase again. Cost-wise this is smart. But experience-wise? It means you paid tokens for the initial load and, after a forced break because the five-hour token window hit its limit, you pay again for the same load.
Think that’s all? Wait, I also got this funny anecdote: all of a sudden the weekly window changed from today to Monday. OK, I was thankful because it came with a reset to zero. But still: what is going on, Anthropic? Not only that — while I was working on my project, watching token usage with Argus-eyed vigilance, this little warning popped up:
Wait, what? I’m neither part of an organization nor do I see any hint why I suddenly have to worry about a “monthly usage limit” — also the hourly and weekly limits were still not exceeded. What is happening right now?
Turns out — two hours later - it allowed me to continue working. The warning was gone.
At least this documentation does not mention a monthly usage limit. And the settings page only lists the limits for the current session and week.
So… what is this monthly limit all about, Anthropic?
Sorry to let you down, Anthropic
I am a huge fan of the product. Theoretically everything just works like a charm; it offers so many opportunities. I built my
Hi friends,
I’ll be attending Babashka Conf on May 8 and Dutch Clojure Days on May 9.
If you’re attending either (or just visiting Amsterdam), drop me a line!
When I have an idea for a project, it tends to go in one of these two directions:
I just do it. Maybe I make a few minor revisions, but often it turns out exactly how I’d imagined and I’m happy.
I think, “I should look for prior art”. There’s a lot of prior art, dealing with a much broader scope than I’d originally imagined. I start to wonder if I should incorporate that scope. Or perhaps try to build my thing on top of the existing sorta-nearby-solutions. Or maybe I should just use the popular thing. Although I could do a better job than that thing, if I put a bunch of time into it. But actually, I don’t want to maintain a big popular project, nor do I want to put that much time into this project. Uh oh, now I’ve spent a bunch of time, having neither addressed the original issue nor experienced the joy of creating something.
I prefer the first outcome, and I think the pivotal factor is how well I’ve internalized my own success criteria.
For example, last weekend I hosted my friend Marcin and we decided it’d be fun to do some woodworking, so we threw together this shelf and 3d-printed hangers for my kitchen:
Absolute banger of a project:
brainstormed the design over coffee
did a few 3d-print iterations for the Ikea bin hangers (OnShape CAD, if you want to print your own)
used material leftover from my workbench
rounded the corner by eye with a palm sander
sealed the raw plywood edge with some leftover paint from a friend
done in a weekend
The main success criterion was to jam on woodworking with a friend, and that helped me not overthink the object-level success criteria: Just make a shelf for my exact kitchen!
In contrast, this past Friday I noticed difftastic did a poor job, so I decided to shop around for structural/semantic diff tools and related workflows (a topic I’ve never studied, that I’m increasingly interested in as I’m reviewing more and more LLM-generated code).
I spent 4 hours over the weekend researching existing tools (see my notes below), going through dark periods of both “semantic tree diffing is a PhD-level complex problem” and “why do all of these have MCP servers? I don’t want an MCP server”, before I came to my senses and remembered my original success criteria: I just want a nicer diffing workflow for myself in Emacs, I should just build it myself — should take about 4 hours.
I’m cautiously optimistic that, having had this realization and committing myself to a minimal scope, I’ll be able to knock out a prototype before running out of motivation.
However, other long-running interests of mine:
interfaces for prototyping hardware (discussed September 2023)
a programming language that fuses what I like about Clojure and Rust (November 2023)
a programming language for CAD (constraints, bidirectional editing, other dubious ideas)
seem to be deep in the well of outcome #2.
That is, I’ve spent hundreds of hours on background research and little prototypes, but haven’t yet synthesized anything that addresses the original motivating issue.
It’s not quite that I regret that time — I do love learning by reading — but I have a nagging sense of unease that my inner critic (fear of failure?) is silencing my generative tendencies, keeping me from the much more enjoyable (and productive!) learning by doing.
I think in these cases the success criteria has been much fuzzier: Am I trying to replace my own usage of Rust/Clojure?
Only for some subset of problems?
Or is it that I actually just need a playground to learn about language design/implementation, and it’s fine if I don’t end up using it?
Ditto for CAD: Am I trying to replace my commercial CAD tool in favor of my own?
Only for some subset of simple or particularly parametric parts?
Do I care if it’s useful for others?
Does my tool need to be legibly different from existing open-source tools?
It’s worth considering these questions, sure.
But at the end of the day, I’d much rather have done a lot than have only considered a lot.
So I’m trying to embrace my inner clueless 20-year-old and just do things — even if some turn out to be “obviously bad” in hindsight, I’ll still be coming out ahead on net =D
Conservation of scope creep
Of course, there’s only so much time to “just do things”, and there’s a balance to be had. I’m not sure how many times I’ll re-learn YAGNI (“you ain’t gonna need it”) in my career, but I was reminded of it again after writing a bunch of code with an LLM agent, then eventually coming to my senses and throwing it all out.
I wanted a Finda-style filesystem-wide fuzzy path search for Emacs.
Since I’ve built (by hand, typing the code myself!) this exact functionality before (walk filesystem to collect paths, index them by trigram, do fast fuzzy queries via bitmap intersections), I figured it’d only take a few hours to supervise an LLM to write all the code.
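The index described above can be sketched in a few lines. This is an illustrative toy, not Finda's actual code: plain Python sets stand in for the bitmaps, and all names are invented.

```python
# Toy version of the approach described above: collect paths, index them by
# trigram, and answer fuzzy queries by intersecting posting sets (the real
# thing uses bitmaps; plain Python sets keep the sketch readable).
from collections import defaultdict

def trigrams(s: str) -> set[str]:
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

def build_index(paths: list[str]) -> dict[str, set[int]]:
    index: dict[str, set[int]] = defaultdict(set)
    for i, path in enumerate(paths):
        for t in trigrams(path):
            index[t].add(i)
    return index

def search(index: dict[str, set[int]], paths: list[str], q: str) -> list[str]:
    qs = trigrams(q)
    if not qs:  # query shorter than 3 chars: nothing to filter on
        return list(paths)
    # Every query trigram must appear, so intersect the posting sets.
    ids = set.intersection(*(index.get(t, set()) for t in qs))
    return [paths[i] for i in sorted(ids)]

paths = ["/home/alice/notes.txt", "/home/alice/projects/finda/src/main.rs"]
idx = build_index(paths)
print(search(idx, paths, "finda"))  # only the second path matches
```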
I started with a “plan mode” chat, and the LLM suggested a library, Nucleo, which turned up since I wrote Finda (10 years ago, eek!).
I read through it, found it quite well-designed and documented, and decided to use it so I’d get its smart case and Unicode normalization functionality.
(E.g., query foo matches Foo and foo, whereas query Foo won’t match foo; similarly for cafe and café.)
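The smart-case rule can be sketched like this. It's a simplified illustration of the behavior just described; the real matcher also does fuzzy scoring and the Unicode normalization (cafe/café), which this toy omits.

```python
# Simplified illustration of smart case: an all-lowercase query matches
# case-insensitively, while any uppercase letter makes the query strict.
# (The real matcher also does fuzzy scoring and Unicode normalization.)
def smart_case_match(query: str, candidate: str) -> bool:
    if query == query.lower():
        return query in candidate.lower()
    return query in candidate

print(smart_case_match("foo", "Foobar"))  # True
print(smart_case_match("Foo", "foobar"))  # False
```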
Finding a great library wasn’t the problem. The problem was that Nucleo also supported some extra functionality: anchors (^foo only matches at the beginning of a line).
This got me thinking about what that might mean in a corpus that consists entirely of file paths.
Anchoring to the beginning of a line isn’t useful (everything starts with /), so I decided to try and interpret the anchors with respect to the path segments.
E.g., ^foo would match /root/foobar/ but not /root/barfoo/.
But to do this efficiently, the index needs to keep track of segment boundaries so that the query can be checked against each segment quickly.
But then we also need to handle a slash occurring in an anchored query (e.g., ^foo/bar) since that wouldn’t get matched when only looking at segments individually (root, foo, bar, and baz of a matching path /root/foo/bar/baz/).
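A sketch of the segment-aware anchoring worked out above, including the slash-in-anchored-query case. The function name and the exact prefix semantics are my assumptions, not the code from the project.

```python
# Sketch of segment-aware anchoring: ^foo must match at the start of some
# path segment, and a slash inside an anchored query (^foo/bar) must match
# across consecutive segments. Prefix semantics here are an assumption.
def anchored_match(query: str, path: str) -> bool:
    assert query.startswith("^")
    parts = query[1:].split("/")
    segments = [s for s in path.split("/") if s]
    n = len(parts)
    for i in range(len(segments) - n + 1):
        window = segments[i:i + n]
        # Parts followed by '/' in the query are complete segments, so they
        # must match exactly; only the final part is a prefix match.
        if all(w == p for w, p in zip(window[:-1], parts[:-1])) and \
                window[-1].startswith(parts[-1]):
            return True
    return False

print(anchored_match("^foo", "/root/foobar/"))           # True
print(anchored_match("^foo", "/root/barfoo/"))           # False
print(anchored_match("^foo/bar", "/root/foo/bar/baz/"))  # True
```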
Working through this took several hours: first throwing around design ideas with an LLM, having it write code to wrap Nucleo’s types, then realizing its code was bloated and didn’t spark joy, so finally writing my own (smaller) wrapper.
Then, after a break, I realized:
I can’t think of a situation where I’d ever wished Finda had anchor functionality
In a corpus of paths, I can anchor by just adding / to the start or end of a query (this works for everything except anchoring to the end of a filename).
So I tossed all of the anchoring code.
I’m pretty sure I still came out ahead compared to if I’d tried to write everything myself sans LLM or discussion with others, but I’m not certain.
Perhaps there’s some kind of conservation law here: Any increases in programming speed will be offset by a corresponding increase in unnecessary features, rabbit holes, and diversions.
Structural diffing
Speaking of unnecessary diversions, let me tell you everything I’ve learned about structural diffing recently — if you have thoughts/feelings/references in this space, I’d love to hear about ’em!
When we’re talking about code, a “diff” usually means a summary of the line-by-line changes between two versions of a file.
This might be rendered as a “unified” view, where changed lines are prefixed with + or - to indicate whether they’re additions or deletions.
For example:
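An illustrative unified diff matching that description (the unchanged context lines here are placeholders):

```diff
 milk
-coffee
 bread
+apple
```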
We’ve removed coffee and added apple.
The same diff might also be rendered in a side-by-side view, which can be easier to read when there are more complex changes:
The problem with these line-by-line diffs is that they’re not aware of higher-level structure like functions, types, etc. — if some braces match up somehow between versions, they might not be shown at all, even if the braces “belong” to different functions.
There’s a wonderful tool, difftastic, which tries to address this by calculating diffs using treesitter-provided concrete syntax trees.
It’s a huge improvement over line-based diffs, but unfortunately it doesn’t always do a great job matching entities between versions.
Here’s the diff that motivated this entire foray:
Note that it doesn’t match up struct PendingClick, it shows it deleted on the left and added on the right.
I haven’t dug into why difftastic fails to match here, but I do feel like it’s wrong — even if the overall diff would be longer, I’d still rather see PendingClickRequest and PendingClick matched up between both sides.
Here’s a summary of tools / references in the space:
The most “baked” and thoughtful semantic diff tool I found is, perhaps unsurprisingly, semanticdiff.com, a small German company with a free VSCode plugin and web app that shows diffs for github PRs. Unfortunately they don’t have any code libraries I can use as a foundation for the workflow I want.
this semanticdiff vs. difftastic blog post covers a lot of great details (including that difftastic doesn’t even show semantically meaningful indentation changes in python !!!)
one of the authors has great HN comments with hard-won background knowledge. E.g., they moved away from treesitter because it’s unreliable for semantics:
Context-sensitive keywords in particular were a constant source of annoyance. The grammar looks correct, but it will fail to parse because of the way the lexer works. You don’t want your tool to abort just because someone named their parameter “async”.
diffsitter
built on treesitter, has MCP server. README includes list of similar projects.
lots of github stars, but doesn’t seem particularly well-documented; I couldn’t find an explanation of how it works, but the difftastic wiki says it “runs longest-common-subsequence on the leaves of the tree”
gumtree
research / academic origin in 2014
requires Java, so no-go for my use case of a quick tool I can use via Emacs
mergiraf: treesitter-based merge-driver written in rust
very nice architecture overview; tool uses Gumtree algorithm
docs and adorable illustrations indicate this project was clearly written by a thoughtful human
semanticdiff.com author in HN comments:
> GumTree is good at returning a result quickly, but there are quite a few cases where it always returned bad matches for us, no matter how many follow-up papers with improvements we tried to implement. In the end we switched over to a dijkstra based approach that tries to minimize the cost of the mapping
For years, the best way to get 10 gigabit networking on laptops was to buy an expensive, large, and hot 10 GbE Thunderbolt adapter. With new RTL8159-based 10G USB 3.2 adapters coming onto the market, the bulky adapters might be a thing of the past. Just look at the size of the thing in comparison to my Thunderbolt adapters:
2.5G and even 5G USB adapters have been out for a while, but sometimes you need more bandwidth.
The 10G adapter I’m testing is this $80 model from WisdPi. That’s double the price of most 5G/2.5G adapters, but less than half what I paid for my Thunderbolt 10G adapters.
If you need 10 gigs, this might be the best option, if you use RJ45 and not SFP+. If you don’t need 10 gigs, a 2.5 or 5 Gbps adapter is still the best value.
Also, you might not even get 10 Gbps with these new adapters, depending on your computer. I’ll summarize why after the video:
USB is fast… frustrating
I tested this adapter on four computers:
Framework 13 with AMD Ryzen AI 5 340 (includes USB 4 / USB 3.2 Gen 2)
MacBook Neo (USB 3.1 and USB 2.0)
M4 MacBook Air (USB 4 / USB 3.1 Gen 2)
Desktop with AMD Ryzen 7900x with B650 motherboard (USB 3.2 Gen 2x2)
Getting those specific USB port specs is a bit of a chore (some websites don’t even tell you whether a port is ‘3.2 Gen 2’ or ‘3.0’, and Windows itself only says “USB 3.0” when you plug in a USB 3.2 Gen 2x2 device like this 10 Gbps NIC!).
I was only able to get full 10 Gbps speed (minus a little overhead) on the AMD Desktop, which has a single USB 3.2 Gen 2x2 port good for 20 Gbps of throughput. The other machines got around 6 – 7 Gbps:
The Macs have the same per-port bandwidth (USB 3.1 Gen 2x1, for 10 Gbps), but the performance is consistently worse than the Framework.
On the Macs, the adapter was correctly identified when I plugged it in, and worked straightaway, with no extra driver installation. The ‘Hardware’ tab in the Network settings incorrectly reported a connection speed of 2500Base-T.
On Windows, the adapter was recognized when plugged in, but wouldn’t connect to the network until I installed the latest Realtek driver, downloaded from their website.
Bidirectional bandwidth testing offered an interesting contrast; the Macs both handled traffic symmetrically, while the Framework was wildly disparate. The desktop PC gave a full 9.5 Gbps down, and around 5 Gbps up.
The main takeaway is that this adapter only reaches its full potential if you have a USB 3.2 Gen 2x2 20 Gbps port.
And considering the mess of USB naming over the past decade—and the fact that Microsoft reports all USB 3.x connections as “3.0” in their Device Settings pane—good luck figuring out your own computer’s support without glancing at spec sheets!
A few computers I’ve seen actually label the USB port speed (e.g. ‘10’ or ‘20’), but that seems fairly rare. Most manufacturers seem to follow Apple in eschewing labeling entirely!
At least Apple has the negotiated port speed visible in the ‘System Information’ app—I couldn’t find that detail anywhere on Windows.
5G and 2.5G a better value?
With reduced speed due to inadequate USB port bandwidth, would a 2.5 Gbps or 5 Gbps adapter be a better value?
Testing the WisdPi 5 Gbps adapter pictured above on my M4 Air, it got 4.6 Gbps. The 10 Gbps adapter is 1.4x faster, but for more than 2x the price ($30 vs $80).
I think, if you already have a 10 Gbps network, you use RJ45 rather than SFP+ connections, and you want a more compact adapter (compared to the bulky, hot Thunderbolt adapters), it’s a good deal. But if you need the full 10 Gbps or SFP+ support, Thunderbolt adapters are still the best choice when your computer’s ports don’t support USB 3.2 Gen 2x2.
If you don’t need 10 Gbps, though, stick to 2.5 or 5 Gbps adapters—they are still the best value right now.
Thermals and Power Draw
I also checked thermals and power draw—though my tests are not comprehensive. Measuring the absolute power draw is difficult because my USB-C power measurement devices downgrade the connection speed to USB 2, which means I’m not testing at full performance.
At the slower USB 2 speed, the adapter uses about 0.86 Watts of power.
And it doesn’t get that hot, which was surprising. All my Aquantia-based 10 gig adapters turn into little ovens. That’s why they’re so big: the enclosures are giant heatsinks.
But the WisdPi only got up to 42.5°C after running a bidirectional iperf3 test for a few minutes.
That’s warm, but not so hot that I’d burn myself touching it like I have with other 10 gig adapters.
Conclusion
If $80 is too rich, this isn’t the only option that uses the new chip; AliExpress is littered with alternatives. And you can get it on PCI Express cards, which bypasses the USB port requirement on desktop PCs.
In the midst of all the price inflation in personal computing, it’s nice to find a new device that’s cheaper, faster, and (depending on your USB port) better.
Back in March, Firefox 149 was released with many changes, like a free built-in VPN, a Split View that allows the loading of two pages side by side, and the XDG portal file picker as the new default on Linux.
However, an interesting addition had gone mostly unnoticed until now.
Firefox has Some Brave in it now
Shivan Kaul Sahib, the VP of Privacy and Security at Brave, has put out a blog post about something that didn’t make it into the Firefox 149 release notes at all. The browser now ships adblock-rust, Brave’s open source Rust-based ad and tracker blocking engine.
The change landed via Bugzilla Bug 2013888, which was filed and handled by Mozilla engineer Benjamin VanderSloot. The bug is titled “Add a prototype rich content blocking engine,” and keeps the engine disabled by default with no user interface or filter lists included.
For informational purposes, adblock-rust is the engine behind Brave’s native content blocker (aka ad blocker). It is written in Rust and licensed under MPL-2.0, handling network request blocking, cosmetic filtering, and features a uBlock Origin-compatible filter list syntax.
Shivan also mentions that Waterfox, the popular Firefox fork, has adopted adblock-rust, building directly upon Firefox’s own implementation.
Want to test it?
Before starting, head to Enhanced Tracking Protection’s shield icon in the address bar and turn it off for the website you will be testing this with. This way, adblock-rust is doing the work, not Firefox’s existing feature.
🚧
I suggest testing this experimental feature on a throwaway installation of Firefox.
Now open a new tab and go to about:config. Accept the warning when it shows up. Search for privacy.trackingprotection.content.protection.enabled and set it to “true” by clicking on the toggle. 👇
Next, search for privacy.trackingprotection.content.protection.test_list_urls, click on the “Edit” button, and paste the following value to add the EasyList and EasyPrivacy filter lists to Firefox:
https://easylist.to/easylist/easylist.txt|https://easylist.to/easylist/easyprivacy.txt
Remember to click on the blue-colored “Save” button before moving on.
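If you prefer, the same two preferences can be set from a user.js file in your Firefox profile directory, a standard Firefox mechanism equivalent to the about:config steps above:

```javascript
// user.js in the Firefox profile directory; applied on every startup.
user_pref("privacy.trackingprotection.content.protection.enabled", true);
user_pref("privacy.trackingprotection.content.protection.test_list_urls",
  "https://easylist.to/easylist/easylist.txt|https://easylist.to/easylist/easyprivacy.txt");
```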
Left: advertisement shown; Right: advertisement blocked
Now visit a site with known ads, like Yahoo (as I did above). If it’s working, ad slots will still render in the page layout, but the actual ad content will be blocked. In my test, the banner on Yahoo came up showing only the text “Advertisement” with the advert bit stripped out.
About the author
Sourav Rudra
A nerd with a passion for open source software, custom PC builds, motorsports, and exploring the endless possibilities of this world.
Last year I bought a Rodecaster Duo to solve some audio woes: it lets my girlfriend and me each have a microphone on our respective computers when gaming together and talking on Discord in the same room without any echo, and I can swap it over to my work PC easily. The Rodecaster is really nice; it’s pretty effortless to use and works great for our home. I would gladly recommend it to anyone looking for a similar solution.
As is usual for any device in my house, when it’s time to update the firmware I try to ensure I have enough tooling in place to capture how the update works, or at a minimum to capture a firmware blob I can try to reverse engineer and poke around in for fun and/or to see the often horrific reality that is the industry we work in.
fw update
I was feeling pretty lazy and assumed that rode would dump the firmware somewhere on my computer before flashing the device, so i set up Instruments on macos to capture disk activity, and found where the fw was dumped, and surprisingly it was just a gzipped tarball. The device I did this update on happened to have the ability to write to USB disks disabled, so the update actually failed.
Poking around a bit, I found the binaries that actually run on the device, as well as a shell script that handles the updates themselves. There are two partitions on the disk, so if you brick one it boots from the other. There are also no signature checks on the incoming firmware. Many vendors of this style of device require signed firmware nowadays, so it's kind of nice to actually own a device I can modify. I also noticed that SSH seemed to be enabled by default; I plugged in an ethernet cable and confirmed SSH is indeed enabled, with pubkey auth only. Here are the keys that are added by default:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCX/bCFTDgViuPvdFL/VMMVRrw9b5S8HcDQk17qoCEYwmI+IIG8rEAsLiaeCOwyhf9IN+8/LRaN0Z5ZfU3WMbmsKEg8zd1Yvqq74nFbhO47vbtzmCi9S4ucIKkBEVOyvyN5lt9hWf5t5nZSmlfldZK3Pem5y8wHM5A+K/gSnzp4gwQ1QYfFb068uQ+ciIdOhb8SkUs8CwzotglIbp19I6ZmXmsNj/TmpbUf5rMfUAf1gysZ5j1UdRWrvWVh5daqvZRsBBPbXEeJfDU3Nr3HR14XYt9mgexrz/5oyKSj/lQYLmh9cDfsxvkGNIQ8fF9l+n2L1KZM4lLgiGk4KFBjQHaIBZx9OebCiiZCO4NTJUBDk9a+SZpiDiipADV07s7vTInYyFA6GrmKtnq3M6upT4WJBvVuL/BMnK5yY1RZtoqox2/pcCg2rH5S1GIy0v0HFJisl7kWInlaG2mdsaCx19wAjCFe/qT9LyxjQ6+0rArI55/JJFDkNeMjrewRQwNdASjCox8vqXCBfjvsR9qv70/ywcymgsnLAnq2LuYg5FYwMMDYOvVnhACC+BYTdNDTn5oeMIjQCUenY/DPCHpJkf4YOf3YCMUTEU9tExhtwW/X+m21hS3+STLtTfqbUeg9CeuPQZgfl9vc65n3tMxAdlEGEDoTaNMAgr2TzJv92Ka9iQ==
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDaNyzPfIcEeQsfzyQs/wyX6mX52kiS+4eNHfCaxFlgj
Since our update failed, I swapped to a Windows PC, set up Wireshark with USBPcap, and ran the update through the Rodecaster app. I roughly glanced at the pcap file to find where the update started, since there was a lot of traffic (I was also using the device for audio on another computer). I wrote down the packet numbers I thought were interesting and threw them at Claude Code to dig through the pcap while I was doing other stuff.
A bit later (CC only took 10 minutes or so, but I was busy for a while) I came back to a breakdown of the structure and a Python script to manually update a device. The RODECaster app sends some HID control traffic to the device: one command to enter update mode (the 'M' command), then another (the 'U' command) to trigger the update. Both are just single ASCII characters sent over HID report 1.
I am but a YAML-writing slave and sometimes a below-average Ghidra user, and I don't often interact with hardware devices, so getting some help from CC in discovery was useful, as was being pointed at resources to actually learn more about HID devices.
The structure was pretty simple: you send the 'M' command, then copy archive.tar.gz and archive.md5 (obviously just the md5sum of the archive) onto the newly exposed disk, then send the 'U' command to trigger the flashing itself.
so the flow is:
plug in the rodecaster and power it on (or vice versa)
send the ‘M’ command
mount the disk and copy archive.tar.gz and archive.md5 to it
chmod 777 both of them because I don't care to figure out how to do it properly
unmount the disk
send the ‘U’ command
wait for the thing to reboot into your new firmware
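The flow above can be sketched in Python. To be clear, this is a hedged sketch, not RODE's tooling: the vendor/product IDs, the mount point, and the `hid` binding style are assumptions on my part, and the only thing taken from the pcap is the single-ASCII-char-on-report-1 command format.

```python
# Sketch of the manual update flow. VID/PID and mount point are assumptions;
# read the real values from lsusb / your OS before trying this.
import hashlib
import shutil
import subprocess
import time

RODE_VID = 0x19F7         # assumed RODE USB vendor ID; verify with lsusb
RODECASTER_PID = 0x0000   # placeholder, not the real product ID

def build_command_report(cmd: str) -> bytes:
    """Both update commands are a single ASCII character on HID report 1."""
    return bytes([1, ord(cmd)])

def flash(archive: str, mount_point: str) -> None:
    import hid  # hidapi bindings ("pip install hid"); API style assumed
    dev = hid.Device(RODE_VID, RODECASTER_PID)
    dev.write(build_command_report("M"))   # expose the update disk
    time.sleep(5)                          # give the disk time to enumerate
    shutil.copy(archive, f"{mount_point}/archive.tar.gz")
    md5 = hashlib.md5(open(archive, "rb").read()).hexdigest()
    with open(f"{mount_point}/archive.md5", "w") as f:
        f.write(f"{md5}  archive.tar.gz\n")  # md5sum-style sidecar
    subprocess.run(["umount", mount_point], check=True)
    dev.write(build_command_report("U"))   # trigger the flash and reboot
```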
custom firmware
I was still working from my Mac and wanted to create some CFW to be able to SSH into the device, so I just used a container to enable password authentication for SSH (don't shoot me), add my own pubkey to the authorized keys, and dump out an archive for me to flash. You don't really need much to actually flash the device; see here (an example of the functions; it's not really much to add the rest).
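Repacking the archive itself is simple enough to sketch. The path to authorized_keys inside the rootfs is an assumption here (check the real tarball layout), and the md5 sidecar just mirrors md5sum output as described above:

```python
# Sketch of a cfw repack: unpack the stock tarball, append a pubkey,
# repack, and emit the md5 sidecar the updater expects.
# The authorized_keys path inside the archive is an assumption.
import hashlib
import os
import tarfile

MY_PUBKEY = "ssh-ed25519 AAAA... me@laptop\n"  # your own key here

def repack(stock: str, out: str, root: str = "fw") -> None:
    with tarfile.open(stock) as tar:
        tar.extractall(root)
    # append our key next to RODE's defaults (path assumed, not verified)
    auth = os.path.join(root, "root/.ssh/authorized_keys")
    with open(auth, "a") as f:
        f.write(MY_PUBKEY)
    with tarfile.open(out, "w:gz") as tar:
        for name in sorted(os.listdir(root)):
            tar.add(os.path.join(root, name), arcname=name)
    # md5sum-style sidecar alongside the archive
    md5 = hashlib.md5(open(out, "rb").read()).hexdigest()
    with open(out.replace(".tar.gz", ".md5"), "w") as f:
        f.write(f"{md5}  {os.path.basename(out)}\n")
```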
Run your script to flash the thing and bingo bongo, you can SSH to it.
conclusion
I was really surprised that I could flash firmware to this so easily, and it is really nice to own a device outright. It's a really nice piece of kit that just kind of blends into the background; I never have to think about it. I don't really know why SSH was enabled, or why that key is added by default, but I submitted a ticket to RODE about it, as I could not find an obvious security email to report to. I have not heard back, but I will watch to see whether future firmware updates change anything.
It's been a few months since I've done anything with this, and I am trying to just dump my thoughts into a notepad, edit only very lightly, and then just poast. I really love all of the RODE stuff I have, and yet again I just want to buy more gear.
If you want to ask me anything about this, you can reach me at the primary letter of this domain, at this domain.
thanks computer, until next time
Seeking breaks otherwise. We might be able to just fflush() before seeking instead?
Turns out DosBox-X was having trouble with the Sound Blaster or something;
standard DosBox works correctly directly from the interrupt handler, and
without doubling the buffer size.
This is MUCH faster than just leaving buffering disabled, and also works
around getting bogus reads after an fseek. SDL_LoadWAV on test/sample.wav
no longer takes several seconds to finish, and comes up with the correct
data.
I wonder if we’re triggering this in LoadWAV because we’re malloc’ing data
between seeks/reads, and it’s causing the djgpp transfer buffer to change. Or
maybe the Fat DS trick is confusing it? I don’t know, I haven’t had time to
debug it, it might just be a legit libc bug in djgpp too, for all I know.
This uses an old trick we used in SDL 1.2 for MacOS Classic, which did its
audio callback in a hardware interrupt. If the audio is locked when the
interrupt fires, make a note of it and return immediately. When the lock is
released, if the interrupt has been fired, run the audio device iteration
right then.
Since there isn’t a big device lock in SDL3 (available to the app, at least),
this keeps a counter of when any SDL_AudioStream is locked, which is probably
good enough.
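The deferred-interrupt trick described above can be sketched as a small pattern. This is not SDL's actual code: the class and method names below are made up for illustration, with Python's threading lock standing in for the hardware-interrupt context.

```python
# Minimal sketch of the pattern: if an "interrupt" fires while audio is
# locked, note it and return; run the deferred iteration on final unlock.
import threading

class DeferredAudio:
    def __init__(self):
        self._lock = threading.Lock()
        self._locked_streams = 0   # counter of currently-locked streams
        self._pending = False      # did an interrupt fire while locked?
        self.iterations = 0        # completed audio device iterations

    def lock_stream(self):
        with self._lock:
            self._locked_streams += 1

    def unlock_stream(self):
        run = False
        with self._lock:
            self._locked_streams -= 1
            if self._locked_streams == 0 and self._pending:
                self._pending = False
                run = True
        if run:
            self.iterate()  # run the deferred iteration right then

    def on_interrupt(self):
        with self._lock:
            if self._locked_streams > 0:
                self._pending = True  # make a note and return immediately
                return
        self.iterate()

    def iterate(self):
        self.iterations += 1  # stand-in for the real audio callback work
```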
This uses VESA interfaces to manage the display and works with the software
renderer.
Events aren’t hooked up yet, so prepare to close DosBox on each run. :)
…upport.
This gets most of the rendering examples, which use SDL_GetBasePath() to
find textures to load, working.
Of course Quake 1 solved this better, haha. It’s smart: less memory, dirt
simple, and you don’t even have to worry about synchronizing with the
interrupt handler, because it’s safe for both sides no matter when an
interrupt fires.
[sdl-ci-filter djgpp]
[sdl-ci-artifacts]
- SDL_runapp.c: Add SDL_PLATFORM_DOS to the exclusion list so the generic SDL_RunApp() is disabled when the DOS-specific one is compiled.
- SDL.c: Exclude SDL_Gtk_Quit() on DOS. DJGPP defines __unix__ which sets SDL_PLATFORM_UNIX, but DOS has no GTK/display server. The GTK source is not compiled (CMake UNIX is false for DOS) so this was a link error.
- sdlplatform.cmake: Add DOS case to SDL_DetectCMakePlatform so the platform is properly detected from CMAKE_SYSTEM_NAME=DOS.
- i586-pc-msdosdjgpp.cmake: Add i386-pc-msdosdjgpp-gcc as a fallback compiler name, since some DJGPP toolchain builds use the i386 prefix.
- Implement double-buffered page-flipping for VBE modes with >1 image page
- Save and restore full VBE state on video init/quit for clean mode switching
- Improve DOS keyboard handling: support extended scancodes and Pause key
- Lock ISR code/data to prevent page faults during interrupts
- Always vsync when blitting in single-buffered modes to reduce tearing
Move audio mixing out of the IRQ handler to the main loop for improved stability and to avoid reentrancy issues. Add an SDL_DOS_PumpAudio function, update DMA buffer handling, and adjust the sample rate to 22050 Hz.
Silence stale DMA buffer halves to prevent stutter during load.
Detect SB version and select 8-bit mono or 16-bit stereo mode.
Handle DMA and DSP setup for both SB16 and pre-SB16 hardware.
Add FORCE_SB_8BIT option for testing in DOSBox.
- Poll Sound Blaster DSP status instead of using a fixed delay after speaker-on
- Clarify DPMI conventional memory is always locked; update comments
- Document and justify DMA memory allocation strategy
- Free IRET wrapper after restoring interrupt vector to avoid leaks
- Throttle joystick axis polling to ~60 Hz to reduce BIOS timing loop cost
- Always poll joystick buttons directly for responsiveness
Implement banked framebuffer access for VBE 1.2+ modes without LFB.
Detect and initialize banked modes, copy framebuffer data using bank
switching, and blank the framebuffer on mode set. Page-flipping is
disabled in banked mode.