10 interesting stories served every morning and every evening.
The DeepSeek API uses an API format compatible with OpenAI/Anthropic. By modifying the configuration, you can use the OpenAI/Anthropic SDK or software compatible with the OpenAI/Anthropic API to access the DeepSeek API.
* The model names deepseek-chat and deepseek-reasoner will be deprecated on 2026/07/24. For compatibility, they correspond to the non-thinking mode and thinking mode of deepseek-v4-flash, respectively.
Invoke The Chat API
Once you have obtained an API key, you can access the DeepSeek model using the following example scripts in the OpenAI API format. This is a non-streaming example; set the stream parameter to true to get a streaming response.
For examples using the Anthropic API format, please refer to Anthropic API.
curl
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" \
  -d '{
    "model": "deepseek-v4-pro",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    "thinking": {"type": "enabled"},
    "reasoning_effort": "high",
    "stream": false
  }'
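A minimal Python sketch of the same request using only the standard library (the OpenAI SDK pointed at the same base URL works equivalently; the guard around the network call just makes the snippet harmless to run without a key):

```python
# Sketch of the same chat completion request in Python (stdlib only).
# The payload mirrors the curl example above; the request is skipped
# unless DEEPSEEK_API_KEY is set in the environment.
import json
import os
import urllib.request

payload = {
    "model": "deepseek-v4-pro",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "thinking": {"type": "enabled"},
    "reasoning_effort": "high",
    "stream": False,
}

api_key = os.environ.get("DEEPSEEK_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://api.deepseek.com/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```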
First enthusiasm
A couple of weeks ago I subscribed to Claude Code, and during the first few weeks I had a really nice experience. It was fast, the token allowance was fair, and the quality was good.
I learned they had raised the token allowance for non-rush hours, and since they opposed some governmental rules, it felt good to support the right cause.
(づ  ̄ ³ ̄)づ
However… for about three weeks now my initial enthusiasm has been rapidly waning.
It began with an issue three weeks ago. I started working in the morning after about a ten-hour break; enough time for my tokens to refresh.
I sent two small questions to Claude Haiku. They were simple questions, not even related to the repository.
Suddenly, token usage spiked to 100%.
Have a nice break…
I contacted their “AI support bot”, which returned some default support nonsense and didn’t really understand the problem. So I asked for human support. A couple of days later, a (what appeared to be) human support person sent a reply. It began like this:
“Our systems are detecting your inquiry is regarding usage limits on your Pro or Max plan.”
Yeah, well — it’s the Pro plan. Seems like your systems weren’t actually queried; it was just a default intro and probably a default answer, because:
This was followed by what seemed to be an extensive copy-and-paste answer from their docs explaining how daily and weekly limits work.
And it closed with that typically frustrating line no customer likes to read at the end of an e-mail, the classic middle finger of customer support: we don’t care whether your problem is solved or not, we declared it closed.
“Note that further replies to this ticket may not be monitored. If your request is not regarding usage limits on your Pro or Max plan, or you need additional support, please visit our help page at”
Great! Sending an automated e-mail that does not address the actual problem and then closing the channel. Thanks for nothing, I guess? Or was I wrong? I asked Claude Haiku:
@Haiku:
See the customer’s request here and the response from the AI and later W***** - did they answer the concern/question of the customer?
(╯°_°)╯︵ ┻━┻
Declining quality
In the following days and weeks, the quality was far from satisfying my needs or matching my initial experience. While I used to be able to work on up to three projects at once, now the token limit was exhausted after two hours on a single project.
And the quality was degrading. I am fully aware this is quite subjective and that the quality of the agent is always heavily impacted by the operator; the failure usually appears in front of the screen. But hey, I also develop using GitHub’s Copilot and OpenAI’s Codex, and I run my own inference with OMLX and Continue using Qwen3.5 – 9B. I’m no expert, and I’m lazy sometimes, but I probably know a thing or two.
Let me give you this wonderful example: yesterday I asked Claude Opus to refactor a project.
While I was browsing the model’s thinking log - which I strongly suggest doing not only occasionally - I found this:
Rather than editing every slider in JSX, I’ll add a generic initializer in ui-events.js that auto-injects value displays for all range inputs that lack one.
This is clearly bad practice. It’s a cheap workaround you wouldn’t expect even from a junior dev; it reads like someone who just doesn’t want to deliver a good result. My response:
“you can’t be serious — is this how you fix things? just WORKAROUNDS????”
At least Opus admitted:
“You’re right, that was lazy. Let me do it properly — add the labels directly in the JSX and wire them explicitly.”
Needless to say, this shortcut cost me around 50% of my five-hour token allowance.
(ง •̀_•́)ง
And even more…
Now this cache topic comes up, among others. At least they are talking about it openly. The problem was: when you get back to work after some time, your conversation cache is gone and the model starts reading your codebase again. Cost-wise this is smart. But experience-wise? It means you paid tokens for the initial load and, after a forced break because the five-hour token window hit its limit, you pay again for the same load.
Think that’s all? Wait, I also got this funny anecdote: all of a sudden the weekly window changed from today to Monday. OK, I was thankful because it came with a reset to zero. But still: what is going on, Anthropic? Not only that — while I was working on my project, watching token usage with Argus-eyed vigilance, this little warning popped up:
Wait, what? I’m neither part of an organization nor do I see any hint as to why I suddenly have to worry about a “monthly usage limit” — and the hourly and weekly limits were still not exceeded. What is happening right now?
Turns out, two hours later it allowed me to continue working. The warning was gone.
At least this documentation does not mention a monthly usage limit. And the settings page only lists the limits for the current session and week.
So… what is this monthly limit all about, Anthropic?
Sorry to let you down, Anthropic
I am a huge fan of the product. Theoretically everything just works like a charm; it offers so many opportunities. I built my
A US special forces soldier involved in the capture of Venezuelan President Nicolás Maduro was arrested and charged for allegedly betting on that operation, netting him $400,000 in profits.
According to an indictment unsealed Thursday, Master Sgt. Gannon Ken Van Dyke opened an account in late December on Polymarket, one of the best-known prediction markets. He wagered over $32,000 that Maduro would be “out” by January. The bet was a long shot.
But Van Dyke was involved in the planning and execution of Operation Absolute Resolve, prosecutors allege, and had access to classified information before he placed the bet. His winnings, though anonymous, caught the attention of law enforcement almost immediately.
Van Dyke, an active duty soldier stationed at Fort Bragg, faces five criminal charges for stealing and misusing confidential government information, theft and fraud.
He appeared in court in North Carolina on Friday and paid a $250,000 bond. He is set to be arraigned in New York City on Tuesday and also had to surrender his passport.
Van Dyke allegedly made 13 bets from December 27 to January 2, the last being hours before the overnight capture. Prosecutors said Van Dyke sent his more than $400,000 in profits to a foreign cryptocurrency vault before he deposited them in an online brokerage account.
A master sergeant in the Army is a senior noncommissioned officer, considered a key tactical leader and technical expert and serving as the principal NCO typically at the Army battalion level. Senior NCOs are often looked to for setting and upholding the standard for more junior soldiers in the unit.
“Those entrusted to safeguard our nation’s secrets have a duty to protect them and our armed service members, and not to use that information for personal financial gain,” said Jay Clayton, US attorney for the Southern District of New York.
Van Dyke was photographed just after the operation — and hours after he placed his final bet — on “what appears to be the deck of a ship at sea, at sunrise wearing U.S. military fatigues, and carrying a rifle, standing alongside three other individuals wearing U.S. military fatigues,” court documents say.
Van Dyke profited more than $400,000, prosecutors say. He then allegedly moved those winnings to a foreign cryptocurrency vault before he deposited them in an online brokerage account in what prosecutors called an attempt to conceal their origin.
The Commodity Futures Trading Commission filed a related complaint against Van Dyke on Thursday, seeking restitution, disgorgement and civil monetary penalties.
CNN reported last month that federal prosecutors were investigating the Maduro trade, according to a person familiar with the matter. The chiefs of the securities and commodity fraud unit at the US attorney’s office in Manhattan met with representatives at Polymarket last month.
After the bets were placed, the US military launched a covert operation that extradited Maduro from the presidential palace in Caracas in an overnight capture while coming under heavy fire. Maduro was transported to New York to face federal drug-trafficking related charges. He has pleaded not guilty.
Polymarket in a post on X said, “When we identified a user trading on classified government information, we referred the matter to the DOJ & cooperated with their investigation. Insider trading has no place on Polymarket. Today’s arrest is proof the system works.”
ABC News first reported Thursday’s arrest.
Trading on prediction markets has exploded the past year, with users now spending a few billion dollars each week on such sites.
Lawmakers in Congress have introduced more than a dozen new bills this year to further regulate prediction markets. Some of the bills, which gained bipartisan support, would stiffen penalties against government officials who engage in insider trading.
Trump told reporters Thursday he is concerned about the growing trend of betting on geopolitical events. Asked about the charges against the US soldier, the president said he was not familiar with the specifics of the incident but compared it to baseball’s all-time hit leader Pete Rose.
“That’s like Pete Rose betting on his own team,” Trump said, referring to the late baseball player who was banned from baseball for gambling.
Pressed on whether he is concerned about betting tied to the war with Iran, Trump said it’s a global issue.
“Well I think that the whole world, unfortunately, has become somewhat of a casino,” Trump said, adding that such betting is happening “all over the world, and every place they’re doing these betting things.”
“Now, I think that I’m not happy with it,” he concluded.
The Trump administration approved Polymarket last year to start offering trades for American customers, but its US-facing site isn’t fully operational yet. The Maduro-related trades occurred on Polymarket’s highly popular international site.
That site operates out of the reach of US regulations — which is how it’s able to offer markets related to war, which is illegal under federal law. But experts say Americans can easily access the offshore site with a virtual private network, or VPN.
There is a debate in the prediction market industry over the role of insiders in prediction markets. Some experts see these markets as a vehicle for information to flow more freely from insiders to the general public.
Asked about insider trading risks, Polymarket’s CEO told Axios in November it was “super cool” that his platform “creates this financial incentive for people to go and divulge the information to the market,” including insiders.
Polymarket rolled out new rules in March, to “clarify three core categories of prohibited insider trading conduct.”
They banned trades based on information that users were legally required to keep confidential, and trades based on tips from someone with the same obligation. They also said people in “a position of authority or influence” to affect the outcome of an event cannot participate in any related markets.
This story has been updated following Van Dyke’s court appearance on Friday.
CNN’s Marshall Cohen, Haley Britzky and Alejandria Jaramillo contributed to this report.
Hi friends,
I’ll be attending Babashka Conf on May 8 and Dutch Clojure Days on May 9.
If you’re attending either (or just visiting Amsterdam), drop me a line!
When I have an idea for a project, it tends to go in one of these two directions:
I just do it. Maybe I make a few minor revisions, but often it turns out exactly how I’d imagined and I’m happy.
I think, “I should look for prior art”. There’s a lot of prior art, dealing with a much broader scope than I’d originally imagined. I start to wonder if I should incorporate that scope. Or perhaps try to build my thing on top of the existing sorta-nearby-solutions. Or maybe I should just use the popular thing. Although I could do a better job than that thing, if I put a bunch of time into it. But actually, I don’t want to maintain a big popular project, nor do I want to put that much time into this project. Uh oh, now I’ve spent a bunch of time, having neither addressed the original issue nor experienced the joy of creating something.
I prefer the first outcome, and I think the pivotal factor is how well I’ve internalized my own success criteria.
For example, last weekend I hosted my friend Marcin and we decided it’d be fun to do some woodworking, so we threw together this shelf and 3d-printed hangers for my kitchen:
Absolute banger of a project:
brainstormed the design over coffee
did a few 3d-print iterations for the Ikea bin hangers (OnShape CAD, if you want to print your own)
used material leftover from my workbench
rounded the corner by eye with a palm sander
sealed the raw plywood edge with some leftover paint from a friend
done in a weekend
The main success criterion was to jam on woodworking with a friend, and that helped me not overthink the object-level success criteria: Just make a shelf for my exact kitchen!
In contrast, this past Friday I noticed difftastic did a poor job, so I decided to shop around for structural/semantic diff tools and related workflows (a topic I’ve never studied, that I’m increasingly interested in as I’m reviewing more and more LLM-generated code).
I spent 4 hours over the weekend researching existing tools (see my notes below), going through dark periods of both “semantic tree diffing is a PhD-level complex problem” and “why do all of these have MCP servers? I don’t want an MCP server”, before I came to my senses and remembered my original success criteria: I just want a nicer diffing workflow for myself in Emacs, I should just build it myself — should take about 4 hours.
I’m cautiously optimistic that, having had this realization and committing myself to a minimal scope, I’ll be able to knock out a prototype before running out of motivation.
However, other long-running interests of mine:
interfaces for prototyping hardware (discussed September 2023)
a programming language that fuses what I like about Clojure and Rust (November 2023)
a programming language for CAD (constraints, bidirectional editing, other dubious ideas)
seem to be deep in the well of outcome #2.
That is, I’ve spent hundreds of hours on background research and little prototypes, but haven’t yet synthesized anything that addresses the original motivating issue.
It’s not quite that I regret that time — I do love learning by reading — but I have a nagging sense of unease that my inner critic (fear of failure?) is silencing my generative tendencies, keeping me from the much more enjoyable (and productive!) learning by doing.
I think in these cases the success criteria have been much fuzzier: Am I trying to replace my own usage of Rust/Clojure?
Only for some subset of problems?
Or is it that I actually just need a playground to learn about language design/implementation, and it’s fine if I don’t end up using it?
Ditto for CAD: Am I trying to replace my commercial CAD tool in favor of my own?
Only for some subset of simple or particularly parametric parts?
Do I care if it’s useful for others?
Does my tool need to be legibly different from existing open-source tools?
It’s worth considering these questions, sure.
But at the end of the day, I’d much rather have done a lot than have only considered a lot.
So I’m trying to embrace my inner clueless 20-year-old and just do things — even if some turn out to be “obviously bad” in hindsight, I’ll still be coming out ahead on net =D
Conservation of scope creep
Of course, there’s only so much time to “just do things”, and there’s a balance to be had. I’m not sure how many times I’ll re-learn YAGNI (“you ain’t gonna need it”) in my career, but I was reminded of it again after writing a bunch of code with an LLM agent, then eventually coming to my senses and throwing it all out.
I wanted a Finda-style filesystem-wide fuzzy path search for Emacs.
Since I’ve built (by hand, typing the code myself!) this exact functionality before (walk filesystem to collect paths, index them by trigram, do fast fuzzy queries via bitmap intersections), I figured it’d only take a few hours to supervise an LLM to write all the code.
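That approach (walk the filesystem, index by trigram, intersect to query) can be sketched in a toy Python version; this is not Finda’s actual code, and real implementations use compact bitmaps rather than Python sets:

```python
# Toy trigram index: map each trigram to the set of path ids
# containing it, then answer a query by intersecting those sets.
from collections import defaultdict

def trigrams(s):
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

class TrigramIndex:
    def __init__(self, paths):
        self.paths = list(paths)
        self.postings = defaultdict(set)  # trigram -> set of path ids
        for i, p in enumerate(self.paths):
            for t in trigrams(p):
                self.postings[t].add(i)

    def candidates(self, query):
        # Every trigram of the query must occur in a candidate path.
        ids = None
        for t in trigrams(query):
            ids = self.postings[t] if ids is None else ids & self.postings[t]
            if not ids:
                return []
        # Queries shorter than 3 chars have no trigrams: return everything.
        return [self.paths[i] for i in (ids if ids is not None else range(len(self.paths)))]

# Hypothetical paths, purely for illustration:
idx = TrigramIndex(["/home/kevin/notes.txt", "/home/kevin/finda/src/main.rs"])
print(idx.candidates("finda"))  # -> ['/home/kevin/finda/src/main.rs']
```

Candidates still need a final verification/scoring pass (the trigram filter can admit false positives), which is where a matcher like Nucleo comes in.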
I started with a “plan mode” chat, and the LLM suggested a library, Nucleo, which turned up since I wrote Finda (10 years ago, eek!).
I read through it, found it quite well-designed and documented, and decided to use it so I’d get its smart case and Unicode normalization functionality.
(E.g., query foo matches Foo and foo, whereas query Foo won’t match foo; similarly for cafe and café.)
Finding a great library wasn’t the problem; the problem was that Nucleo also supported some extra functionality: anchors (^foo only matches at the beginning of a line).
This got me thinking about what that might mean in a corpus that consists entirely of file paths.
Anchoring to the beginning of a line isn’t useful (everything starts with /), so I decided to try and interpret the anchors with respect to the path segments.
E.g., ^foo would match /root/foobar/ but not /root/barfoo/.
But to do this efficiently, the index needs to keep track of segment boundaries so that the query can be checked against each segment quickly.
But then we also need to handle a slash occurring in an anchored query (e.g., ^foo/bar) since that wouldn’t get matched when only looking at segments individually (root, foo, bar, and baz of a matching path /root/foo/bar/baz/).
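The segment-relative interpretation can be sketched like this (a toy Python version with hypothetical helper names, not the wrapper code I actually wrote):

```python
# Toy segment-relative anchor matching: ^foo matches a path iff some
# segment starts with "foo"; for queries containing '/', check runs
# of consecutive segments joined back together.
def matches_anchor(query, path):
    assert query.startswith("^")
    needle = query[1:]
    segments = [s for s in path.split("/") if s]
    if "/" not in needle:
        return any(seg.startswith(needle) for seg in segments)
    # Anchored query spans segment boundaries: try each starting segment.
    for i in range(len(segments)):
        if "/".join(segments[i:]).startswith(needle):
            return True
    return False

print(matches_anchor("^foo", "/root/foobar/"))           # True
print(matches_anchor("^foo", "/root/barfoo/"))           # False
print(matches_anchor("^foo/bar", "/root/foo/bar/baz/"))  # True
```

An efficient index would precompute segment boundary positions instead of re-splitting per query, but the matching logic is the same.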
Working through this took several hours: first throwing around design ideas with an LLM, having it write code to wrap Nucleo’s types, then realizing its code was bloated and didn’t spark joy, so finally writing my own (smaller) wrapper.
Then, after a break, I realized:
I can’t think of a situation where I’d ever wished Finda had anchor functionality
In a corpus of paths, I can anchor by just adding / to the start or end of a query (this works for everything except anchoring to the end of a filename).
So I tossed all of the anchoring code.
I’m pretty sure I still came out ahead compared to if I’d tried to write everything myself sans LLM or discussion with others, but I’m not certain.
Perhaps there’s some kind of conservation law here: Any increases in programming speed will be offset by a corresponding increase in unnecessary features, rabbit holes, and diversions.
Structural diffing
Speaking of unnecessary diversions, let me tell you everything I’ve learned about structural diffing recently — if you have thoughts/feelings/references in this space, I’d love to hear about ’em!
When we’re talking about code, a “diff” usually means a summary of the line-by-line changes between two versions of a file.
This might be rendered as a “unified” view, where changed lines are prefixed with + or - to indicate whether they’re additions or deletions.
For example:
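An illustrative unified diff of that kind of change (the surrounding context lines here are my own invention):

```diff
 milk
-coffee
+apple
 tea
```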
We’ve removed coffee and added apple.
The same diff might also be rendered in a side-by-side view, which can be easier to read when there are more complex changes:
The problem with these line-by-line diffs is that they’re not aware of higher-level structure like functions, types, etc. — if some braces match up somehow between versions, they might not be shown at all, even if the braces “belong” to different functions.
There’s a wonderful tool, difftastic, which tries to address this by calculating diffs using treesitter-provided concrete syntax trees.
It’s a huge improvement over line-based diffs, but unfortunately it doesn’t always do a great job matching entities between versions.
Here’s the diff that motivated this entire foray:
Note that it doesn’t match up struct PendingClick; it shows it as deleted on the left and added on the right.
I haven’t dug into why difftastic fails to match here, but I do feel like it’s wrong — even if the overall diff would be longer, I’d still rather see PendingClickRequest and PendingClick matched up between both sides.
Here’s a summary of tools / references in the space:
The most “baked” and thoughtful semantic diff tool I found is, perhaps unsurprisingly, semanticdiff.com, a small German company with a free VSCode plugin and web app that shows diffs for github PRs. Unfortunately they don’t have any code libraries I can use as a foundation for the workflow I want.
this semanticdiff vs. difftastic blog post covers a lot of great details (including that difftastic doesn’t even show semantically meaningful indentation changes in python !!!)
one of the authors has great HN comments with hard-won background knowledge. E.g., they moved away from treesitter because it’s unreliable for semantics:
Context-sensitive keywords in particular were a constant source of annoyance. The grammar looks correct, but it will fail to parse because of the way the lexer works. You don’t want your tool to abort just because someone named their parameter “async”.
diffsitter
built on treesitter, has MCP server. README includes list of similar projects.
lots of github stars, but doesn’t seem particularly well-documented; I couldn’t find an explanation of how it works, but the difftastic wiki says it “runs longest-common-subsequence on the leaves of the tree”
gumtree
research / academic origin in 2014
requires Java, so no-go for my use case of a quick tool I can use via Emacs
mergiraf: treesitter-based merge-driver written in rust
very nice architecture overview; tool uses Gumtree algorithm
docs and adorable illustrations indicate this project was clearly written by a thoughtful human
semanticdiff.com author in HN comments:
> GumTree is good at returning a result quickly, but there are quite a few cases where it always returned bad matches for us, no matter how many follow-up papers with improvements we tried to implement. In the end we switched over to a dijkstra based approach that tries to minimize the cost of the mapping
Spinel — Ruby AOT Compiler
Spinel compiles Ruby source code into standalone native executables.
It performs whole-program type inference and generates optimized C code,
achieving significant speedups over CRuby.
Spinel is self-hosting: the compiler backend is written in Ruby and
compiles itself into a native binary.
How It Works
Ruby (.rb)
|
v
spinel_parse Parse with Prism (libprism), serialize AST
| (C binary, or CRuby + Prism gem as fallback)
v
AST text file
|
v
spinel_codegen Type inference + C code generation
| (self-hosted native binary)
v
C source (.c)
|
v
cc -O2 -Ilib -lm Standard C compiler + runtime header
|
v
Native binary Standalone, no runtime dependencies
Quick Start
# Fetch libprism sources (from the prism gem on rubygems.org):
make deps
# Build everything:
make
# Write a Ruby program:
cat > hello.rb <<'RUBY'
def fib(n)
if n < 2
n
else
fib(n - 1) + fib(n - 2)
end
end
puts fib(34)
RUBY
# Compile and run:
./spinel hello.rb
./hello # prints 5702887 (instantly)
Options
./spinel app.rb # compiles to ./app
./spinel app.rb -o myapp # compiles to ./myapp
./spinel app.rb -c # generates app.c only
./spinel app.rb -S # prints C to stdout
Self-Hosting
Spinel compiles its own backend. The bootstrap chain:
CRuby + spinel_parse.rb → AST
CRuby + spinel_codegen.rb → gen1.c → bin1
bin1 + AST → gen2.c → bin2
bin2 + AST → gen3.c
gen2.c == gen3.c (bootstrap loop closed)
Benchmarks
74 tests pass. 55 benchmarks pass.
Geometric mean: ~11.6x faster than miniruby (Ruby 4.1.0dev) across
the 28 benchmarks below. Baseline is the latest CRuby miniruby build
(without bundled gems), which is considerably faster than the system
ruby (3.2.3); Spinel’s advantage is correspondingly smaller but still
substantial on computation-heavy workloads.
Computation
Data Structures & GC
Real-World Programs
Supported Ruby Features
Core: Classes, inheritance, super, include (mixin), attr_accessor,
Struct.new, alias, module constants, open classes for built-in types.
Control Flow: if/elsif/else, unless, case/when,
case/in (pattern matching), while, until, loop, for..in
(range and array), break, next, return, catch/throw,
&. (safe navigation).
Blocks: yield, block_given?, &block, proc {}, Proc.new,
lambda -> x { }, method(:name). Block methods: each,
each_with_index, map, select, reject, reduce, sort_by,
any?, all?, none?, times, upto, downto.
Exceptions: begin/rescue/ensure/retry, raise,
custom exception classes.
Types: Integer, Float, String (immutable + mutable), Array, Hash,
Range, Time, StringIO, File, Regexp, Bigint (auto-promoted), Fiber.
Polymorphic values via tagged unions. Nullable object types (T?)
for self-referential data structures (linked lists, trees).
Global Variables: $name compiled to static C variables with
type-mismatch detection at compile time.
Strings: << automatically promotes to mutable strings (sp_String)
for O(n) in-place append. +, interpolation, tr, ljust/rjust/center,
and all standard methods work on both. Character comparisons like
s[i] == "c" are optimized to direct char array access (zero allocation).
Chained concatenation (a + b + c + d) collapses to a single malloc
via sp_str_concat4 / sp_str_concat_arr — N-1 fewer allocations.
Loop-local str.split(sep) reuses the same sp_StrArray across
iterations (csv_process: 4 M allocations eliminated).
Regexp: Built-in NFA regexp engine (no external dependency).
=~, $1-$9, match?, gsub(/re/, str), sub(/re/, str),
scan(/re/), split(/re/).
Bigint: Arbitrary precision integers via mruby-bigint. Auto-promoted
from loop multiplication patterns (e.g. q = q * k). Linked as static
library — only included when used.
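For a feel of the supported subset, here is a small program touching several features from the lists above (case/in pattern matching, blocks, exceptions). It is plain Ruby; that Spinel compiles it unchanged is an assumption:

```ruby
# Exercises features from the lists above: case/in pattern matching,
# blocks (map/reduce), and raise/rescue.
def area(spec)
  case spec
  in [:square, s] then s * s
  in [:rect, w, h] then w * h
  else raise ArgumentError, "unknown shape"
  end
end

shapes = [[:square, 2], [:rect, 3, 4]]
total = shapes.map { |s| area(s) }.reduce(0) { |acc, a| acc + a }
puts total  # prints 16

begin
  area([:triangle, 1, 2, 3])
rescue ArgumentError => e
  puts "caught: #{e.message}"  # prints caught: unknown shape
end
```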
Ubuntu 26.04 (“Resolute Raccoon”) LTS has been released
on schedule.
This release brings a significant uplift in security, performance,
and usability across desktop, server, and cloud environments. Ubuntu
26.04 LTS introduces TPM-backed full-disk encryption, expanded use of
memory-safe components, improved application permission controls, and
Livepatch support for Arm systems, helping reduce downtime and
strengthen system resilience. […]
The newest Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon,
Ubuntu Kylin, Ubuntu Studio, Ubuntu Unity, and Xubuntu are also being
released today. For more details on these, read their individual release
notes under the Official flavors section:
https://documentation.ubuntu.com/release-notes/26.04/#official-flavors
Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu
Server, Ubuntu Cloud, Ubuntu WSL, and Ubuntu Core. All the remaining flavors
will be supported for 3 years.
See the release notes for a list of changes, system requirements, and more.