10 interesting stories served every morning and every evening.
The DeepSeek API uses an API format compatible with OpenAI/Anthropic. By modifying the configuration, you can use the OpenAI/Anthropic SDK or software compatible with the OpenAI/Anthropic API to access the DeepSeek API.
* The model names deepseek-chat and deepseek-reasoner will be deprecated on 2026/07/24. For compatibility, they correspond to the non-thinking mode and thinking mode of deepseek-v4-flash, respectively.
Invoke The Chat API
Once you have obtained an API key, you can access the DeepSeek model using the following example scripts in the OpenAI API format. The example below is non-streaming; set the stream parameter to true to receive a streaming response.
For examples using the Anthropic API format, please refer to Anthropic API.
curl
```shell
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" \
  -d '{
        "model": "deepseek-v4-pro",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Hello!"}
        ],
        "thinking": {"type": "enabled"},
        "reasoning_effort": "high",
        "stream": false
      }'
```
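The same request can be sketched in Python by assembling the request body as a plain dict. This is a hedged sketch: the field names ("thinking", "reasoning_effort") are copied from the curl example above, and you should verify them against the official API reference before relying on them.

```python
# Sketch of the request body from the curl example, assembled in Python.
# Field names mirror the curl example above; confirm them against the
# official DeepSeek API reference before use.
import json


def build_chat_request(user_message: str, stream: bool = False) -> dict:
    """Assemble a DeepSeek chat-completion request body."""
    return {
        "model": "deepseek-v4-pro",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "thinking": {"type": "enabled"},
        "reasoning_effort": "high",
        "stream": stream,
    }


# The resulting dict can be sent as the JSON body of a POST to
# https://api.deepseek.com/chat/completions with any HTTP client, or
# passed through an OpenAI-compatible SDK (non-standard fields such as
# "thinking" would typically go via the SDK's extra_body mechanism).
print(json.dumps(build_chat_request("Hello!"), indent=2))
```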
It took just a few months of President Donald Trump’s second term for Palantir employees to question their company’s commitments to civil liberties. Last fall, Palantir seemed to become the technological backbone of Trump’s immigration enforcement machinery, providing software identifying, tracking, and helping deport immigrants on behalf of the Department of Homeland Security, when current and former employees started ringing the alarm.
Around that time, two former employees reconnected by phone. Right as they picked up the call, one of them asked, “Are you tracking Palantir’s descent into fascism?”
“That was their greeting,” the other former employee says. “There’s this feeling not of ‘Oh, this is unpopular and hard,’ but ‘This feels wrong.’”
Palantir was founded—with initial venture capital investment from the CIA—at a moment of national consensus following the September 11, 2001, attacks, when many saw fighting terrorism abroad as the most critical mission facing the US. The company, which was cofounded by tech billionaire Peter Thiel, sells software that acts as a high-powered data aggregation and analysis tool powering everything from private businesses to the US military’s targeting systems.
For the past 20 years, employees could accept the intense external criticism and awkward conversations with family and friends about working for a company named after J. R. R. Tolkien’s corrupting all-seeing orb. But a year into Trump’s second term, as Palantir deepens its relationship with an administration that many workers fear is wreaking havoc at home, employees are finally raising these concerns internally, as the US’s war on immigrants, the war in Iran, and even company-released manifestos have forced them to rethink the role they play in it all.
“We hire the best and brightest talent to help defend America and its allies and to build and deploy our software to help governments and businesses around the world. Palantir is no monolith of belief, nor should we be,” a Palantir spokesperson said in a statement. “We all pride ourselves on a culture of fierce internal dialogue and even disagreement over the complex areas we work on. That has been true from our founding and remains true today.”
“The broad story of Palantir as told to itself and to employees was that coming out of 9/11 we knew that there was going to be this big push for safety, and we were worried that that safety might infringe on civil liberties,” one former employee tells WIRED. “And now the threat’s coming from within. I think there’s a bit of an identity crisis and a bit of a challenge. We were supposed to be the ones who were preventing a lot of these abuses. Now we’re not preventing them. We seem to be enabling them.”
Palantir has always had a secretive reputation, forbidding employees from speaking to the press and requiring alumni to sign non-disparagement agreements. But throughout the company’s history, management has always at least appeared to be open to engagement and internal criticism, multiple employees say. Over the last year, however, much of that feedback has been met by philosophical soliloquies and redirection. “It’s never been really that people are afraid of speaking up against Karp. It’s more a question of what it would do, if anything,” one current employee tells WIRED.
While internal tensions within Palantir have grown over the last year, they reached a boiling point in January after the killing of Alex Pretti, a nurse shot by federal agents during protests against Immigration and Customs Enforcement (ICE) in Minneapolis. Employees from across the company commented in a Slack thread dedicated to the news, demanding more information from management and CEO Alex Karp about the company’s relationship with ICE.
“Our involvement with ice has been internally swept under the rug under Trump2 too much,” one person wrote in a Slack message WIRED reported at the time. “We need an understanding of our involvement here.”
Around this time, Palantir started wiping Slack conversations after seven days in at least one channel where most of the internal debate takes place, #palantir-in-the-news. Because the decision wasn’t formally announced before the policy rolled out, one worker who noticed the deletions asked in the channel why the company was removing “relevant internal discourse on current events.”
A member of Palantir’s cybersecurity team responded, writing that the decision was made in response to leaks.
This period led Palantir management to release an updated wiki, or a collection of blog posts explaining the ICE contract, where the company defended its work with Homeland Security. Management wrote that the technology the company provides “is making a difference in mitigating risks while enabling targeted outcomes.”
Palantir management ran defense by holding a handful of AMA (ask me anything) forums across the company with leadership like chief technology officer Shyam Sankar and members of its privacy and civil liberties (PCL) teams.
At least one of these AMAs was organized independently of PCL leadership by two team leads, including one who worked directly on the ICE contract for a period of time. “This was very rogue,” a PCL employee who worked on the ICE contract said in a February AMA, a recording of which was obtained by WIRED. “Courtney [Bowman, head of the privacy and civil liberties team] doesn’t know that I’m spending three hours this week talking to IMPLs [Palantir terminology for its client-facing product teams], but I think this is the only real way to start going in the right direction.”
Throughout the lengthy call, employees working on a variety of Palantir’s defense projects posed hard questions. Could ICE agents delete audit logs in Palantir’s software? Could agents create harmful workflows on their own without the company’s help? What is the most malicious thing that could come out of this work?
Answering these questions, the PCL employee who worked on the ICE contract said that “a sufficiently malicious customer is, like, basically impossible to prevent at the moment” and could only be controlled through “auditing to prove what happened” and legal action after the fact if the customer breached the company’s contract.
At one point during the call, one of the employees tried to level with the group, explaining that Palantir’s work with ICE was a priority for Karp and something that likely wouldn’t change any time soon.
“Karp really wants to do this and continuously wants this,” they said. “We’re largely at the role of trying to give him suggestions and trying to redirect him, but it was largely unsuccessful and we seem to be on a very sharp path of continuing to expand this workflow.”
Around the time of these forums, Karp sat down for a prerecorded interview with Bowman, seemingly to discuss Palantir’s contracts with ICE, but refused to broach the topic directly. Instead, Karp suggested that employees interested in the work sign nondisclosure agreements before receiving more detailed information.
Then came the deadly February 28 missile strike on an Iranian elementary school on the first full day of the Trump administration and Israel’s war in Iran. More than 120 children were killed when a Tomahawk missile struck the school; the US is the only known country in the conflict to use that type of missile. The strike kicked off a series of investigations that concluded that the US was responsible and that surveillance tools like Palantir’s Maven system had been used during that day’s strikes. For a company full of employees already reeling over its work with ICE, possible involvement in the death of children was a breaking point.
“I guess the root of what I’m asking is … were we involved, and are doing anything to stop a repeat if we were,” one employee asked in the Palantir news Slack channel. Some employees posed similar questions in the thread, while others criticized them for discussing what could be considered classified information in a Slack channel open to the entire company. The investigation is ongoing.
The Palantir spokesperson said the company was “proud” to support the US military “across Democratic and Republican administrations.”
In March, Karp gave an interview to CNBC claiming that AI could undermine the power of “humanities-trained—largely Democratic—voters” and increase the power of working-class male voters. Critics called the statements concerning, and employees reacted internally as well: “Is it true that AI disruption is going to disproportionately negatively affect women and people who vote Democrat? and if it is, why are we cool with that?” one worker asked on Slack in a channel dedicated to news about Palantir.
Palantir’s leadership incensed workers yet again this week after the company posted a Saturday afternoon manifesto reducing Karp’s recent book, The Technological Republic, to 22 points. The post—which includes many of Karp’s long-standing beliefs on how Silicon Valley could better serve US national interests—goes as far as suggesting that the US should consider reinstating the draft. Critics called the manifesto fascist.
Internally, the post alarmed some workers who huddled in a Slack thread on Monday morning, questioning leadership over its decision to post it in the first place.
“I’m curious why this had to be posted. Especially on the company account. On the practical level every time stuff like that gets posted it gets harder for us to sell the software outside of the US (for sure in the current political climate), and I doubt we need this in the US?” wrote one frustrated employee. The message received more than 50 “+1” emojis.
“Wether [sic] we acknowledge it or not, this impacts us all personally,” another worker wrote on Monday. “I’ve already had multiple friends reach out and ask what the hell did we post.” This message received nearly two dozen “+1” emoji reactions.
“Yeah it turns out that short-form summaries of the book’s long-form ideas are easy to misrepresent. It’s like we taped a ‘kick me’ sign on our own backs,” a third worker wrote. “I hope no one who decided to put this out is surprised that we are, in fact, getting kicked.”
These conversations involving shame and uncertainty from workers have seemingly popped up in internal channels whenever Palantir has been in the news over the last year. “I think the only thing not different is a lot of folks are still incredibly wary about leaks and talking to the press,” one current employee tells WIRED, describing how the internal company culture has evolved over the last year.
All of this dissent doesn’t seem to bother Karp, who recently told workers that the company is “behind the curve internally” when it comes to popularity. Here, he’s been consistent; in March 2024 Karp told a CNBC reporter that “if you have a position that does not cost you ever to lose an employee, it’s not a position.”
But for employees, the culture shift feels intentional. “I don’t want to assert that I have knowledge of what’s going on in their internal mind,” one former worker tells WIRED. “But maybe it’s gotten to a place where encouraging independent thought and questioning leads to some bad conclusions.”
Over the past month, we’ve been looking into reports that Claude’s responses have worsened for some users. We’ve traced these reports to three separate changes that affected Claude Code, the Claude Agent SDK, and Claude Cowork. The API was not impacted.
All three issues have now been resolved as of April 20 (v2.1.116).
In this post, we explain what we found, what we fixed, and what we’ll do differently to ensure similar issues are much less likely to happen again.
We take reports about degradation very seriously. We never intentionally degrade our models, and we were able to immediately confirm that our API and inference layer were unaffected.
After investigation, we identified three different issues:
On March 4, we changed Claude Code’s default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode. This was the wrong tradeoff. We reverted this change on April 7 after users told us they’d prefer to default to higher intelligence and opt into lower effort for simple tasks. This impacted Sonnet 4.6 and Opus 4.6.
On March 26, we shipped a change to clear Claude’s older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6.
On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7.
Because each change affected a different slice of traffic on a different schedule, the aggregate effect looked like broad, inconsistent degradation. While we began investigating reports in early March, they were challenging to distinguish from normal variation in user feedback at first, and neither our internal usage nor evals initially reproduced the issues identified.
This isn’t the experience users should expect from Claude Code. As of April 23, we’re resetting usage limits for all subscribers.
A change to Claude Code’s default reasoning effort
When we released Opus 4.6 in Claude Code in February, we set the default reasoning effort to high.
Soon after, we received user feedback that Claude Opus 4.6 in high effort mode would occasionally think for too long, causing the UI to appear frozen and leading to disproportionate latency and token usage for those users.
In general, the longer the model thinks, the better the output. Effort levels are how Claude Code lets users set that tradeoff—more thinking versus lower latency and fewer usage limit hits. As we calibrate effort levels for our models, we take this tradeoff into account in order to pick points along the test-time-compute curve that give people the best range of options. In the product layer, we then choose which point along this curve we set as our default, and that is the value we send to the Messages API as the effort parameter; we then make the other options available via /effort.
In our internal evals and testing, medium effort achieved slightly lower intelligence with significantly less latency for the majority of tasks. It also didn’t suffer from the same issues with occasional very long tail latencies for thinking, and it helped maximize users’ usage limits. As a result, we rolled out a change making medium the default effort, and explained the rationale via in-product dialog.
Soon after rolling out, users began reporting that Claude Code felt less intelligent. We shipped a number of design iterations to make the current effort setting clearer in order to alert people they could change the default (notices on startup, an inline effort selector, and bringing back ultrathink), but most users retained the medium effort default.
After hearing feedback from more customers, we reversed this decision on April 7. All users now default to xhigh effort for Opus 4.7, and high effort for all other models.
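The new defaults amount to a simple per-model policy. A minimal sketch, assuming illustrative model identifier strings (not Claude Code's actual internal names):

```python
# Hypothetical sketch of the default-effort policy described above:
# Opus 4.7 defaults to xhigh, every other model to high. The model ID
# strings are illustrative stand-ins, not real identifiers.
def default_effort(model: str) -> str:
    """Return the default reasoning effort for a given model."""
    if model.startswith("claude-opus-4-7"):
        return "xhigh"
    return "high"
```

Users who prefer lower latency over maximum intelligence can still override the default per session (via /effort in the product).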
A caching optimization that dropped prior reasoning
When Claude reasons through a task, that reasoning is normally kept in the conversation history so that on every subsequent turn, Claude can see why it made the edits and tool calls it did.
On March 26, we shipped what was meant to be an efficiency improvement to this feature. We use prompt caching to make back-to-back API calls cheaper and faster for users. Claude writes the input tokens to the cache when it makes an API request, then after a period of inactivity the prompt is evicted from cache, making room for other prompts. Cache utilization is something we manage carefully.
The design should have been simple: if a session has been idle for more than an hour, we could reduce users’ cost of resuming that session by clearing old thinking sections. Since the request would be a cache miss anyway, we could prune unnecessary messages from the request to reduce the number of uncached tokens sent to the API. We’d then resume sending full reasoning history. To do this we used the clear_thinking_20251015 API header along with keep:1.
The implementation had a bug. Instead of clearing thinking history once, it cleared it on every turn for the rest of the session. After a session crossed the idle threshold once, each request for the rest of that process told the API to keep only the most recent block of reasoning and discard everything before it. This compounded: if you sent a follow-up message while Claude was in the middle of a tool use, that started a new turn under the broken flag, so even the reasoning from the current turn was dropped. Claude would continue executing, but increasingly without memory of why it had chosen to do what it was doing. This surfaced as the forgetfulness, repetition, and odd tool choices people reported.
Because this would continuously drop thinking blocks from subsequent requests, those requests also resulted in cache misses. We believe this is what drove the separate reports of usage limits draining faster than expected.
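The intended versus buggy behavior can be illustrated with a small reconstruction. This is not Anthropic's actual code; the session class and flag names are hypothetical stand-ins for the clear_thinking_20251015 header with keep:1 described above:

```python
# Illustrative reconstruction (not Anthropic's actual implementation) of
# the intended behavior: prune old thinking exactly once when resuming
# an idle session. The bug left the prune flag set for every subsequent
# turn; the reset below is what makes pruning a one-time event.
IDLE_THRESHOLD_S = 3600  # sessions idle longer than an hour are a cache miss anyway


class Session:
    def __init__(self) -> None:
        self.clear_thinking_next_request = False

    def on_resume(self, idle_seconds: float) -> None:
        # Mark exactly one upcoming request for thinking-history pruning.
        if idle_seconds > IDLE_THRESHOLD_S:
            self.clear_thinking_next_request = True

    def build_request_flags(self) -> dict:
        """Return the pruning flags to attach to the next API request."""
        flags = {}
        if self.clear_thinking_next_request:
            # Ask the API to keep only the most recent thinking block.
            flags["clear_thinking"] = {"keep": 1}
            # The fix: reset so pruning happens once, not every turn.
            self.clear_thinking_next_request = False
        return flags
```

In the buggy version, the flag was never reset, so every later request discarded all but the most recent reasoning block, producing both the forgetfulness users saw and the repeated cache misses.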
Two unrelated experiments made it challenging for us to reproduce the issue at first: an internal-only server-side experiment related to message queuing; and an orthogonal change in how we display thinking suppressed this bug in most CLI sessions, so we didn’t catch it even when testing external builds.
This bug was at the intersection of Claude Code’s context management, the Anthropic API, and extended thinking. The changes it introduced made it past multiple human and automated code reviews, as well as unit tests, end-to-end tests, automated verification, and dogfooding. Combined with this only happening in a corner case (stale sessions) and the difficulty of reproducing the issue, it took us over a week to discover and confirm the root cause.
As part of the investigation, we back-tested Code Review against the offending pull requests using Opus 4.7. When provided the code repositories necessary to gather complete context, Opus 4.7 found the bug, while Opus 4.6 didn’t. To prevent this from happening again, we are now landing support for additional repositories as context for code reviews.
We fixed this bug on April 10 in v2.1.101.
A system prompt change to reduce verbosity
Our latest model, Claude Opus 4.7, has a notable behavioral quirk relative to its predecessor: as we wrote about at launch, it tends to be quite verbose. This makes it smarter on hard problems, but it also produces more output tokens.
A few weeks before we released Opus 4.7, we started tuning Claude Code in preparation. Each model behaves slightly differently, and we spend time before each release optimizing the harness and product for it.
We have a number of tools to reduce verbosity: model training, prompting, and improving thinking UX in the product. Ultimately we used all of these, but one addition to the system prompt caused an outsized effect on intelligence in Claude Code:
“Length limits: keep text between tool calls to ≤25 words. Keep final responses to ≤100 words unless the task requires more detail.”
After multiple weeks of internal testing and no regressions in the set of evaluations we ran, we felt confident about the change and shipped it alongside Opus 4.7 on April 16.
As part of this investigation, we ran more ablations (removing lines from the system prompt to understand the impact of each line) using a broader set of evaluations. One of these evaluations showed a 3% drop for both Opus 4.6 and 4.7. We immediately reverted the prompt as part of the April 20 release.
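A line-ablation study of this kind can be sketched as follows. The run_eval callable is a hypothetical placeholder for a real evaluation harness; the function simply scores the prompt with each line removed and reports the change versus baseline:

```python
# Hedged sketch of a system-prompt line ablation: remove each line in
# turn, re-run an evaluation, and record the score delta per line.
# `run_eval` is a placeholder for a real eval harness that returns a
# scalar score for a given prompt.
def ablate_prompt(lines: list[str], run_eval) -> dict[int, float]:
    """Return {line_index: score_delta_when_removed} for a prompt."""
    baseline = run_eval("\n".join(lines))
    deltas = {}
    for i in range(len(lines)):
        ablated = lines[:i] + lines[i + 1:]
        deltas[i] = run_eval("\n".join(ablated)) - baseline
    return deltas
```

A large positive delta for a line means the eval improves when that line is removed, which is exactly the signal that flagged the length-limit instruction here.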
Going forward
We are going to do several things differently to avoid these issues: we’ll ensure that a larger share of internal staff use the exact public build of Claude Code (as opposed to the version we use to test new features); and we’ll make improvements to our Code Review tool that we use internally, and ship this improved version to customers.
We’re also adding tighter controls on system prompt changes. We will run a broad suite of per-model evals for every system prompt change to Claude Code, continuing ablations to understand the impact of each line, and we have built new tooling to make prompt changes easier to review and audit. We’ve additionally added guidance to our CLAUDE.md to ensure model-specific changes are gated to the specific model they’re targeting. For any change that could trade off against intelligence, we’ll add soak periods, a broader eval suite, and gradual rollouts so we catch issues earlier.
We recently created @ClaudeDevs on X to give us the room to explain product decisions and the reasoning behind them in depth. We’ll share the same updates in centralized threads on GitHub.
Finally, we’d like to thank our users: the people who used the /feedback command to share their issues with us (or who posted specific, reproducible examples online) are the ones who ultimately allowed us to identify and fix these problems. Today we are resetting usage limits for all subscribers.
We’re immensely grateful for your feedback and for your patience.
In Brief
Posted:
11:08 AM PDT · April 23, 2026
Meta is planning to cut 10% of its workforce, amounting to 8,000 employees, according to a report from Bloomberg. Meta also will not hire for 6,000 roles that are currently open.
According to an internal memo sent to employees Thursday and viewed by Bloomberg, Meta told staff that the cuts will begin on May 20. Reuters had earlier reported on Meta’s plans for sweeping layoffs.
TechCrunch has reached out to Meta for comment.
“We’re doing this as part of our continued effort to run the company more efficiently and to allow us to offset the other investments we’re making,” chief people officer Janelle Gale told employees, according to the memo. “This is not an easy tradeoff and it will mean letting go of people who have made meaningful contributions to Meta during their time here.”
Meta spent tens of billions on its metaverse efforts, which largely failed. The company has also had to make major investments in its AI efforts in order to keep up with competitors in the space — earlier this month, it debuted a completely overhauled AI product called Muse Spark.
A US special forces soldier involved in the capture of Venezuelan President Nicolás Maduro was arrested and charged for allegedly betting on that operation, netting him $400,000 in profits.
According to an indictment unsealed Thursday, Master Sgt. Gannon Ken Van Dyke opened an account in late December on Polymarket, one of the best-known prediction markets. He wagered about $32,000 that Maduro would be “out” by January. The bet was a long-shot.
But Van Dyke was involved in the planning and execution of Operation Absolute Resolve, prosecutors allege, and had access to classified information before he placed the bet. His winnings, though anonymous, caught the attention of law enforcement almost immediately.
Van Dyke, an active duty soldier stationed at Fort Bragg, faces five criminal charges for stealing and misusing confidential government information, theft and fraud. He will make his first court appearance in North Carolina. No attorney has been listed for him on the court docket.
He allegedly made 13 bets from December 27 to January 2, the last being hours before the overnight capture.
A master sergeant in the Army is a senior noncommissioned officer, considered a key tactical leader and technical expert and serving as the principal NCO typically at the Army battalion level. Senior NCOs are often looked to for setting and upholding the standard for more junior soldiers in the unit.
“Those entrusted to safeguard our nation’s secrets have a duty to protect them and our armed service members, and not to use that information for personal financial gain,” said Jay Clayton, US attorney for the Southern District of New York.
Van Dyke was photographed just after the operation — hours after he placed his final bet — on “what appears to be the deck of a ship at sea, at sunrise wearing U.S. military fatigues, and carrying a rifle, standing alongside three other individuals wearing U.S. military fatigues,” court documents say.
Van Dyke profited more than $400,000, prosecutors say. He then allegedly moved those winnings to a foreign cryptocurrency vault before he deposited them in an online brokerage account in what prosecutors called an attempt to conceal their origin.
The Commodity Futures Trading Commission filed a related complaint against Van Dyke on Thursday, seeking restitution, disgorgement and civil monetary penalties.
CNN reported last month that federal prosecutors were investigating the Maduro trade, according to a person familiar with the matter. The chiefs of the securities and commodity fraud unit at the US attorney’s office in Manhattan met with representatives at Polymarket last month.
After the bets were placed, the US military launched a covert operation that extradited Maduro from the presidential palace in Caracas in an overnight capture while coming under heavy fire. Maduro was transported to New York to face federal drug-trafficking related charges. He has pleaded not guilty.
Polymarket in a post on X said, “When we identified a user trading on classified government information, we referred the matter to the DOJ & cooperated with their investigation. Insider trading has no place on Polymarket. Today’s arrest is proof the system works.”
ABC News first reported Thursday’s arrest.
Trading on prediction markets has exploded the past year, with users now spending a few billion dollars each week on such sites.
Lawmakers in Congress have introduced more than a dozen new bills this year to further regulate prediction markets. Some of the bills, which gained bipartisan support, would stiffen penalties against government officials who engage in insider trading.
Trump told reporters Thursday he is concerned about the growing trend of betting on geopolitical events. Asked about the charges against the US soldier, the president said he was not familiar with the specifics of the incident but compared it to baseball’s all-time hit leader Pete Rose.
“That’s like Pete Rose betting on his own team,” Trump said, referring to the late baseball player who was banned from baseball for gambling.
Pressed on whether he is concerned about betting tied to the war with Iran, Trump said it’s a global issue.
“Well I think that the whole world, unfortunately, has become somewhat of a casino,” Trump said, adding that such betting is happening “all over the world, and every place they’re doing these betting things.”
“Now, I think that I’m not happy with it,” he concluded.
The Trump administration approved Polymarket last year to start offering trades for American customers, but its US-facing site isn’t fully operational yet. The Maduro-related trades occurred on Polymarket’s highly popular international site.
That site operates out of the reach of US regulations — which is how it’s able to offer markets related to war, which is illegal under federal law. But experts say Americans can easily access the offshore site with a virtual private network, or VPN.
There is a debate in the prediction market industry over the role of insiders in prediction markets. Some experts see these markets as a vehicle for information to flow more freely from insiders to the general public.
Asked about insider trading risks, Polymarket’s CEO told Axios in November it was “super cool” that his platform “creates this financial incentive for people to go and divulge the information to the market,” including insiders.
Polymarket rolled out new rules in March to “clarify three core categories of prohibited insider trading conduct.”
They banned trades based on information that users were legally required to keep confidential, and trades based on tips from someone with the same obligation. They also said people in “a position of authority or influence” to affect the outcome of an event cannot participate in any related markets.
This story has been updated with additional details.
CNN’s Marshall Cohen, Haley Britzky and Alejandria Jaramillo contributed to this report.
France Titres, the French government agency for issuing and managing administrative documents, has disclosed a data breach after a threat actor claimed the attack and the theft of citizen data.
Also known as Agence nationale des titres sécurisés (ANTS), the administrative body operates under the French Ministry of the Interior, serving as the managing authority for official identity and registration documents in France. This includes driver’s licenses, national ID cards, passports, and immigration documents.
According to an announcement the agency published yesterday, the attack occurred last week, and while the investigation is still ongoing, several data types for an undisclosed number of individuals may have been exposed.
“On Wednesday, April 15, 2026, the National Agency for Secure Documents (ANTS) detected a security incident that may involve the disclosure of data from individual and professional accounts on the ants.gouv.fr portal,” reads ANTS’s announcement.
The types of data that may have been exposed are:
Login ID
Full name
Email address
Date of birth
Unique account identifier
Postal address (for some)
Place of birth (for some)
Phone number (for some)
ANTS stated that it is currently in the process of notifying those identified as impacted.
The agency noted that the exposed information does not allow unauthorized access to its electronic portals. However, the same data can be used in phishing and social engineering attacks.
“No action is required from users. However, they are advised to remain highly vigilant regarding any suspicious or unusual messages they may receive (SMS, phone calls, emails, etc.) that appear to come from ANTS,” the agency warned.
ANTS has notified the data protection authority (CNIL), the Paris Public Prosecutor, and has also involved the national cybersecurity agency (ANSSI) in the response effort. The agency warned that the sale or dissemination of the data is illegal.
19 million records claimed stolen
On April 16, a threat actor using the moniker ‘breach3d’ claimed the attack on ANTS on hacker forums, alleging to hold up to 19 million records.
The threat actor claims that the stolen data contains full names, contact details, birth data, home addresses, account metadata, and gender and civil status.
The data has been offered for sale for an undisclosed amount, so it has not been broadly leaked yet.
ANTS says that users do not need to take any action but recommends exercising “extreme caution” about suspicious or unusual communications over SMS, voice, and email that appear to come from the agency.
BleepingComputer has contacted ANTS to ask about the threat actor’s allegations, but we have not received a response as of publishing.
Update 4/24 - ANTS published an update on the incident where the agency confirmed that 11.7 million accounts were impacted.
Ubuntu 26.04 (“Resolute Raccoon”) LTS has been released on schedule.
This release brings a significant uplift in security, performance,
and usability across desktop, server, and cloud environments. Ubuntu
26.04 LTS introduces TPM-backed full-disk encryption, expanded use of
memory-safe components, improved application permission controls, and
Livepatch support for Arm systems, helping reduce downtime and
strengthen system resilience. […]
The newest Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon,
Ubuntu Kylin, Ubuntu Studio, Ubuntu Unity, and Xubuntu are also being
released today. For more details on these, read their individual release
notes under the Official flavors section:
https://documentation.ubuntu.com/release-notes/26.04/#official-flavors
Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu
Server, Ubuntu Cloud, Ubuntu WSL, and Ubuntu Core. All the remaining flavors
will be supported for 3 years.
See the release notes for a list of changes, system requirements, and more.