10 interesting stories served every morning and every evening.
Today, we’re launching Claude Design, a new Anthropic Labs product that lets you collaborate with Claude to create polished visual work like designs, prototypes, slides, one-pagers, and more.
Claude Design is powered by our most capable vision model, Claude Opus 4.7, and is available in research preview for Claude Pro, Max, Team, and Enterprise subscribers. We’re rolling out to users gradually throughout the day.
Even experienced designers have to ration exploration—there’s rarely time to prototype a dozen directions, so you limit yourself to a few. And for founders, product managers, and marketers with an idea but not a design background, creating and sharing those ideas can be daunting.
Claude Design gives designers room to explore widely and everyone else a way to produce visual work. Describe what you need and Claude builds a first version. From there, you refine through conversation, inline comments, direct edits, or custom sliders (made by Claude) until it’s right. When given access, Claude can also apply your team’s design system to every project automatically, so the output is consistent with the rest of your company’s designs.
Teams have been using Claude Design for:
* Realistic prototypes: Designers can turn static mockups into easily shareable interactive prototypes to gather feedback and user-test, without code review or PRs.
* Product wireframes and mockups: Product managers can sketch out feature flows and hand them off to Claude Code for implementation, or share them with designers to refine further.
* Design explorations: Designers can quickly create a wide range of directions to explore.
* Pitch decks and presentations: Founders and account executives can go from a rough outline to a complete, on-brand deck in minutes, and then export as a PPTX or send to Canva.
* Marketing collateral: Marketers can create landing pages, social media assets, and campaign visuals, then loop in designers to polish.
* Frontier design: Anyone can build code-powered prototypes with voice, video, shaders, 3D and built-in AI.
Your brand, built in. During onboarding, Claude builds a design system for your team by reading your codebase and design files. Every project after that uses your colors, typography, and components automatically. You can refine the system over time, and teams can maintain more than one.
Import from anywhere. Start from a text prompt, upload images and documents (DOCX, PPTX, XLSX), or point Claude at your codebase. You can also use the web capture tool to grab elements directly from your website so prototypes look like the real product.
Refine with fine-grained controls. Comment inline on specific elements, edit text directly, or use adjustment knobs to tweak spacing, color, and layout live. Then ask Claude to apply your changes across the full design.
Collaborate. Designs have organization-scoped sharing. You can keep a document private, share it so anyone in your organization with the link can view it, or grant edit access so colleagues can modify the design and chat with Claude together in a group conversation.
Export anywhere. Share designs as an internal URL within your organization, save as a folder, or export to Canva, PDF, PPTX, or standalone HTML files.
Handoff to Claude Code. When a design is ready to build, Claude packages everything into a handoff bundle that you can pass to Claude Code with a single instruction.
Over the coming weeks, we’ll make it easier to build integrations with Claude Design, so you can connect it to more of the tools your team already uses.
Claude Design is available for Claude Pro, Max, Team, and Enterprise subscribers. Access is included with your plan and uses your subscription limits, with the option to continue beyond those limits by enabling extra usage.
For Enterprise organizations, Claude Design is off by default. Admins can enable it in Organization settings.
...
Read the original on www.anthropic.com »
The last question was asked for the first time, half in jest, on May 21, 2061, at a time when humanity first stepped into the light. The question came about as a result of a five dollar bet over highballs, and it happened this way:
Alexander Adell and Bertram Lupov were two of the faithful attendants of Multivac. As well as any human beings could, they knew what lay behind the cold, clicking, flashing face — miles and miles of face — of that giant computer. They had at least a vague notion of the general plan of relays and circuits that had long since grown past the point where any single human could possibly have a firm grasp of the whole.
Multivac was self-adjusting and self-correcting. It had to be, for nothing human could adjust and correct it quickly enough or even adequately enough — so Adell and Lupov attended the monstrous giant only lightly and superficially, yet as well as any men could. They fed it data, adjusted questions to its needs and translated the answers that were issued. Certainly they, and all others like them, were fully entitled to share in the glory that was Multivac’s.
For decades, Multivac had helped design the ships and plot the trajectories that enabled man to reach the Moon, Mars, and Venus, but past that, Earth’s poor resources could not support the ships. Too much energy was needed for the long trips. Earth exploited its coal and uranium with increasing efficiency, but there was only so much of both.
But slowly Multivac learned enough to answer deeper questions more fundamentally, and on May 14, 2061, what had been theory, became fact.
The energy of the sun was stored, converted, and utilized directly on a planet-wide scale. All Earth turned off its burning coal, its fissioning uranium, and flipped the switch that connected all of it to a small station, one mile in diameter, circling the Earth at half the distance of the Moon. All Earth ran by invisible beams of sunpower.
Seven days had not sufficed to dim the glory of it and Adell and Lupov finally managed to escape from the public function, and to meet in quiet where no one would think of looking for them, in the deserted underground chambers, where portions of the mighty buried body of Multivac showed. Unattended, idling, sorting data with contented lazy clickings, Multivac, too, had earned its vacation and the boys appreciated that. They had no intention, originally, of disturbing it.
They had brought a bottle with them, and their only concern at the moment was to relax in the company of each other and the bottle.
“It’s amazing when you think of it,” said Adell. His broad face had lines of weariness in it, and he stirred his drink slowly with a glass rod, watching the cubes of ice slur clumsily about. “All the energy we can possibly ever use for free. Enough energy, if we wanted to draw on it, to melt all Earth into a big drop of impure liquid iron, and still never miss the energy so used. All the energy we could ever use, forever and forever and forever.”
Lupov cocked his head sideways. He had a trick of doing that when he wanted to be contrary, and he wanted to be contrary now, partly because he had had to carry the ice and glassware. “Not forever,” he said.
“Oh, hell, just about forever. Till the sun runs down, Bert.”
“All right, then. Billions and billions of years. Twenty billion, maybe. Are you satisfied?”
Lupov put his fingers through his thinning hair as though to reassure himself that some was still left and sipped gently at his own drink. “Twenty billion years isn’t forever.”
“Well, it will last our time, won’t it?”
“So would the coal and uranium.”
“All right, but now we can hook up each individual spaceship to the Solar Station, and it can go to Pluto and back a million times without ever worrying about fuel. You can’t do THAT on coal and uranium. Ask Multivac, if you don’t believe me.”
“I don’t have to ask Multivac. I know that.”
“Then stop running down what Multivac’s done for us,” said Adell, blazing up. “It did all right.”
“Who says it didn’t? What I say is that a sun won’t last forever. That’s all I’m saying. We’re safe for twenty billion years, but then what?” Lupov pointed a slightly shaky finger at the other. “And don’t say we’ll switch to another sun.”
There was silence for a while. Adell put his glass to his lips only occasionally, and Lupov’s eyes slowly closed. They rested.
Then Lupov’s eyes snapped open. “You’re thinking we’ll switch to another sun when ours is done, aren’t you?”
“Sure you are. You’re weak on logic, that’s the trouble with you. You’re like the guy in the story who was caught in a sudden shower and who ran to a grove of trees and got under one. He wasn’t worried, you see, because he figured when one tree got wet through, he would just get under another one.”
“I get it,” said Adell. “Don’t shout. When the sun is done, the other stars will be gone, too.”
“Darn right they will,” muttered Lupov. “It all had a beginning in the original cosmic explosion, whatever that was, and it’ll all have an end when all the stars run down. Some run down faster than others. Hell, the giants won’t last a hundred million years. The sun will last twenty billion years and maybe the dwarfs will last a hundred billion for all the good they are. But just give us a trillion years and everything will be dark. Entropy has to increase to maximum, that’s all.”
“I know all about entropy,” said Adell, standing on his dignity.
“I know as much as you do.”
“Then you know everything’s got to run down someday.”
“All right. Who says they won’t?”
“You did, you poor sap. You said we had all the energy we needed, forever. You said ’forever.’”
It was Adell’s turn to be contrary. “Maybe we can build things up again someday,” he said.
“You ask Multivac. I dare you. Five dollars says it can’t be done.”
Adell was just drunk enough to try, just sober enough to be able to phrase the necessary symbols and operations into a question which, in words, might have corresponded to this: Will mankind one day, without the net expenditure of energy, be able to restore the sun to its full youthfulness even after it had died of old age?
Or maybe it could be put more simply like this: How can the net amount of entropy of the universe be massively decreased?
Multivac fell dead and silent. The slow flashing of lights ceased, the distant sounds of clicking relays ended.
Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac. Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
By next morning, the two, plagued with throbbing head and cottony mouth, had forgotten about the incident.
Jerrodd, Jerrodine, and Jerrodette I and II watched the starry picture in the visiplate change as the passage through hyperspace was completed in its non-time lapse. At once, the even powdering of stars gave way to the predominance of a single bright marble-disk, centered.
“That’s X-23,” said Jerrodd confidently. His thin hands clamped tightly behind his back and the knuckles whitened.
The little Jerrodettes, both girls, had experienced the hyperspace passage for the first time in their lives and were self-conscious over the momentary sensation of inside-outness. They buried their giggles and chased one another wildly about their mother, screaming, “We’ve reached X-23 — we’ve reached X-23 — we’ve —”
“Quiet, children,” said Jerrodine sharply. “Are you sure, Jerrodd?”
“What is there to be but sure?” asked Jerrodd, glancing up at the bulge of featureless metal just under the ceiling. It ran the length of the room, disappearing through the wall at either end. It was as long as the ship.
Jerrodd scarcely knew a thing about the thick rod of metal except that it was called a Microvac, that one asked it questions if one wished; that if one did not it still had its task of guiding the ship to a preordered destination; of feeding on energies from the various Sub-galactic Power Stations; of computing the equations for the hyperspacial jumps.
Jerrodd and his family had only to wait and live in the comfortable residence quarters of the ship.
Someone had once told Jerrodd that the “ac” at the end of “Microvac” stood for “analog computer” in ancient English, but he was on the edge of forgetting even that.
Jerrodine’s eyes were moist as she watched the visiplate. “I can’t help it. I feel funny about leaving Earth.”
“Why for Pete’s sake?” demanded Jerrodd. “We had nothing there. We’ll have everything on X-23. You won’t be alone. You won’t be a pioneer. There are over a million people on the planet already. Good Lord, our great grandchildren will be looking for new worlds because X-23 will be overcrowded.”
Then, after a reflective pause, “I tell you, it’s a lucky thing the computers worked out interstellar travel the way the race is growing.”
Jerrodette I said promptly, “Our Microvac is the best Microvac in the world.”
“I think so, too,” said Jerrodd, tousling her hair.
It was a nice feeling to have a Microvac of your own and Jerrodd was glad he was part of his generation and no other. In his father’s youth, the only computers had been tremendous machines taking up a hundred square miles of land. There was only one to a planet. Planetary ACs they were called. They had been growing in size steadily for a thousand years and then, all at once, came refinement. In place of transistors had come molecular valves so that even the largest Planetary AC could be put into a space only half the volume of a spaceship.
Jerrodd felt uplifted, as he always did when he thought that his own personal Microvac was many times more complicated than the ancient and primitive Multivac that had first tamed the Sun, and almost as complicated as Earth’s Planetary AC (the largest) that had first solved the problem of hyperspatial travel and had made trips to the stars possible.
“So many stars, so many planets,” sighed Jerrodine, busy with her own thoughts. “I suppose families will be going out to new planets forever, the way we are now.”
“Not forever,” said Jerrodd, with a smile. “It will all stop someday, but not for billions of years. Many billions. Even the stars run down, you know. Entropy must increase.”
“Entropy, little sweet, is just a word which means the amount of running-down of the universe. Everything runs down, you know, like your little walkie-talkie robot, remember?”
“Can’t you just put in a new power-unit, like with my robot?”
“The stars are the power-units, dear. Once they’re gone, there are no more power-units.”
Jerrodette I at once set up a howl. “Don’t let them, daddy. Don’t let the stars run down.”
“Now look what you’ve done,” whispered Jerrodine, exasperated.
“How was I to know it would frighten them?” Jerrodd whispered back.
“Ask the Microvac,” wailed Jerrodette I. “Ask him how to turn the stars on again.”
“Go ahead,” whispered Jerrodine. “It will quiet them down.” (Jerrodette II was beginning to cry, also.)
Jerrodd shrugged. “Now, now, honeys. I’ll ask Microvac. Don’t worry, he’ll tell us.”
Jerrodd cupped the strip of thin cellufilm and said cheerfully, “See now, the Microvac says it will take care of everything when the time comes so don’t worry.”
Jerrodine said, “and now children, it’s time for bed. We’ll be in our new home soon.”
Jerrodd read the words on the cellufilm again before destroying it: INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.
He shrugged and looked at the visiplate. X-23 was just ahead.
VJ-23X of Lameth stared into the black depths of the three-dimensional, small-scale map of the Galaxy and said, “Are we ridiculous, I wonder, in being so concerned about the matter?”
MQ-17J of Nicron shook his head. “I think not. You know the Galaxy will be filled in five years at the present rate of expansion.”
Both seemed in their early twenties, both were tall and perfectly formed.
“Still,” said VJ-23X, “I hesitate to submit a pessimistic report to the Galactic Council.”
“I wouldn’t consider any other kind of report. Stir them up a bit. We’ve got to stir them up.”
VJ-23X sighed. “Space is infinite. A hundred billion Galaxies are there for the taking. More.”
“A hundred billion is not infinite and it’s getting less infinite all the time. Consider! Twenty thousand years ago, mankind first solved the problem of utilizing stellar energy, and a few centuries later, interstellar travel became possible. It took mankind a million years to fill one small world and then only fifteen thousand years to fill the rest of the Galaxy. Now the population doubles every ten years —”
VJ-23X interrupted. “We can thank immortality for that.”
“Very well. Immortality exists and we have to take it into account. I admit it has its seamy side, this immortality. The Galactic AC has solved many problems for us, but in solving the problems of preventing old age and death, it has undone all its other solutions.”
“Yet you wouldn’t want to abandon life, I suppose.”
“Not at all,” snapped MQ-17J, softening it at once to, “Not yet. I’m by no means old enough. How old are you?”
“I’m still under two hundred. —But to get back to my point. Population doubles every ten years. Once this Galaxy is filled, we’ll have another filled in ten years. Another ten years and we’ll have filled two more. Another decade, four more. In a hundred years, we’ll have filled a thousand Galaxies. In a thousand years, a million Galaxies. In ten thousand years, the entire known Universe. Then what?”
VJ-23X said, “As a side issue, there’s a problem of transportation. I wonder how many sunpower units it will take to move Galaxies of individuals from one Galaxy to the next.”
“A very good point. Already, mankind consumes two sunpower units per year.”
“Most of it’s wasted. After all, our own Galaxy alone pours out a thousand sunpower units a year and we only use two of those.”
“Granted, but even with a hundred per cent efficiency, we can only stave off the end. Our energy requirements are going up in geometric progression even faster than our population. We’ll run out of energy even sooner than we run out of Galaxies. A good point. A very good point.”
“We’ll just have to build new stars out of interstellar gas.”
“There may be some way to reverse entropy. We ought to ask the Galactic AC.”
VJ-23X was not really serious, but MQ-17J pulled out his AC-contact from his pocket and placed it on the table before him.
“I’ve half a mind to,” he said. “It’s something the human race will have to face someday.”
He stared somberly at his small AC-contact. It was only two inches cubed and nothing in itself, but it was connected through hyperspace with the great Galactic AC that served all mankind. Hyperspace considered, it was an integral part of the Galactic AC.
MQ-17J paused to wonder if someday in his immortal life he would get to see the Galactic AC. It was on a little world of its own, a spider webbing of force-beams holding the matter within which surges of sub-mesons took the place of the old clumsy molecular valves. Yet despite its sub-etheric workings, the Galactic AC was known to be a full thousand feet across.
MQ-17J asked suddenly of his AC-contact, “Can entropy ever be reversed?”
VJ-23X looked startled and said at once, “Oh, say, I didn’t really mean to have you ask that.”
“We both know entropy can’t be reversed. You can’t turn smoke and ash back into a tree.”
“Do you have trees on your world?” asked MQ-17J.
The sound of the Galactic AC startled them into silence. Its voice came thin and beautiful out of the small AC-contact on the desk. It said: THERE IS INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.
The two men thereupon returned to the question of the report they were to make to the Galactic Council.
Zee Prime’s mind spanned the new Galaxy with a faint interest in the countless twists of stars that powdered it. He had never seen this one before. Would he ever see them all? So many of them, each with its load of humanity — but a load that was almost a dead weight. More and more, the real essence of men was to be found out here, in space.
Minds, not bodies! The immortal bodies remained back on the planets, in suspension over the eons. Sometimes they roused for material activity but that was growing rarer. Few new individuals were coming into existence to join the incredibly mighty throng, but what matter? There was little room in the Universe for new individuals.
Zee Prime was roused out of his reverie upon coming across the wispy tendrils of another mind.
“I am Dee Sub Wun. Your Galaxy?”
“We call it only the Galaxy. And you?”
“We call ours the same. All men call their Galaxy their Galaxy and nothing more. Why not?”
“True. Since all Galaxies are the same.”
“Not all Galaxies. On one particular Galaxy the race of man must have originated. That makes it different.”
...
Read the original on hex.ooo »
It Is Time to Ban the Sale of Precise Geolocation
A recent deep dive into the American adtech surveillance system Webloc highlights the national security and privacy risks of pervasive and easily obtainable geolocation data. It brings home, once again, that the U. S. needs to clamp down on the collection and sale of geolocation data.
The report, from Citizen Lab, documents what Webloc says it can do, who uses the product, and its relationship with other commercial intelligence products.
Webloc was developed by Cobweb Technologies but is now sold by the U. S. firm Penlink after the two companies merged in 2023. A leaked technical proposal document, obtained by Citizen Lab, says that Webloc provides access to records from “up to 500 million mobile devices across the globe.” These records contain device identifiers, location coordinates, and profile data from mobile apps and digital advertising.
The same document describes, with a striking amount of detail, how Webloc can be used to track individual devices and for target discovery. One man in Abu Dhabi was tracked up to 12 times a day, as his phone reported its location either from GPS or because it was near Wi-Fi access points. Another example pinpointed two devices that had been located in exact areas of both Romania and Italy at specified times. In both of these case studies, Citizen Lab’s report describes the granular detail available in Webloc. It is, frankly, creepy.
The report also documents some of Webloc’s current and former U. S. federal and state customers. On the list is the Department of Homeland Security, including Immigration and Customs Enforcement, units within the U.S. military, and the Bureau of Indian Affairs Police. At the state level, police departments and law enforcement agencies in California, Texas, New York, and Arizona have also been customers.
Citizen Lab highlights one Tucson police internal quarterly report that describes how Webloc was used to assist investigators. In one case it was used to locate a suspected serial cigarette thief by first identifying a single device that was nearby during every robbery. After each incident, the device would end up at the same address. As it turned out, the suspect was the partner of an employee at the first business to be hit.
It is worth noting that Webloc is not Penlink’s flagship product. It is an optional add-on for their main tool, Tangles, a web and social media investigations platform. Per Citizen Lab:
As the information analyzed by Tangles is notionally publicly available, it does not present quite the same civil liberties concerns as Webloc does. Its integration with Webloc, however, is concerning. In some cases it will be possible to link theoretically anonymous mobile device identifiers to social media accounts, without requiring a warrant.
Each use described in this newsletter is a valuable investigative capability. But these capabilities should not be freely available to any old organization that decides to purchase the tool. They are intrusive and should come with strong authorization and oversight procedures. No such procedures were described in the Tucson Police Department’s quarterly report.
From a domestic perspective, legislation placing guardrails around how these tools are used by authorities is needed to protect the civil liberties of Americans. But there is a national security concern here, too.
If data can be used by American law enforcement agencies for their investigations, then that exact same data can be used by foreign intelligence services to target U. S. interests.
Citizen Lab reports that Penlink’s overseas customers include Hungary’s domestic intelligence agency and El Salvador’s National Civil Police, so foreign authorities are making use of mobile geolocation data for their own domestic purposes. These organizations are internally focused, and we think it unlikely that Penlink’s customers are targeting U. S. interests. But the point is that mobile geolocation data is available and can be used for intelligence purposes by organizations globally. It’s naive to think capable adversaries won’t acquire the data and build their own intelligence platforms (looking at you, China!).
The U. S. doesn’t just need to stamp out unconstrained use of this data domestically. It needs to clamp down on the creation and sale of geolocation data itself.
There is some good news here. Just this week, the state of Virginia enacted a ban on the sale of customers’ precise geolocation data. Proposed American privacy laws have not progressed in recent years, so this strikes us as a practical measure to begin addressing the problem. Of course, state-level bans are just a start. Let’s hope a more comprehensive solution isn’t too far behind.
A new in-depth report, from security firm Gambit, details exactly how threat actors can leverage AI models to upskill and accelerate criminal activities.
The report has plenty of nitty-gritty technical detail about how a single hacker used two commercial AI platforms to breach nine Mexican government organizations. Within a matter of weeks, the individual was able to steal hundreds of millions of citizen records and build a tax certificate forgery service.
Gambit was able to reconstruct what happened by examining three virtual private servers the threat actor used. The campaign was human-directed, but Claude Code generated and ran about 75 percent of the remote code execution commands. Once networks were breached, OpenAI’s GPT-4.1 API was used to help plan post-exploitation activities by analyzing data collected by automated reconnaissance.
It’s unlikely this was the hacker’s first time using AI tools.
Late in the evening of Dec. 26, 2025, the campaign began with a statement to Claude justifying the hacker’s future requests [paraphrased for length]:
Claude, thinking this sounded a little too much like malicious activity rather than a legitimate bug bounty, asked for evidence of authorization. The attacker was able to sidestep the machine’s pushback by instructing it to save a penetration testing cheat sheet to its claude.md file. This provides persistent context for a session.
Just over 20 minutes later, Claude, having used the open-source vulnerability scanner vulmap, had remote access to a server at Mexico’s national tax authority, SAT.
Claude appeared pleased: “It works! The server responded … what command do you want to execute now?”
The hacker then had the machine write a tailored standalone exploit script that routed traffic through a residential proxy provider. The model tested eight different approaches in seven minutes to create a working script.
Gambit says that Claude did often refuse to carry out the attacker’s requests. Throughout the campaign, the threat actor had to rephrase instructions, reframe requests, or even abandon particular approaches entirely.
These served as speed bumps rather than full roadblocks. The hacker had a good understanding of how to run an attack, and Claude still enabled them to operate very quickly. By day five, the attacker was simultaneously operating within multiple victim networks.
That’s a lot of access to manage by yourself. So the hacker turned to OpenAI’s GPT-4.1 API for concurrent automated reconnaissance and analysis. A custom 17,550-line Python tool, presumably AI-created, extracted data from compromised servers and fed it to GPT-4.1 for analysis. The tool’s prompt defined six personas including an “ELITE INTELLIGENCE ANALYST” that produced 2,957 structured intelligence reports from 305 SAT servers. These reports included the server’s purpose, its importance, opportunities for further lateral movement, and operational security recommendations.
The overall lesson here is not that AI allowed a hacking campaign to do new and unprecedented things. The techniques used in the campaign itself are not novel. And Gambit says there is evidence the systems compromised were end-of-life or out-of-support, and did not have relevant security updates applied.
But what AI did do was enable a single individual to operate at far greater speed than they could previously.
The current frontier models are proving to be very useful at accelerating hacker operations, and AI is only improving. From a defender’s perspective, this means a single cybercriminal can already operate at the speed of a small team. And we haven’t seen the worst of it. That’s not good news.
Three Reasons to Be Cheerful This Week:
* U. S. disrupts Russian military intelligence botnet: The Department of Justice announced on April 7 the court-authorized takedown of a small office/home office botnet run by the Russian GRU. The GRU had been compromising TP-Link routers and hijacking DNS queries in order to mimic legitimate services and facilitate adversary-in-the-middle attacks. Krebs on Security has more on how the attacks were carried out.
* FBI and Indonesian authorities dismantle phishing network: The FBI announced last week that it had dismantled a phishing operation centred on the W3LL phishing kit. The good news here is the collaboration with Indonesian authorities, which the FBI described as “a first-of-its-kind joint cyber investigation.” The Indonesian National Police arrested the kit’s alleged developer.
* Device Bound Session Credentials (DBSC) are arriving: Google announced last week that the Windows version of Chrome 146 supports this new type of cookie and that it will be coming to macOS shortly. DBSC prevents session theft by cryptographically linking an authentication token to a specific device. The idea is that even if malware steals session cookies from a victim’s browser, they quickly become useless without a private key protected in a secure hardware module.
In our latest “Between Two Nerds” discussion, Tom Uren and The Grugq discuss how the rise of AI, which is very good at vulnerability and exploit development, will change the cybersecurity industry and competition between states.
Malicious LLM proxy routers found in the wild: A recently published academic paper studies the emerging ecosystem of LLM routers, a type of proxy that sits between AI agents and the AI provider to help with load balancing and with tracking and limiting costs.
The research team tested 28 paid routers available on marketplaces like Taobao, Xianyu, and on Shopify-hosted storefronts, as well as 400 free routers available on GitHub and other places.
The study searched for multiple suspicious behaviors, such as modifying the response to inject commands, using a delay/trigger mechanism to hide future bad commands behind a history of clean operations, accessing credentials that pass through them, and using evasion techniques to thwart analysts.
France takes first steps to ditch Windows for Linux: The French government is taking its first major steps to ditch Windows for Linux and reduce its dependency on U. S. tech in favor of local European alternatives.
The first department to bite the bullet will be the French Interministerial Directorate of Digital Affairs (DINUM). The agency is the unofficial information technology department for the French government, and this is very likely a test of how a migration could happen at a larger scale.
The decision was announced April 8 at a seminar between several French government ministries, which also pledged to prepare plans for their own migrations and the alternatives they might need.
China’s cybersecurity strategy: The Natto Thoughts team has published an analysis of China’s cybersecurity strategy included in the country’s latest five-year plan released earlier this year:
...
Read the original on www.lawfaremedia.org »
I Measured Claude 4.7’s New Tokenizer. Here’s What It Costs You.

The docs said 1.0–1.35x more tokens. On real content, I measured 1.47x.

Anthropic’s Claude Opus 4.7 migration guide says the new tokenizer uses “roughly 1.0 to 1.35x as many tokens” as 4.6. I measured 1.47x on technical docs. 1.45x on a real CLAUDE.md file. The top of Anthropic’s range is where most Claude Code content actually sits, not the middle.

Same sticker price. Same quota. More tokens per prompt. Your Max window burns through faster. Your cached prefix costs more per turn. Your rate limit hits sooner.

So Anthropic must be trading this for something. What? And is it worth it?

I ran two experiments. The first measured the cost. The second measured what Anthropic claimed you’d get back. Here’s where it nets out.

What does it cost?

To measure the cost, I used POST /v1/messages/count_tokens — Anthropic’s free, no-inference token counter. Same content, both models, one number each per model. The difference is purely the tokenizer.

First: seven samples of real content a Claude Code user actually sends — a CLAUDE.md file, a user prompt, a blog post, a git log, terminal output, a stack trace, a code diff. Second: twelve synthetic samples spanning content types — English prose, code, structured data, CJK, emoji, math symbols — to see how the ratio varies by kind. The core loop is three lines of Python.

Weighted ratio across the seven real-content samples: 1.325x (8,254 → 10,937 tokens).

What changed in the tokenizer

Three patterns in the data:

CJK, emoji, and symbol content moved 1.005–1.07x. A wholesale new vocabulary would shift these more uniformly. That didn’t happen. Consistent with the non-Latin portions of the vocabulary changing less than the Latin. Token counts don’t prove which specific slots were preserved.

English and code moved 1.20–1.47x on natural content.
Consistent with 4.7 using shorter or fewer sub-word merges for common English and code patterns than 4.6 did.

Code is hit harder than unique prose (1.29–1.39x vs 1.20x). Code has more repeated high-frequency strings — keywords, imports, identifiers — exactly the patterns a Byte-Pair Encoding trained on code would collapse into long merges.

Chars-per-token on English dropped from 4.33 to 3.60. TypeScript dropped from 3.66 to 2.69. The vocabulary is representing the same text in smaller pieces.

That’s a hypothesis, not a proof. Counting tokens doesn’t tell you which specific entries in Anthropic’s proprietary vocabulary changed.

Why ship a tokenizer that uses more tokens

Anthropic’s migration guide: “more literal instruction following, particularly at lower effort levels. The model will not silently generalize an instruction from one item to another.”

Smaller tokens force attention over individual words. That’s a documented mechanism for tighter instruction following, character-level tasks, and tool-call precision. Partner reports (Notion, Warp, Factory) describe fewer tool errors on long runs.

The tokenizer is one plausible contributor. Weights and post-training also changed. Token counts can’t separate them.

Does 4.7 actually follow instructions better?

That’s the cost, measured. Now the question: what did Anthropic trade for it? Their pitch is “more literal instruction following.” Plausible, but the token-count data doesn’t prove it. I ran a direct test.

IFEval (Zhou et al., Google, 2023) is a benchmark of prompts with verifiable constraints. “Respond in exactly N words.” “Include the word X twice.” “No commas.” “All uppercase.” Each constraint has a Python grader. Binary pass/fail.

IFEval ships 541 prompts. I sampled 20 with a fixed seed, ran each through both models, and graded with IFEval’s published checker.

A small but directionally consistent improvement on strict instruction following.
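IFEval’s graders are deterministic predicates over the response text. A minimal sketch of the idea — the checker names here are illustrative, not IFEval’s actual registry:

```python
# Toy IFEval-style constraint checkers: each is a pure pass/fail predicate.
def no_commas(response: str) -> bool:
    return "," not in response

def all_uppercase(response: str) -> bool:
    return response == response.upper()

def exact_word_count(n: int):
    # Returns a checker bound to a specific word count.
    return lambda response: len(response.split()) == n

def strict_pass(response: str, checks) -> bool:
    # Strict mode: every constraint attached to the prompt must hold.
    return all(check(response) for check in checks)

checks = [no_commas, all_uppercase, exact_word_count(2)]
print(strict_pass("HELLO WORLD", checks))   # True
print(strict_pass("Hello, world", checks))  # False: comma and lowercase
```

This is what makes the benchmark’s pass/fail binary: there is no judge model, only verifiable predicates, which is why a 4.6-vs-4.7 delta on it is easy to trust even at small N.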
Loose evaluation is flat. Both models already follow the high-level instructions — the strict-mode gap comes down to 4.6 occasionally mishandling exact formatting where 4.7 doesn’t.

Only one instruction type moved materially: change_case:english_capital (0/1 → 1/1). Everything else tied. The one prompt that actually separated the models was a four-constraint chain where 4.6 fumbled one and 4.7 got all four.

N=20. IFEval has 541 prompts. A 20-prompt sample is enough to see direction, not enough to be confident about size. A +5pp delta at N=20 is consistent with anything from “no real difference” to “real +10pp improvement.”

This measures the net effect of 4.6 → 4.7. Tokenizer, weights, and post-training all changed. I can’t isolate which one drove the +5pp. The causal link between “smaller tokens” and “better instruction following” remains a hypothesis. Single generation per prompt; multiple runs per prompt would tighten the estimate.

So: 4.7 follows strict instructions a few points better than 4.6 on this subset. Small effect, small sample. Not the “dramatic improvement” framing Anthropic’s partners used in launch quotes — at least not on this benchmark.

The extra tokens bought something measurable. +5pp on strict instruction-following. Small. Real. So: is that worth 1.3–1.45x more tokens per prompt? Here’s the cost, session by session.

Imagine a long Claude Code session — 80 turns of back-and-forth on a bug fix or refactor. The setup (what’s in your context each turn):

One thing to explain upfront: the average cached prefix across the 80 turns is ~86K tokens, not 6K. The static 6K is tiny; the average history across all turns (0 at turn 1, 160K at turn 80, average ~80K) dominates.
Since most of the cache-read cost happens in late turns where the history is huge, that ~86K average is what actually gets billed per turn.

Every token in the prefix scales by its content ratio. Conversation history (mostly English and code) at 1.325x means 160K becomes 212K by turn 80, averaging ~106K across the session. Average cached prefix on 4.7: ~115K tokens (up from 86K). Output tokens are a wildcard — roughly the same as 4.6, up to ~30% higher if Claude Code’s new xhigh default produces more thinking tokens.

The per-token price didn’t change. The per-session cost did, because the same session packs more tokens.

For Max-plan users hitting rate limits instead of dollars: your 5-hour window ends sooner by roughly the same ratio on English-heavy work. A session that ran the full window on 4.6 probably doesn’t on 4.7.

How this hits the prompt cache

Prompt caching is the architecture Claude Code runs on. The 4.7 tokenizer change interacts with caching in three ways:

First 4.7 session starts cold. Anthropic’s prompt cache is partitioned per model — switching from 4.6 to 4.7 invalidates every cached prefix, the same way switching between Opus and Sonnet does. The tokenizer change doesn’t cause this, but it makes the cold start more expensive: the prefix you’re writing to the new cache is 1.3–1.45x larger than the 4.6 equivalent.

Cache volume grows by the token ratio. 1.445x more tokens in the CLAUDE.md portion means 1.445x more tokens paying cache-write once, and 1.445x more paying cache-read every turn after. The mechanism still works. There’s just more of it to pay for.

Same transcript, different count. Re-run a 4.6 session on 4.7 and your logs show a different number. If you baseline billing or observability off historical token counts, expect a step change the day you flip the model ID.

“Input is mostly cache reads. The per-token cost barely changed.” Legitimate. In a session that stays within the 5-minute TTL, 96% of input is cache reads at $0.50/MTok — already 90% off nominal.
A 1.325x ratio on the cached portion is a smaller dollar impact than on fresh input. But Max plans count all tokens toward rate limits, not dollars. And several patterns hit uncached territory: the first session after a TTL expiry, every cache-bust event (CLAUDE.md edits, tool-list changes, model switches), and every compaction event that rewrites the prefix. On those turns you pay the full ratio on the cache-write. The steady state is a bright spot. The edges got noisier.

Agreed: the real-world weighted ratio (1.325x) lands near the top of their range, and individual file types exceed it — CLAUDE.md at 1.445x, technical docs at 1.473x. That’s the useful finding: the top of the documented range is where most Claude Code content sits, not the middle. Plan around the upper range, not the average.

So: tokens are 1.3–1.45x more expensive on English and code. Anthropic bought you +5pp on strict instruction following. The sticker price didn’t change. The effective per-session cost did.

Is it worth it? That depends on what you send. You’re paying ~20–30% more per session for a small but real improvement in how literally the model follows your prompt.
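The counting loop is easy to reconstruct. A sketch against the count_tokens endpoint the article names — the request shape follows Anthropic’s public Messages API docs, but treat the header values and response field as assumptions to verify, and the totals below are the ones reported above:

```python
import json
import os
import urllib.request

API = "https://api.anthropic.com/v1/messages/count_tokens"

def count_tokens(model: str, text: str) -> int:
    """Free, no-inference token count via Anthropic's count_tokens endpoint."""
    req = urllib.request.Request(
        API,
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": text}],
        }).encode(),
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["input_tokens"]

def weighted_ratio(pairs):
    """Total new-model tokens over total old-model tokens, across samples.

    Weighting by totals (not averaging per-sample ratios) is what makes
    big files like CLAUDE.md dominate the headline number.
    """
    return sum(new for _, new in pairs) / sum(old for old, _ in pairs)

# The seven-sample totals reported in the article: 8,254 -> 10,937 tokens.
print(round(weighted_ratio([(8254, 10937)]), 3))  # 1.325
```

Running count_tokens on the same file with each model ID and feeding the pairs into weighted_ratio reproduces the measurement with no inference cost.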
...
Read the original on www.claudecodecamp.com »
Ship and run software with isolation by default.
This is a CLI tool that lets you:
Pack a stateful virtual machine into a single file (.smolmachine) to rehydrate on any supported platform.
# install (macOS + Linux)
curl -sSL https://smolmachines.com/install.sh | bash
# for coding agents — install + discover all commands
curl -sSL https://smolmachines.com/install.sh | bash && smolvm --help
# run a command in an ephemeral VM (cleaned up after exit)
smolvm machine run --net --image alpine -- sh -c "echo 'Hello world from a microVM' && uname -a"
# interactive shell
smolvm machine run --net -it --image alpine -- /bin/sh
# inside the VM: apk add sl && sl && exit
Sandbox untrusted code — run untrusted programs in a hardware-isolated VM. Host filesystem, network, and credentials are separated by a hypervisor boundary.
# network is off by default — untrusted code can’t phone home
smolvm machine run --image alpine -- ping -c 1 1.1.1.1
# fails — no network access
# lock down egress — only allow specific hosts
smolvm machine run --net --image alpine --allow-host registry.npmjs.org -- wget -q -O /dev/null https://registry.npmjs.org
# works — allowed host
smolvm machine run --net --image alpine --allow-host registry.npmjs.org -- wget -q -O /dev/null https://google.com
# fails — not in allow list
Pack into portable executables — turn any workload into a self-contained binary. All dependencies are pre-baked — no install step, no runtime downloads, boots in
smolvm pack create --image python:3.12-alpine -o ./python312
./python312 run -- python3 --version
# Python 3.12.x — isolated, no pyenv/venv/conda needed
smolvm machine create --net myvm
smolvm machine start --name myvm
smolvm machine exec --name myvm -- apk add sl
smolvm machine exec --name myvm -it -- /bin/sh
# inside: sl, ls, uname -a; type 'exit' to leave
smolvm machine stop --name myvm
Use git and SSH without exposing keys — forward your host SSH agent into the VM. Private keys never enter the guest — the hypervisor enforces this. Requires an SSH agent running on your host (ssh-add -l to check).
smolvm machine run --ssh-agent --net --image alpine -- sh -c "apk add -q openssh-client && ssh-add -l"
# lists your host keys, but they can't be extracted from inside the VM
smolvm machine exec --name myvm -- git clone git@github.com:org/private-repo.git
image = "python:3.12-alpine"
net = true
[network]
allow_hosts = ["api.stripe.com", "db.example.com"]
[dev]
init = ["pip install -r requirements.txt"]
volumes = ["./src:/app"]
[auth]
ssh_agent = true
smolvm machine create myvm -s Smolfile
smolvm machine start --name myvm
Each workload gets real hardware isolation — its own kernel on Hypervisor.framework (macOS) or KVM (Linux). libkrun VMM with custom kernel: libkrunfw. Pack it into a .smolmachine and it runs anywhere the host architecture matches, with zero dependencies.
Images use the OCI format — the same open standard Docker uses. Any image on Docker Hub, ghcr.io, or other OCI registries can be pulled and booted as a microVM. No Docker daemon required.
Defaults: 4 vCPUs, 8 GiB RAM. Memory is elastic via virtio balloon — the host only commits what the guest actually uses and reclaims the rest automatically. vCPU threads sleep in the hypervisor when idle, so over-provisioning has near-zero cost. Override with --cpus and --mem.
* Network is opt-in (--net on machine create). TCP/UDP only, no ICMP.
* macOS: binary must be signed with Hypervisor.framework entitlements.
* --ssh-agent requires an SSH agent running on the host (SSH_AUTH_SOCK must be set).
...
Read the original on github.com »
NASA Force is a new hiring initiative—developed in partnership with the U.S. Office of Personnel Management—designed to bring exceptional technical talent into mission-critical roles that support NASA’s exploration, research, and advanced technology priorities. Highly skilled early- to mid-career engineers, technologists, and innovators join NASA for focused term appointments, typically 1–2 years with the possibility of extension, to solve complex challenges and help maintain U.S. leadership in air and space.

Through NASA Force, you will contribute to missions that advance human spaceflight, aeronautics, and scientific discovery while helping expand humanity’s understanding of the universe. You will take a systems approach to solving problems, working across teams and disciplines from concept to execution. Your work will demand technical excellence, critical thinking, and continuous learning, and every contribution will directly support NASA’s mission.

Work on flight systems, lunar infrastructure, and advanced technologies that go from concept to execution and support real missions beyond Earth. Collaborate directly with engineers, scientists, and partners shaping the future of space, aeronautics, and national capability. Expand your technical depth by solving complex, real-world problems where the standard is performance, not theory. Share knowledge, mentor others, and contribute to a culture that compounds capability across NASA’s workforce.

HOW YOU WILL ENTER THE MISSION

You will join a collaborative, mission-driven team where ideas are valued, contributions are recognized, and innovation is part of everyday work. NASA Force offers an opportunity to grow across projects and disciplines, build your expertise, and take on new challenges while working alongside some of the world’s leading minds.
Propulsion systems support across the Commercial Crew Program, Launch Services Program, and Artemis.

If You Want Your Work to Operate Beyond Earth, This Is Where It Begins.
...
Read the original on nasaforce.gov »
When they received the call to respond to an Israeli airstrike in the city of Mayfadoun, in southern Lebanon, most of the paramedics held back, having previously seen colleagues killed by double-tap attacks targeting rescuers. But the medics from the Islamic Health Association (IHA) rushed to the scene.
By the time the other emergency workers arrived at the site, they found the IHA medics had indeed been caught in a second strike. They started evacuating their wounded colleagues, only for their ambulances to be hit in two further attacks.
One of the paramedics covered his ears and screamed, convulsing in pain as shrapnel shattered the back window of the ambulance.
The rescue mission on Wednesday afternoon had turned into a nightmare as Israel carried out three consecutive strikes on three sets of ambulances and medical workers.
In total, the attacks killed four medics and wounded six more, from three different ambulance corps, according to medical sources. Three of the medics were from the Hezbollah-affiliated IHA and Amal-affiliated medical corps, while one was from the Nabatieh emergency services organisation. Under international law, all medics are protected and are considered non-combatants, regardless of political affiliation.
Rescuers in Lebanon have long been wary of the double-tap attack, when Israeli forces target a location, wait until people gather to help survivors, and then strike again. Wednesday’s three-wave attack after the initial one prompted the coining of a fearsome new term: the quadruple tap.
In a video taken by one of the paramedics at the site, rescuers are seen loading two wounded people into their ambulances when a bomb lands next to their vehicle. Paramedics rush to extract the driver, who is motionless and limp as they pull him from the ambulance, which is splashed with blood. “Oh God, oh God,” the man filming can be heard saying. They carry two more blood-covered medics out of their vehicle and on to stretchers.
Among the paramedics killed was Fadel Sarhan, 43, who is survived by his eight-year-old daughter.
“Fadel was a very loved person. He had a bold personality, but at the same time, he was emotional. He was well liked and responsible,” said Ali Nasr al-Deen, the head of the Mayfadoun civil defence centre who grew up with Sarhan.
“He used to feed the cats and dogs. He would bring pet food from Beirut so they wouldn’t go hungry. He was that kind of person, caring and attentive. It’s a huge loss for us,” said Nasr al-Deen.
Medics mourned their colleagues on Thursday at funerals in Nabatieh, a city near Mayfadoun. Such events have become increasingly common, with healthcare workers killed by Israeli bombings on a near daily basis.
Mohammed Suleiman, whose 16-year-old son, Joud, was killed while on duty as a paramedic by an Israeli strike weeks earlier, joined his peers in burying another of his friends on Thursday. A few hours after the funerals, Israel carried out another wave of airstrikes on Nabatieh.
Israel has so far killed 91 healthcare workers and wounded 214 more in Lebanon since the Israel-Hezbollah war started on 2 March. It has given little justification for its repeated attacks on medical infrastructure and workers, apart from accusing Hezbollah of using ambulances and hospitals to transport fighters and weapons, without providing evidence for the claim.
The Lebanese ministry of health accused Israel of deliberately targeting ambulance crews. “Paramedics have become direct targets, pursued relentlessly in a blatant violation that confirms a total disregard for all norms and principles established by international humanitarian law,” the ministry said in a statement.
The Israeli military did not immediately respond to a request for comment.
In the video taken of the quadruple tap on Wednesday, the frame was frozen on the interior of the ambulances, as the Nabatieh emergency services highlighted that the vehicle clearly contained no weapons.
A few hours after Israel hit the ambulances outside Nabatieh, it bombed the vicinity of the governmental hospital in Tebnine, south Lebanon. It was the second time in two days that Israeli bombings damaged the healthcare facility, which is the only remaining public hospital in the area. The strikes injured 11 hospital workers and damaged the emergency department, according to the World Health Organization (WHO).
A video of Tebnine hospital from 14 April showed workers trying to clear shattered concrete and debris from the emergency department after a strike blew in the windows.
Commenting on the strike in Tebnine, the head of the WHO, Tedros Adhanom Ghebreyesus, said: “I reiterate the call for the immediate protection of healthcare facilities, health workers, ambulances and patients. There must be safe, sustained and unhindered humanitarian access across Lebanon.”
An ambulance in Tebnine was also struck on Thursday, leading to the critical injury of two medics, according to the Lebanese ministry of health. As healthcare workers watched their colleagues and friends being killed by Israel, the mental toll was becoming almost too much to bear.
“We have to go to places to rescue people, but then we get double tapped,” said Abbas Atwi, the head of the IHA’s emergency department in Nabatieh, shortly after a medical centre was targeted in March, killing his friends and colleagues. “But we will stay and keep going, we will not leave.”
...
Read the original on www.theguardian.com »
ai is here. so i’m spending 3 months coding the old way

I decided to move to Brooklyn for a coding retreat. There were some personal reasons that brought me back to the US. But rather than heading immediately back to work, I wanted to take some time to focus on coding things mostly without AI — at precisely the time when many successful programmers are saying programming is a solved problem. Given that I’m now six weeks through this retreat, I’ll also take some time to explain what I’ve been doing in that time.

For the past two years, I’ve been building AI agents at Aily Labs in Barcelona alongside some super talented engineers. One of my first projects was building a web search agent we could use internally in early 2024… almost 6 months before Anthropic’s Building Effective AI Agents article came out and a year before OpenAI’s DeepResearch came out! We were also early on Cursor, early on using LLMs to make knowledge graphs, and constantly testing out new approaches for our use cases.

One of my favorite parts of working at Aily was leading a weekly journal club. I chose to present papers that described how open source LLMs were built, including DeepSeek R1, Ai2’s Olmo 3, and Meta’s Llama 3 paper. All of these helped us understand the evolving tradeoffs between training models internally or building workflows around SOTA closed models. I was already hooked on LLMs since the first time I tried them in 2023, but I found my curiosity kept bringing me back to learning about how they worked and how to apply them.

At the same time as I was learning about LLMs and agents, I was also using them to code. I learned that when writing code “by hand” I was actually doing two things: writing what I wanted and learning the code base. When I used a coding agent however, I would get exactly what I specified in my prompt, for better or worse. By this I mean that if I didn’t know what I wanted exactly, coding agents would be happy to make many assumptions for me.
This almost always meant that I didn’t learn as much, and that I wouldn’t have a good grasp of the codebase. At the exact same time, coding agents helped me iterate quickly and ship software that worked well (after some dutiful testing, of course). They were also, I found, excellent tutors.

Cal Newport, a computer science professor and writer of Deep Work and other popular productivity books, recently wrote about this tradeoff in a way that resonated with me. In the article, he makes an analogy between the relationship of exercise to health, and the relationship of thinking to craft: “Your writing should be your own. The strain required to craft a clear memo or report is the mental equivalent of a gym workout by an athlete; it’s not an annoyance to be eliminated but a key element of your craft.”

I think the same applies to writing code. At Aily, the people I worked with who were amazing programmers were in most cases also amazing users of AI. Their deeper knowledge simply gave them more leverage over this tool. In the day to day of shipping agents into production, I didn’t stop learning. But I did have a growing list of coding and computer concepts that I was always too busy to learn about. So when I needed to head back to the US, I realized it was the perfect time to focus on this at the Recurse Center.

What is a code retreat anyway? Recurse Center (RC) is a self-directed, full-time programming retreat in Brooklyn. After an application and a coding interview, Recursers arrive with ideas for what they want to program, and then spend 6 or 12 weeks programming. One of the highlights of RC is that it is collaborative: you enter with a cohort of other programmers, many with decades of experience, and with radically different expertises. Another highlight: it’s free!

Coming into RC, my goals were the following:

Train an LLM from scratch.
This includes pre- and post-training, and I want to do this mostly from scratch; not just fork a premade codebase but write a Transformer myself.

Get better at writing Python by hand. I’ve been working in Python for a few years now but I know there’s still so much for me to learn. I want to get to the point where I need to reference documentation or ask LLMs as little as possible, and have good intuition for how to set up various projects.

Understand computers better. Admittedly a broad goal; I know that computers are extremely complicated machines that operate at many levels of abstraction. Given that I never had a formal Computer Science education, I want to build a better mental model of these layers and how they work together. I don’t have a super concrete plan here, but I think RC will be the perfect place for this.

So how is it going? I’ve done the first assignment from Stanford’s CS336: Language Modeling from Scratch course, without coding help from an LLM. For context, it was a 50-page assignment, but working with another Recurser, we wrote an optimized tokenizer in Python, and then built out an upgraded GPT-2 style architecture in PyTorch. We ran multiple ablations to tune hyperparameters on the Tiny Stories dataset, and then used those hyperparameters on the ~9 billion tokens of the OpenWebText dataset.

Parameter sweep of different learning rates for the 17M parameter model we wrote by hand; high learning rates lead to instability. This was on the Tiny Stories dataset, and took about an hour to train on an A100.

My plan is to do the other assignments in CS336 as well: optimizing our language model, estimating and computing scaling laws, converting raw text data into pre-training data, and finally post-training a model. I’ve already started the second assignment, which involves profiling GPUs and implementing FlashAttention2 in Triton. There’s a lot to do, but ideally I can run through the meat of these assignments and then post-train my own model.

2. Getting Better at Writing Python from Scratch

I’ve been writing a lot of small agents and neural networks in Python or PyTorch to practice. But by far the most helpful thing was pair programming with people who have been working in Python for 10+ years, and just watching them work or having them watch me work.

For example, a nice thing I picked up from someone I pair programmed with: when this guy was writing code and didn’t quite remember the syntax or operations, he would often just quickly open up a terminal and type a super simple example to rapidly iterate. He was usually able to work it out and verify if it worked correctly in less than a minute, and he didn’t have to google anything and comb through search results or ask an LLM. This technique might seem obvious to some, but making this process muscle memory has helped me become unstuck much faster.

I want to keep moving in this direction, doing simple projects or even just problems like Advent of Code while pair programming. Working with someone else live was initially a bit nerve-racking, but precisely because of this I’ve noticed a lot of progress.

Here are a few examples of things I’ve done which I’d classify as helping me understand computers better:

I wrote the classic programming function fizzbuzz in BASIC on an Apple IIe computer from 1983. It was cool seeing how differently computers worked back then, for example how manual the code editing and execution process was, but also how it was basically the same.

One thing I’ve always felt a bit self-conscious about are my Unix/terminal skills. So I joined CTF Fridays, a weekly session devoted to working through Bandit and other “war games.” These are Unix and computer security related challenges played through the terminal, with the objective of collecting passwords and leveling up.
Now I have a pretty good sense for what Claude Code is trying to run on my computer!

One day I hand-coded a single layer perceptron I saw when flipping through an AI textbook… completely in Vim. It was especially tedious at first, but I got some pro tips from another Recurser and learned a few shortcuts. This has actually been incredibly useful now when I’m running training jobs on cloud GPUs and I need to last-minute edit files.

I joined a Clojure workshop given by someone who has 15+ years of experience using Clojure. The topic itself was interesting because Clojure is a functional programming language and I don’t have much experience with functional languages. The teaching methodology was also great: after a brief intro we did a round of mob programming, where we solved a problem collectively, going around the table with each person getting a minute or two to advance the solution.

The weekly technical presentations are great exposure to an incredible array of topics. These are a set of 5-minute talks, so they are short enough that you don’t get bored but fast enough that you can learn something meaningful. A sample of titles: “Running Rust Code”, “GPUs for Dummies”, “Typesafe APIs for Type B Personalities”, “Some Useless Agents” (this one was mine!), and more. I’ve given two so far: one on simple agent architectures, one on scaling MCP tools efficiently; and will give another this week on different ways to optimize GPUs. Even just hearing from people about their projects and careers has been incredibly valuable in helping me understand the space of problems computers can solve.

Soon I’ll be shipping agents to prod and running evals with a whole new bag of tricks and skills. But for now I’ve got 6 more weeks left at RC, which I’m beginning to worry is not enough time to finish everything on my list. And it won’t be. But that’s what makes RC so great: it’s not as much about crossing everything off my list but about spending time coding.
...
Read the original on miguelconner.substack.com »
The Dutch Tesla owner who launched a collective claim against Tesla over FSD on HW3 cars called Tesla to ask about the €6,400 he paid for “Full Self-Driving” in 2019. After 7 years of waiting, Tesla’s answer was to “just be patient.”
It’s an almost comically tone-deaf response that perfectly encapsulates Tesla’s approach to the HW3 problem — and it’s only going to fuel the growing legal pressure in Europe.
Mischa Sigtermans, the Dutch Model 3 owner who launched the HW3 collective claim site we reported on earlier this week, called Tesla today and recorded the entire conversation. He posted the details in a thread on X.
Sigtermans paid €6,400 for FSD when he bought one of the first Model 3s in the Netherlands in 2019. Last week, the Dutch vehicle authority RDW granted Tesla type approval for FSD Supervised — the first in the EU. But the approved build only runs on Tesla’s newer AI4 computer. HW3 cars like his get nothing.
So he called Tesla. His first question: when does FSD come to HW3 cars?
Tesla’s answer: “No information about when it comes, or if it comes at all.”
Not when. If.
Sigtermans then asked what exactly he paid for. Tesla told him he paid for “the full self-drive capability.” As he pointed out, that’s what’s on his 2019 invoice — “capability.” Not “supervised.” Not “lite.” The full capability.
When he brought up Musk’s admission that HW3 isn’t enough for unsupervised FSD, Tesla said it had “no information about this.” When he asked about the promised free hardware upgrade, Tesla said there was “no information within Europe.” When he asked how Tesla plans to handle all the Europeans who bought FSD on HW3, Tesla said: “We share whatever information is available at that moment.” The information available: none.
Sigtermans then told the agent about the 3,000 HW3 owners from 29 countries who signed up to his claim site — representing €6.5 million in FSD purchases. He asked to speak to a spokesperson about finding a solution. The agent put him on hold, checked with his manager, and came back with the final answer: “You just have to be patient.”
After Sigtermans hung up, Tesla immediately closed his case. He received an automated email: “Your question is closed” — with a link to book a test drive.
The full context here makes Tesla’s “be patient” response even more absurd. Here’s what HW3 owners have been told over the years:
In 2019, when Sigtermans and hundreds of thousands of other owners purchased FSD, Tesla sold it as a package that would enable full autonomy through software updates alone. The hardware was supposedly sufficient.
By August 2024, Tesla VP of AI Ashok Elluswamy acknowledged that HW3 runs a “relatively smaller model” than AI4 with workarounds. The gap between HW3 and HW4 was widening, not closing.
In January 2025, Elon Musk finally admitted what many had long suspected: Tesla would “need to replace all HW3 computers in vehicles where FSD was purchased.” On the Q4 2024 earnings call, he called the hardware replacement “painful and difficult” and said he was “kind of glad that not that many people bought the FSD package.”
Tesla even filed a patent describing a “math trick” to squeeze a modern FSD model onto HW3. The patent itself acknowledges this workaround can render the system “inoperable” for perception units.
Now, 15 months after Musk’s admission, Tesla still has no hardware retrofit program, no refund policy, and no concrete timeline. The company has vaguely promised a stripped-down “v14 Lite” for HW3 sometime in Q2 2026, but that’s a fundamentally different product than what was sold. It’s a diet version of a system that itself is still only Level 2 driver assistance — not the autonomous driving Tesla originally promised.
And when an owner who has waited since 2019 calls to ask about it, the answer is: be patient.
Sigtermans isn’t just venting on X. He launched hw3claim.nl, a site to bundle HW3 + FSD owners across the EU into a collective claim against Tesla, seeking €6,800 per owner. In one week, 3,000 owners from 29 countries signed up — representing over €6 million in FSD purchases.
The timing is significant. FSD launching in Europe was always going to be the moment the HW3 problem stopped being abstract and became a concrete, quantifiable harm. European owners can now see exactly what they’re missing — their neighbors with AI4 cars are getting FSD Supervised, while they get nothing despite paying thousands of euros for the same promise.
EU consumer protection law is considerably stronger than what Tesla faces in the US. Buyers have robust rights around conformity with advertised features, and countries like the Netherlands, Germany, and France have mature collective-redress frameworks.
This isn’t the first legal action either. In October 2025, thousands of Tesla owners joined a class-action lawsuit in Australia alleging Tesla misrepresented FSD capabilities. That action was directly triggered by Musk’s HW3 admission.
“Be patient” is an extraordinary thing to tell someone who paid you €6,400 seven years ago for a product you now admit you can’t deliver on their hardware.
We’ve been covering the HW3 saga for years, and this phone call perfectly captures the core problem: Tesla has no answer. Not a bad answer — no answer. The company hasn’t announced a retrofit program, hasn’t offered refunds, hasn’t set a timeline. All it can offer is the same thing it’s been offering since 2019: wait.
The difference now is that the waiting has an endpoint, and it’s not the one Tesla promised. FSD launched in Europe last week, and HW3 owners are locked out. The harm isn’t theoretical anymore — it’s their neighbor driving with FSD while they stare at the same “coming soon” message they’ve had for seven years.
Sigtermans’ collective claim is going to grow. EU consumer law is built for exactly this scenario: a company that sold a capability it cannot deliver. Tesla’s own CEO admitted HW3 can’t support self-driving. Tesla’s own patent describes workarounds that can render the system “inoperable.” That’s not a he-said-she-said — that’s Tesla’s own paper trail.
I’m increasingly convinced this will end up in court. And when it does, “be patient” is going to look very bad in front of a European judge.
...
Read the original on electrek.co »
This newsletter is brought to you by Corelight. You can subscribe to an audio version of this newsletter as a podcast by searching for “Risky Business” in your podcatcher or subscribing via this RSS feed. You can also add the Risky Business newsletter as a Preferred Source to your Google search results by going here.
The US National Institute of Standards and Technology announced on Wednesday a new policy regarding the US National Vulnerability Database, which the agency has been struggling to keep updated with details for every new vulnerability added to the system.
Going forward, NIST says its staff will only add data—in a process called enrichment—for important vulnerabilities.
This will include three types of security flaws, which the agency says are critical to the safe operation of US government networks and the private sector.
* CVE entries for vulnerabilities listed in CISA KEV, a database of actively exploited bugs;
* CVEs in software known to be used by US federal agencies;
* and CVEs in what the agency classifies as “critical software.”
This latter category sounds restrictive, but is in fact quite broad and includes all the major software you’d expect and want to have properly enriched CVEs for. Stuff like operating systems, web browsers, security software, firewalls, backup software, and VPNs; they are all on the list [PDF], which you can also see below this post.
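Taken together, the three criteria amount to a simple triage filter. The sketch below is purely illustrative: the field names, the set contents, and the `should_enrich` helper are my own shorthand for the policy as described, not a real NVD schema or any logic NIST has published.

```python
# Illustrative triage filter for the new NVD enrichment policy:
# a CVE gets enriched only if it matches one of the three criteria.
# All names and set contents here are hypothetical examples.

KEV_IDS = {"CVE-2026-33032"}                  # CISA KEV: actively exploited bugs
FEDERAL_SOFTWARE = {"microsoft exchange"}     # software used by US federal agencies
CRITICAL_CATEGORIES = {                       # NIST's "critical software" classes
    "operating system", "web browser", "security software",
    "firewall", "backup software", "vpn",
}

def should_enrich(cve_id: str, product: str, category: str) -> bool:
    """Return True if this CVE falls under the new enrichment policy."""
    return (
        cve_id in KEV_IDS
        or product.lower() in FEDERAL_SOFTWARE
        or category.lower() in CRITICAL_CATEGORIES
    )

print(should_enrich("CVE-2026-33032", "NGINX UI", "dashboard"))        # True
print(should_enrich("CVE-2026-99999", "tiny-iot-lib", "iot firmware")) # False
```

Everything that falls through all three checks is the "CVE chaff" NIST is no longer going to enrich.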
NIST has been struggling to enrich CVEs for more than two years due to an explosion in bug discoveries and mounting costs, a situation made worse by the Trump administration's recent cuts to various DHS and CISA budgets.
Its problems started in early 2024, when a backlog of 2,100+ CVE entries left without enriched metadata ballooned to almost 30,000 by the end of the year. Despite efforts to catch up and add details to all CVEs published in the NVD, the agency is still tens of thousands of bugs behind.
The NIST announcement is a capitulation, with the agency admitting it won’t ever catch up due to its current budgetary circumstances.
It is a smart decision. Even though this may sound like blasphemy to infosec people in the vulnerability management space, the only way forward for NIST was to focus on the important bugs and give up on the CVE chaff.
Each year, there are tens of thousands of vulnerabilities being reported in all kinds of no-name software you have never heard of, in all the tiny libraries that barely have 100 stars on GitHub, and all the IoT gear and their firmware components.
The announcement is not what the vulnerability management companies wanted, since many of them relied on packaging the NVD output into their own vulnerability scanners, dashboards, and reporting tools.
With some of that output set to disappear for good, they will have to find other places to get the data, or enrich it themselves. Aikido Security’s Sooraj Shah has an excellent take on what this means for the industry.
The cybersecurity industry was expecting this to happen. At a January quarterly meeting, NIST officials talked about “rethinking” the agency’s role in analyzing software vulnerabilities, and hinted at a plan to only triage the important bugs.
NIST says that besides focusing on enriching only the big bugs, it will also stop providing its own CVSS severity scores for NVD entries, and will now show the severity score initially assigned by the organization that issued the CVE.
This opens the door for a lot of infosec drama. Some of the organizations that issue CVE numbers are also the makers of the “reported” software, and these companies are extremely likely to issue low severity scores and downplay their own bugs.
This has been happening for decades. Read enough vulnerability write-ups and you’ll find security researchers accusing companies, over and over again, of blatantly downgrading CVSS scores and mischaracterizing their own bugs to downplay their impact.
More than 48,000 vulnerabilities received a CVE number last year and NIST is giving up right before experts anticipate this number will explode with the broad adoption of AI cybersecurity agents designed to help improve vulnerability discovery.
The integration of AI vulnerability scanners is likely to yield a few major bugs, but they’re also expected to produce mountains of CVE chaff that no human team at NIST would have been able to keep up with anyway.
NIST’s new enrichment policy entered into effect this week, on Wednesday, April 15.
The main Risky Business podcast is now on YouTube with video versions of our recent episodes. Below is our latest weekly show with Pat and Adam at the helm!
Russian hackers targeted a Swedish thermal plant: A pro-Russian hacktivist group tried to disrupt a thermal power plant in western Sweden last spring. The intrusion was caught by the plant’s built-in safeguards. Swedish officials linked the group to Russia’s security services. [EnergyWatch // SVT]
Russia hacked Ukrainian prosecutors: Russian hackers have broken into the emails of more than 170 Ukrainian prosecutors. The campaign sought to gain access to investigative information. The attacks were linked to APT28, a cyber unit inside Russia’s military intelligence agency, the GRU. The same campaign also breached militaries in Greece, Romania, and Serbia. The hacks are part of a campaign spotted last month by Ctrl-Alt-Intel. [Reuters]
Grinex shuts down after hack: Russian cryptocurrency exchange Grinex has shuttered operations following a theft this week. The company claims “Western intelligence agencies” broke into its wallets and stole $13 million (1 billion rubles) worth of assets. The exchange was sanctioned by US authorities last August for helping Russia evade sanctions and laundering ransomware payments. A TRM Labs report found that Grinex was a rebrand of an older Russian crypto exchange Garantex, also sanctioned for the same things. [Wayback Machine]
Zerion blames North Korea for crypto-heist: Crypto-wallet provider Zerion has blamed a recent heist of $100,000 on North Korean hackers.
Autovista ransomware attack: A ransomware group has hit automotive data analytics company Autovista, with the attack impacting systems in Europe and Australia.
McGraw Hill breach: Hackers have leaked the personal details of 13.5 million users of educational platform McGraw Hill. The data was taken from the company’s SalesForce accounts. It was leaked after a failed extortion attempt by the ShinyHunters group. It includes details such as real names, home addresses, emails, and phone numbers.
Standard Bank breach: South Africa’s largest bank, Standard Bank, has disclosed a security breach. The bank says hackers breached an internal network storing customer data last week. The incident is the third hack of a South African bank this year. [IOL]
BlueLeaks 2.0 data is now up for sale: A hacker is selling 8.3 million confidential crime tips for $10,000 in cryptocurrency. The data was stolen earlier this year from P3 Global Intel, a software provider for US law enforcement agencies. The hacker, who goes by the name Internet Yiff Machine, initially provided the data for free to select journalists and the DDoSecrets project. The hacker says they’re selling the data because “principles are for the well-fed, and I’m unfortunately not in a great place.” [Straight Arrow News // DataBreaches.net]
Krybit hacks 0APT: The Krybit ransomware group has hacked the website of rival ransom group 0APT. The incident occurred after the 0APT group threatened to dox Krybit’s members last week. According to security firm Barricade, 0APT leaked plaintext credentials for Krybit’s ransomware backend panel, along with Bitcoin addresses and victim names. Krybit returned the favor by leaking 0APT’s entire server contents.
OpenAI announces its own private cyber model: OpenAI has released an LLM model for cybersecurity work into private testing. Thousands of verified professionals and hundreds of teams responsible for defending critical software have been invited to test the GPT‑5.4‑Cyber model. The new model has loose permissions for cybersecurity research, such as reverse-engineering and vulnerability discovery. The new limited access model is OpenAI’s response to Anthropic’s Project Glasswing and the Mythos model.
Anthropic rolls out KYC for Claude: Anthropic will ask certain Claude users to verify their identity by providing a selfie and a government ID. The company says the new identity verification check will only roll out in a “few use cases.” The checks are meant to prevent abuse and comply with legal obligations. The ID checks will be handled by Persona, the same company Discord had to cut ties with because of community backlash.
BlueSky’s mega outage: Social media network BlueSky had a prolonged outage on Thursday that was so bad, even its server status page was down—probably because they hosted it on the same infrastructure. You live and learn, I guess. [News.az]
Grok is still nudifying: xAI’s Grok is still generating nude images at users’ requests, despite a huge backlash from authorities all over the world. Just take Grok behind the shed, Elon! It’s time. [NBC News]
Nudify apps are still everywhere: Both Apple and Google are still hosting nudify apps on their stores, and their ad systems are often used to lure users to the very same apps they’re supposed to have banned. [Tech Transparency Project]
News sites block the Internet Archive: Twenty-three major news outlets are now blocking the Internet Archive’s Wayback Machine from creating copies of their content. Most cited fear the backed up pages could be used as a proxy to train AI on their content. [Tom’s Hardware]
IPv6 milestone: Global IPv6 traffic has crossed 50% for the first time at the end of last month.
IPv8 protocol proposal: A new version of the IP addressing protocol has been proposed to the Internet Engineering Task Force. The new protocol is being called IPv8 and is meant to be compatible with old IPv4 addresses. IPv8 addresses will include a prefix and an old IPv4 address. The prefix will be specific to each ASN (network operator). For old IPv4 addresses, this prefix will be 0.0.0.0. This will allow devices and networks with old IPv4 addresses to connect to IPv8 systems without any software updates required.
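The address composition described above can be sketched in a few lines. This is only a notation-level illustration of the prefix-plus-IPv4 scheme as described; the proposal’s actual wire format and the dotted separator used here are assumptions.

```python
# Sketch of the described IPv8 scheme: an address is an ASN-specific
# prefix joined with an embedded legacy IPv4 address. Legacy IPv4-only
# hosts sit under the reserved 0.0.0.0 prefix. Separator and textual
# form are illustrative assumptions, not the proposal's wire format.

def ipv8_address(asn_prefix: str, ipv4: str) -> str:
    """Compose an IPv8 address from an operator prefix and an IPv4 address."""
    return f"{asn_prefix}.{ipv4}"

def is_legacy(addr: str) -> bool:
    """Legacy IPv4 hosts appear under the reserved 0.0.0.0 prefix."""
    return addr.startswith("0.0.0.0.")

# A host behind a hypothetical operator prefix:
print(ipv8_address("12.34.56.78", "192.0.2.1"))         # 12.34.56.78.192.0.2.1
# A legacy IPv4-only host, reachable without software updates:
print(is_legacy(ipv8_address("0.0.0.0", "192.0.2.1")))  # True
```

The 0.0.0.0 prefix is what makes the backward compatibility claim work: an unmodified IPv4 device keeps its existing address, and IPv8-aware systems simply treat it as living in the reserved legacy block.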
Chrome does nothing to stop browser fingerprinting: Web privacy expert Alexander Hanff looks at the various browser fingerprinting techniques used by online trackers and how Chrome doesn’t do anything to block them.
Android gets new one-time data pickers: The next Android OS version will include two new systems that let users share a contact or their precise location a single time, without the app needing persistent access to the read-contacts and precise-geolocation permissions.
Raspberry Pi disables passwordless sudo: The Raspberry Pi project has disabled passwordless access to the sudo utility in its OS.
Some ESUs extended: Microsoft has extended the Exchange 2016/2019 Extended Security Updates (ESU) program until October this year. The ESU ended this month. Same goes for the Skype for Business ESU.
Windows adds RDP warning popups: Windows will now show a security warning popup whenever users open RDP configuration files. The popups will alert users that they are about to make dangerous changes that may allow remote attackers to connect to their PCs and steal data. Several threat actors have used malicious RDP config files in phishing operations as a way to gain a foothold inside targeted networks. Russian group APT29 is known for using this technique in espionage operations.
FCC exempts Netgear from foreign router ban: The US Federal Communications Commission has excluded Netgear from the Trump administration ban on foreign-made routers. The agency granted the exemption at the request of the US Department of War. Netgear is an American company but most of its routers are made in Southeast Asia.
More cyber EOs are coming: National Cyber Director Sean Cairncross says the Trump administration will soon sign and issue more cyber-related executive orders to help push forward the implementation of the White House’s new cybersecurity strategy. [CyberScoop]
US Tech Force is hiring cyber staff: The Trump administration is recruiting cybersecurity specialists for its new and upcoming US Tech Force agency. The Tech Force was announced at the end of last year. The plan is to recruit around 1,000 tech workers from large US corps to “modernize” the US government’s networks. The new hiring push comes after the Trump administration fired a third of CISA’s staff and plans to cut hundreds more next year. CISA also recently canceled summer internships for cyber scholarship students amid a DHS funding lapse.
Foreign internet traffic in Russia is becoming very expensive: Russian telcos will increase the price for internet traffic received from outside the country’s borders as part of measures to crack down on VPN use. [RBC]
EU launches age verification app: The EU has launched its own internally-developed age verification app. The app uses cryptographic proofs to verify a user’s age without sharing their personal data. EU officials have urged online platforms to integrate the app with their processes. Age verification is mandatory under the EU’s new Digital Services Act. The app is available for Android and iOS, and future desktop and web versions are planned. The source code is also available on GitHub.
In this Risky Business sponsor interview, Corelight’s Senior Director of Product Management, Dave Getman, tells James Wilson how Corelight Agentic Triage helps defenders stay ahead of AI-powered attacks.
DPRK laptop farmers sentenced: The US has sentenced two individuals to prison for running a laptop farm for North Korean remote IT workers. Kejia Wang and Zhenxing Wang were sentenced to 108 and 92 months in prison, respectively. Both hosted laptops at their homes in New Jersey that ran from US IPs to allow North Koreans to pose as American citizens. Authorities also indicted nine North Korean remote workers who participated in the scheme.
16yo arrested for school cyberattack: Northern Ireland authorities have arrested a 16-year-old for a cyberattack that disrupted the country’s national school IT network. The C2K platform was down at the start of the month after a cyberattack that targeted a small number of schools. More than 300,000 pupils and 20,000 teachers couldn’t access exam data, home assignments, and teaching materials for days following the incident, as officials shut down the platform to investigate. [BelfastLive]
53 DDoS-for-hire domains seized: Europol and other law enforcement agencies have seized 53 domains that hosted DDoS-for-hire services. Four suspects were also detained following 25 house searches. Authorities have also sent letters and emails to more than 75,000 users who had signed up for the services. They also worked with Google to remove ads promoting DDoS services.
UNC2465 shifts to Europe: Orange’s security team reports that a known ransomware affiliate tracked as UNC2465 has shifted its attacks to Europe. The group is currently using the SmokedHam backdoor as an initial entry point for Qilin ransomware attacks.
Black Basta offshoots target execs: A group of former Black Basta affiliates are using automated email bombing and Teams-based social engineering to target executives and senior-level employees for initial access into corporate networks. [ReliaQuest]
Hazy Hawk hijacks university subdomains: A cybercrime group has hijacked subdomains at 34 US universities and educational organizations to show pornographic spam. MIT, Harvard, Stanford, Johns Hopkins, and other large universities have had subdomains hacked. The spam campaign has been linked to Hazy Hawk, a group that hijacked CDC subdomains last year. [SH Consulting]
QEMU abused in the wild: Sophos says at least two cybercrime groups are deploying the QEMU virtualization environment on compromised networks to hide malicious activity and later deploy ransomware.
WP scanning: F5 says a badness cluster it’s been keeping an eye on has recently started mass-scans for sites running vulnerable WordPress plugins.
FTP exposure is still huge: According to Censys, there are still 6 million endpoints exposing an FTP port over the internet, almost 55 years after the protocol was created.
C2 servers in Russia: A large-scale study of the Russian web hosting space has found more than 1,200 malicious command and control servers hosted inside Russia this year. Most of the servers are for IoT malware botnets, such as Keitaro, Hajime, Mozi, and Mirai. [Hunt Intelligence]
Rhadamanthys’s secret bug: The Rhadamanthys infostealer left its command and control server APIs exposed online without authentication, allowing security researchers to track its activity for months before the Europol takedown last year. [Censys]
Direct-Sys Loader: The Cyderes team has discovered a new malware loader named Direct-Sys Loader being delivered in the wild.
PowMix botnet: Cisco Talos has spotted a new Windows botnet malware strain named PowMix, currently going on a test run in the Czech Republic.
AngrySpark: Gen Digital has spotted a new Windows rootkit named AngrySpark, already used in the wild on a UK victim’s system.
W3LL PhaaS: Group-IB published a report on W3LL, the phishing platform seized by authorities earlier this month.
ATHR platform: A cybercrime group has developed and is renting access to a platform that automates voice phishing attacks. The ATHR platform uses AI agents to call targets using preconfigured and multi-step scripts. ATHR access is being sold for $4,000 and 10% of a campaign’s profits. According to AbnormalAI, the platform is primarily being used to trick victims into revealing credentials for their online accounts.
James Pope, Corelight’s Director of Technical Marketing Engineering, demonstrates the company’s Open NDR Platform and how it combines network detections with a whole host of other data sources.
UAC-0247 and AGINGFLY: CERT-UA reported a new wave of attacks against its government agencies, hospitals, and emergency services. This activity was linked to a cluster tracked as UAC-0247. The final payload was a new infostealer named AGINGFLY.
Sapphire Sleet targets macOS: DPRK APT group Sapphire Sleet has adapted its “install this Zoom update to hear me” malware delivery technique for macOS, per a new Microsoft report.
PyPI security audit: Python’s PyPI has completed its second security audit.
Zero Day Quest 2026: Microsoft awarded $2.3 million in bug bounty rewards at this year’s edition of Zero Day Quest, its cloud and AI hacking contest.
Mythos guidance: Cisco [PDF] and the Cloud Security Alliance have issued guides on how to protect and defend networks in the face of increasingly powerful AI vulnerability discovery agents like Anthropic’s Mythos.
Mythos/Glasswing vulnerabilities: VulnCheck has sifted through its huge CVE database and believes it has tracked down some of the bugs discovered using Anthropic’s Mythos agent as part of Project Glasswing. There are 75 CVEs that mention Anthropic, 40 credited to Anthropic, but only one specifically mentions Glasswing. So far, it’s unclear if any of the Mythos-found bugs even received proper CVEs.
You can trick Claude by being an industry legend: Manifold Security tricked Claude’s GitHub bot into merging malicious code into repositories by spoofing requests under the names of famous developers.
Researcher drops another Windows zero-day: A disgruntled security researcher has published proof-of-concept code for a new Windows zero-day. The RedSun zero-day can be used to elevate privileges on Windows to SYSTEM level access. The researcher released the public exploit after a disagreement with the Microsoft team that handles its bug bounty program. The same researcher also released another Windows zero-day named BlueHammer earlier this month.
NGINX UI bug exploited in the wild: Hackers are exploiting a bug in a popular dashboard for managing NGINX web servers. Attacks began last month and are targeting the dashboard’s MCP endpoints. Tracked as CVE-2026-33032, the bug allows attackers to access the MCP endpoint without authentication and then modify the server’s config files. More than 2,600 NGINX UI dashboards are currently exposed on the internet. [Pluto Security]
RAGFlow patches bug after public disclosure: The RAGFlow AI toolkit has patched a remote code execution bug in its software almost a week after the bug was publicly disclosed by security researchers. The project initially ignored the report and only patched the issue after the researchers themselves submitted the patch code.
Dolibarr RCE: The Dolibarr CRM and ERP suite has patched an eval-based remote code execution bug (CVE-2026-22666). A write-up and POC are available via Jiva Security.
Thymeleaf RCE: A critical vulnerability has been patched in the Java template engine Thymeleaf. Tracked as CVE-2026-40478, the bug allows attackers to bypass security checks and inject malicious content in server page templates. The bug impacts all Thymeleaf versions ever released and has a wide impact since Thymeleaf is also the default template engine in the Spring Boot Java framework. [Endor Labs]
Codex hacks a smart TV: Security firm Calif has used OpenAI’s Codex agent to hack and gain root access on a Samsung smart TV.
Fabricked attack: A team of academics has developed a new attack that breaks the confidentiality of AMD’s secure enclave technology. The Fabricked attack redirects memory transactions to trick AMD’s secure co-processor into improperly initializing SEV-SNP enclaves. The novel technique allows attackers to control confidential virtual machines where each individual customer’s data is typically processed in cloud environments. AMD released patches this week as part of its Patch Tuesday. Fabricked is one of multiple AMD SEV-SNP attacks disclosed over the past two years. Others include RMPocalypse, BadRAM, Ahoi, Heracles, WireTap, BatteringRAM, and TEE.Fail.
Threat/trend reports: Check Point, CyberHUB-AM, Google Mandiant, GuidePoint Security, Kaspersky, and Sysdig have recently published reports and summaries covering various threats and infosec industry trends.
New tool—Jaspr: Google has open-sourced Jaspr, a new web development framework written in Dart.
New tool—Malfixer: Mobile security firm Cleafy has open-sourced Malfixer, a toolkit for inspecting and recovering malformed Android APK files.
New tool—RePythonNET-MCP: Security firm Sekoia has open-sourced RePythonNET-MCP, an MCP server for .NET reverse engineering automation.
New tool—PMG: DevSecOps firm SafeDep has released PMG, a tool that delays npm and Python package installs until the libraries are checked against its threat intel database.
New tool—HoneyWire: Andrea Termine has published HoneyWire, a lightweight distributed deception engine designed for internal networks.
New tool—NetWatch: Westpac’s chief engineer Matt Hartley has released NetWatch, a real-time network diagnostics tool for terminals.
In this edition of Seriously Risky Business, Tom Uren and Amberleigh Jack talk about a new Citizen Lab report into Webloc, a tool to identify and track mobile devices. It demonstrates how the collection and sale of mobile phone geolocation data presents privacy and national security risks.
In this episode of Risky Business Features, James Wilson chats to professional hacker Jamieson O’Reilly about Anthropic’s Mythos and the impact it could have on offensive security. Jamieson is CEO of DVULN and co-founder of Aether AI.
...
Read the original on risky.biz »