10 interesting stories served every morning and every evening.
When we released Claude Code, we expected developers to use it for coding. They did—and then quickly began using it for almost everything else. This prompted us to build Cowork: a simpler way for anyone—not just developers—to work with Claude in the very same way. Cowork is available today as a research preview for Claude Max subscribers on our macOS app, and we will improve it rapidly from here.
How is using Cowork different from a regular conversation? In Cowork, you give Claude access to a folder of your choosing on your computer. Claude can then read, edit, or create files in that folder. It can, for example, re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.
In Cowork, Claude completes work like this with much more agency than you’d see in a regular conversation. Once you’ve set it a task, Claude will make a plan and steadily complete it, while looping you in on what it’s up to. If you’ve used Claude Code, this will feel familiar—Cowork is built on the very same foundations. This means Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks.
When you’ve mastered the basics, you can make Cowork more powerful still. Claude can use your existing connectors, which link Claude to external information, and in Cowork we’ve added an initial set of skills that improve Claude’s ability to create documents, presentations, and other files. If you pair Cowork with Claude in Chrome, Claude can complete tasks that require browser access, too.
Cowork is designed to make using Claude for new work as simple as possible. You don’t need to keep manually providing context or converting Claude’s outputs into the right format. Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel. It feels much less like a back-and-forth and much more like leaving messages for a coworker.
In Cowork, you can choose which folders and connectors Claude can see: Claude can’t read or edit anything you don’t give it explicit access to. Claude will also ask before taking any significant actions, so you can steer or course-correct it as you need.
That said, there are still things to be aware of before you give Claude control. The main thing to know is that, by default, Claude can take potentially destructive actions (such as deleting local files) if it’s instructed to. Since there’s always some chance that Claude might misinterpret your instructions, you should give it very clear guidance around actions like these.
You should also be aware of the risk of “prompt injections”: attempts by attackers to alter Claude’s plans through content it might encounter on the internet. We’ve built sophisticated defenses against prompt injections, but agent safety—that is, the task of securing Claude’s real-world actions—is still an active area of development in the industry.
These risks aren’t new with Cowork, but it might be the first time you’re using a more advanced tool that moves beyond a simple conversation. We recommend taking precautions, particularly while you learn how it works. We provide more detail in our Help Center.
This is a research preview. We’re releasing Cowork early because we want to learn what people use it for, and how they think it could be better. We encourage you to experiment with what Cowork can do for you, and to try things you don’t expect to work: you might be surprised! As we learn more from this preview, we plan to make lots of improvements (including by adding cross-device sync and bringing it to Windows), and we’ll identify further ways to make it safer.
Claude Max subscribers can try Cowork now by downloading the macOS app, then clicking on “Cowork” in the sidebar. If you’re on another plan, you can join the waitlist for future access.
...
Read the original on claude.com »
Recently, I’ve been spending my time building an image-to-ASCII renderer. Below is the result — try dragging it around, the demo is interactive!
One thing I spent a lot of effort on is getting edges looking sharp. Take a look at this rotating cube example:
Try opening the “split” view. Notice how well the characters follow the contour of the square.
This renderer works well for animated scenes, like the ones above, but we can also use it to render static images:
The image of Saturn was generated with ChatGPT.
Then, to get better separation between different colored regions, I also implemented a cel shading-like effect to enhance contrast between edges. Try dragging the contrast slider below:
The contrast enhancement makes the separation between different colored regions far clearer. That was key to making the 3D scene above look as good as it does.
I put so much focus on sharp edges because they’re often overlooked when programmatically rendering images as ASCII. Consider this animated 3D scene from Cognition’s landing page, rendered via ASCII characters:
It’s a cool effect, especially while in motion, but take a look at those blurry edges! The characters follow the cube contours very poorly, and as a result, the edges look blurry and jagged in places:
This blurriness happens because the ASCII characters are being treated like pixels — their shape is ignored. It’s disappointing to see because ASCII art looks so much better when shape is utilized. I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.
I started building my ASCII renderer to prove to myself that it’s possible to utilize shape in ASCII rendering. In this post, I’ll cover the techniques and ideas I used to capture shape and build this ASCII renderer in detail.
We’ll start with the basics of image-to-ASCII conversion and see where the common issue of blurry edges comes from. After that, I’ll show you the approach I used to fix that and achieve sharp, high-quality ASCII rendering. At the end, we’ll improve on that by implementing the contrast enhancement effect I showed above.
Let’s get to it!
ASCII contains 95 printable characters that we can use. Let’s start off by rendering the following image containing a white circle using those ASCII characters:
ASCII art is (almost) always rendered using a monospace font. Since every character in a monospace font is equally wide and tall, we can split the image into a grid. Each grid cell will contain a single ASCII character.
The image with the circle is pixels. For the ASCII grid, I’ll pick a row height of pixels and a column width of pixels. That splits the canvas into rows and columns — an grid:
Monospace characters are typically taller than they are wide, so I made each grid cell a bit taller than it is wide.
Our task is now to pick which character to place in each cell. The simplest approach is to calculate a lightness value for each cell and pick a character based on that.
We can get a lightness value for each cell by sampling the lightness of the pixel at the cell’s center:
We want each pixel’s lightness as a numeric value between 0 and 1, but our image data consists of pixels with RGB color values.
We can use the following formula to convert an RGB color (with component values between 0 and 255) to a lightness value:
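The formula itself didn’t survive in this extract. A common choice is a weighted sum of the RGB channels; below is a sketch in TypeScript using the Rec. 709 luminance weights, normalized to the 0-to-1 range. It may not be the post’s exact formula, and the function name is mine:

```ts
// A sketch, not necessarily the post's exact formula: Rec. 709 relative
// luminance, normalized so the result lands between 0 and 1.
function rgbToLightness(r: number, g: number, b: number): number {
  return (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255;
}
```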
Now that we have a lightness value for each cell, we want to use those values to pick ASCII characters. As mentioned before, ASCII has 95 printable characters, but let’s start simple with just these characters:
We can sort them in approximate density order like so, with lower-density characters to the left, and high-density characters to the right:
I added space as the first (least dense) character.
We can then map lightness values between 0 and 1 to one of those characters like so:
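A minimal sketch of that mapping: the exact character set from the post isn’t preserved in this extract, so the ramp below is a commonly used stand-in (with space first, as noted above):

```ts
// Characters sorted from least to most dense; space comes first.
// This ramp is a stand-in, not necessarily the post's exact set.
const RAMP = " .:-=+*#%@";

function characterForLightness(lightness: number): string {
  // Scale 0..1 lightness to a ramp index, clamping 1.0 into range.
  const index = Math.min(Math.floor(lightness * RAMP.length), RAMP.length - 1);
  return RAMP[index];
}
```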
This maps low lightness values to low-density characters and high lightness values to high-density characters.
Rendering the circle from above with this method gives us:
That works… but the result is pretty ugly. We seem to always get @ for cells that fall within the circle and a space for cells that fall outside.
That is happening because we’ve pretty much just implemented nearest-neighbor downsampling. Let’s see what that means.
Downsampling, in the context of image processing, is taking a larger image (in our case, the image with the circle) and using that image’s data to construct a lower resolution image (in our case, the ASCII grid). The pixel values of the lower resolution image are calculated by sampling values from the higher resolution image.
The simplest and fastest method of sampling is nearest-neighbor interpolation, where, for each cell (pixel), we only take a single sample from the higher resolution image.
Consider the circle example again. Using nearest-neighbor interpolation, every sample either falls inside or outside of the shape, resulting in a lightness of either 0 or 1:
If, instead of picking an ASCII character for each grid cell, we color each grid cell (pixel) according to the sampled value, we get the following pixelated rendering:
This pixelated rendering is pretty much equivalent to the ASCII rendering from before. The only difference is that instead of @s we have white pixels, and instead of spaces we have black pixels.
These square, jagged looking edges are aliasing artifacts, commonly called jaggies. They’re a common result of using nearest-neighbor interpolation.
To get rid of jaggies, we can collect more samples for each cell. Consider this line:
The line’s slope on the axis is . When we pixelate it with nearest-neighbor interpolation, we get the following:
Let’s try to get rid of the jagginess by taking multiple samples within each cell and using the average sampled lightness value as the cell’s lightness. The example below lets you vary the number of samples using the slider:
With multiple samples, cells that lie on the edge of a shape will have some of their samples fall within the shape, and some outside of it. Averaging those, we get gray in-between colors that smooth the downsampled image. Below is the same example, but with an overlay showing where the samples are taken:
This method of collecting multiple samples from the larger image is called supersampling. It’s a common method of spatial anti-aliasing (avoiding jaggies at edges). Here’s what the rotating square looks like with supersampling:
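In code, the per-cell averaging might look like the sketch below, where sampleLightness is a hypothetical helper returning the source image’s lightness at a pixel coordinate:

```ts
// Hypothetical helper: lightness of the source image at (x, y).
declare function sampleLightness(x: number, y: number): number;

// Average an n × n grid of samples across one grid cell.
function supersampledCellLightness(
  cellX: number, cellY: number, // top-left corner of the cell, in pixels
  cellW: number, cellH: number, // cell dimensions, in pixels
  n: number,                    // samples per axis (n * n samples total)
): number {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      // Place each sample at the center of its sub-region of the cell.
      const x = cellX + ((i + 0.5) / n) * cellW;
      const y = cellY + ((j + 0.5) / n) * cellH;
      sum += sampleLightness(x, y);
    }
  }
  return sum / (n * n);
}
```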
Let’s look at what supersampling does for the circle example from earlier. Try dragging the sample quality slider:
The circle becomes less jagged, but the edges feel blurry. Why’s that?
Well, they feel blurry because we’re pretty much just rendering a low-resolution, pixelated image of a circle. Take a look at the pixelated view:
The ASCII and pixelated views are essentially equivalent. Both are just low-resolution versions of the original high-resolution image, scaled up to the original’s size — it’s no wonder they both look blurry.
Increasing the number of samples is insufficient. No matter how many samples we take per cell, the samples will be averaged into a single lightness value, used to render a single pixel.
And that’s the core problem: treating each grid cell as a pixel in an image. It’s an obvious and simple method, but it disregards that ASCII characters have shape.
We can make our ASCII renderings far more crisp by picking characters based on their shape. Here’s the circle rendered that way:
The characters follow the contour of the circle very well. By picking characters based on shape, we get a far higher effective resolution. The result is also more visually interesting.
Let’s see how we can implement this.
So what do I mean by shape? Well, consider the characters T, L, and O placed within grid cells:
The character T is top-heavy. Its visual density in the upper half of the grid cell is higher than in the lower half. The opposite can be said for L — it’s bottom-heavy. O is pretty much equally dense in the upper and lower halves of the cell.
We might also compare characters like L and J. The character L is heavier within the left half of the cell, while J is heavier in the right half:
We also have more “extreme” characters, such as _ and ^, that only occupy the lower or upper portion of the cell, respectively:
This is, roughly, what I mean by “shape” in the context of ASCII rendering. Shape refers to which regions of a cell a given character visually occupies.
To pick characters based on their shape, we’ll somehow need to quantify (put numbers to) the shape of each character.
Let’s start by only considering how much characters occupy the upper and lower regions of our cell. To do that, we’ll define two “sampling circles” for each grid cell — one placed in the upper half and one in the lower half:
It may seem odd or arbitrary to use circles instead of just splitting the cell into two rectangles, but using circles will give us more flexibility later on.
A character placed within a cell will overlap each of the cell’s sampling circles to some extent.
One can compute that overlap by taking a bunch of samples within the circle (for example, at every pixel). The fraction of samples that land inside the character gives us the overlap as a numeric value between 0 and 1:
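A sketch of that computation, assuming a hypothetical isGlyphPixel predicate over a rasterized rendering of the character within the cell:

```ts
// Hypothetical predicate: does the rasterized glyph cover pixel (x, y)?
declare function isGlyphPixel(x: number, y: number): boolean;

// Fraction of the sampling circle's pixels covered by the character.
function circleOverlap(cx: number, cy: number, radius: number): number {
  let inside = 0;
  let total = 0;
  for (let y = Math.floor(cy - radius); y <= Math.ceil(cy + radius); y++) {
    for (let x = Math.floor(cx - radius); x <= Math.ceil(cx + radius); x++) {
      const dx = x - cx;
      const dy = y - cy;
      if (dx * dx + dy * dy > radius * radius) continue; // outside circle
      total++;
      if (isGlyphPixel(x, y)) inside++;
    }
  }
  return total > 0 ? inside / total : 0;
}
```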
For T, we get a high overlap for the upper circle and a low one for the lower. Those overlap values form a 2-dimensional vector:
We can generate such a 2-dimensional vector for each character within the ASCII alphabet. These vectors quantify the shape of each ASCII character along these two dimensions (upper and lower). I’ll call these vectors shape vectors.
Below are some ASCII characters and their shape vectors. I’m coloring the sampling circles using the component values of the shape vectors:
We can use the shape vectors as 2D coordinates — here’s every ASCII character on a 2D plot:
Let’s say that we have our ASCII characters and their associated shape vectors in a CHARACTERS array:
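The array itself isn’t preserved in this extract; the sketch below shows one plausible layout, with purely illustrative vector values:

```ts
interface CharacterEntry {
  char: string;     // the ASCII character itself
  vector: number[]; // its precomputed shape vector
}

// Illustrative entries only; real values come from the sampling above.
const CHARACTERS: CharacterEntry[] = [
  { char: "T", vector: [0.3, 0.1] },
  { char: "L", vector: [0.1, 0.3] },
  // ...one entry per printable ASCII character
];
```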
We can then perform a nearest neighbor search like so:
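The post’s own snippet isn’t preserved here, but a straightforward version is a linear scan over CHARACTERS, keeping whichever shape vector lies closest to the lookup vector (getDistance, the Euclidean distance, is defined later in the post):

```ts
// Linear scan: return the character whose shape vector is nearest
// (by Euclidean distance) to the lookup vector.
function findBestCharacter(lookupVector: number[]): string {
  let bestChar = CHARACTERS[0].char;
  let bestDistance = Infinity;
  for (const { char, vector } of CHARACTERS) {
    const distance = getDistance(lookupVector, vector);
    if (distance < bestDistance) {
      bestDistance = distance;
      bestChar = char;
    }
  }
  return bestChar;
}
```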
The findBestCharacter function gives us the ASCII character whose shape best matches the input lookup vector.
Note: this brute-force search is not very performant. It becomes a bottleneck once we’re rendering thousands of ASCII characters every frame. I’ll talk more about this later.
To make use of this in our ASCII renderer, we’ll calculate a lookup vector for each cell in the ASCII grid and pass it to findBestCharacter to determine the character to display.
Let’s try it out. Consider the following zoomed-in circle as an example. It is split into three grid cells:
Overlaying our sampling circles, we see varying degrees of overlap:
When calculating the shape vector of each ASCII character, we took a huge number of samples. We could afford to do that because we only need to calculate those shape vectors once up front. After they’re calculated, we can use them again and again.
However, if we’re converting an animated image (e.g. canvas or video) to ASCII, we need to be mindful of performance when calculating the lookup vectors. An ASCII rendering might have hundreds or thousands of cells. Multiplying that by tens or hundreds of samples would be incredibly costly in terms of performance.
With that being said, let’s pick a sampling quality of three samples per circle, with the samples placed like so:
For the top sampling circle of the leftmost cell, we get one white sample and two black, giving us an average lightness of 1/3. Doing the same calculation for all of the sampling circles, we get the following 2D vectors:
From now on, instead of the term “lookup vectors”, I’ll call these vectors sampling vectors, since they’re sampled from the image that we’re rendering as ASCII. One sampling vector is calculated for each cell in the grid.
Anyway, we can use these sampling vectors to find the best-matching ASCII character. Let’s see what that looks like on our 2D plot — I’ll label the sampling vectors (from left to right) C0, C1, and C2:
Hmm… this is not what we want. Since all of the ASCII shape vector components are fairly small, they’re all clustered towards the bottom-left region of our plot. This makes our sampling vectors map to a few characters on the edge of the cluster.
We can fix this by normalizing the shape vectors. We’ll do that by taking the maximum value of each component across all shape vectors, and dividing the components of each shape vector by the maximum. Expressed in code, that looks like so:
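(The original snippet isn’t preserved in this extract; the sketch below follows the description above.)

```ts
// Divide each component by the maximum value that component takes
// across all shape vectors, stretching the cloud to fill the 0..1 range.
// Assumes each component is non-zero for at least one character.
function normalizeShapeVectors(vectors: number[][]): number[][] {
  const dims = vectors[0].length;
  const max = new Array(dims).fill(0);
  for (const v of vectors) {
    for (let i = 0; i < dims; i++) {
      if (v[i] > max[i]) max[i] = v[i];
    }
  }
  return vectors.map((v) => v.map((value, i) => value / max[i]));
}
```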
Here’s what the plot looks like with the shape vectors normalized:
If we now map the sampling vectors to their nearest neighbors, we get a much more sensible result:
We get ’, M and $. Let’s see how well those characters match the circle:
Nice! They match very well.
Let’s try rendering the full circle from before with the same method:
Much better than before! The picked characters follow the contour of the circle very well.
Using two sampling circles — one upper and one lower — produces a much better result than the 1-dimensional (pixelated) approach. However, it still falls short when trying to capture other aspects of a character’s shape.
For example, two circles don’t capture the shape of characters that fall in the middle of the cell. Consider -:
For -, the resulting shape vector doesn’t represent the character very well at all.
The two upper-lower sampling circles also don’t capture left-right differences, such as the difference between p and q:
We could use such differences to get better character picks, but our two sampling circles don’t capture them. Let’s add more dimensions to our shape to fix that.
Since cells are taller than they are wide (at least with the monospace font I’m using), we can use six sampling circles to cover the area of each cell quite well:
Six sampling circles capture left-right differences, such as between p and q, while also capturing differences across the top, bottom, and middle regions of the cell, differentiating ^, -, and _. They also capture the shape of “diagonal” characters like / to a reasonable degree.
One problem with this grid-like configuration for the sampling circles is that there are gaps. For example, . falls between the sampling circles:
To compensate for this, we can stagger the sampling circles vertically (e.g. lowering the left sampling circles and raising the right ones) and make them a bit larger. This causes the cell to be almost fully covered while not causing excessive overlap across the sampling circles:
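To make the layout concrete, here’s one possible configuration. The positions and radii below (as fractions of the cell’s width and height) are illustrative assumptions, not the post’s exact values:

```ts
interface SamplingCircle {
  x: number; // center, as a fraction of cell width
  y: number; // center, as a fraction of cell height
  r: number; // radius, as a fraction of cell width
}

// Two columns, three rows; left circles lowered, right circles raised.
// Values are illustrative, not the post's exact configuration.
const SAMPLING_CIRCLES: SamplingCircle[] = [
  { x: 0.25, y: 0.22, r: 0.3 }, { x: 0.75, y: 0.14, r: 0.3 }, // top
  { x: 0.25, y: 0.55, r: 0.3 }, { x: 0.75, y: 0.47, r: 0.3 }, // middle
  { x: 0.25, y: 0.88, r: 0.3 }, { x: 0.75, y: 0.80, r: 0.3 }, // bottom
];
```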
We can use the same procedure as before to generate character vectors using these sampling circles, this time yielding a 6-dimensional vector. Consider the character L:
For L, we get the vector:
I’m presenting 6-dimensional shape vectors in matrix form because it’s easier to grok geometrically, but the actual vector is a flat list of numbers.
The lightness values certainly look L-shaped! The 6D shape vector captures L’s shape very well.
Now we have a 6D shape vector for every ASCII character. Does that affect character lookups (how we find the best matching character)?
Earlier, in the findBestCharacter function, I referenced a getDistance function. That function returns the Euclidean distance between the input points. Given two 2D points a = (a₁, a₂) and b = (b₁, b₂), their Euclidean distance is √((a₁ − b₁)² + (a₂ − b₂)²).
Put into code, this looks like so:
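(The original snippet isn’t preserved here; the version below generalizes to any number of dimensions, which also covers the 6D shape vectors.)

```ts
// Euclidean distance between two points of equal dimension.
function getDistance(a: number[], b: number[]): number {
  let sumOfSquares = 0;
  for (let i = 0; i < a.length; i++) {
    const diff = a[i] - b[i];
    sumOfSquares += diff * diff;
  }
  return Math.sqrt(sumOfSquares);
}
```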
...
Read the original on alexharri.com »
Apple is joining forces with Google to power its artificial intelligence features, including a major Siri upgrade expected later this year.
The multiyear partnership will lean on Google’s Gemini and cloud technology for future Apple foundational models, according to a joint statement obtained by CNBC’s Jim Cramer.
“After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models and we’re excited about the innovative new experiences it will unlock for our users,” Apple said in a statement Monday.
The models will continue to run on Apple devices and the company’s private cloud compute, the companies added.
Apple declined to comment on the terms of the deal. Google referred CNBC to the joint statement.
In August, Bloomberg reported that Apple was in early talks with Google to use a custom Gemini model to power a new iteration of Siri. The news outlet later reported that Apple was planning to pay about $1 billion a year to utilize Google AI.
The deal is another major indicator of growing trust in Google’s accelerating AI agenda and comeback against OpenAI. In 2025, the search giant logged its best year since 2009 and surpassed Apple in market capitalization last week for the first time since 2019.
Google already pays Apple billions each year to be the default search engine on iPhones. But that lucrative partnership briefly came into question after Google was found to hold an illegal internet search monopoly.
In September, a judge ruled against a worst-case scenario outcome that could have forced Google to divest its Chrome browser business.
The decision also allowed Google to continue to make deals such as the one with Apple.
...
Read the original on www.cnbc.com »
i was at bombay airport. some dude was watching reels on full volume and laughing loudly. asking nicely doesn’t work anymore. me being me, didn’t have the courage to speak up.
so i built a tiny app that plays back the same audio it hears, delayed by ~2 seconds. asked claude, it spat out a working version in one prompt. surprisingly WORKS.
something something auditory feedback loop something something cognitive dissonance. idk i’m not a neuroscientist. all i know is it makes people shut up and that’s good enough for me.
straight up honest - originally called this “make-it-stop” but then saw @TimDarcet also built similar and named it STFU. wayyyyy better name. so stole it. sorry not sorry.
made with spite and web audio api. do whatever you want with it.
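the core of it is roughly this (a sketch of the web audio bits, not the repo’s actual source):

```ts
// mic -> 2 second delay -> speakers. sketch only, not the repo's code.
async function stfu(): Promise<void> {
  const ctx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mic = ctx.createMediaStreamSource(stream);
  const delay = new DelayNode(ctx, { delayTime: 2, maxDelayTime: 2 });
  mic.connect(delay);
  delay.connect(ctx.destination); // plays back what it hears, ~2s late
}
```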
yo, meanwhile if you are new here, you might find my other side projects kinda funny.
...
Read the original on github.com »
The FBI raided the home of a Washington Post reporter early on Wednesday in what the newspaper called a “highly unusual and aggressive” move by law enforcement, and press freedom groups condemned as a “tremendous intrusion” by the Trump administration.
Agents descended on the Virginia home of Hannah Natanson as part of an investigation into a government contractor accused of illegally retaining classified government materials.
An email sent on Wednesday afternoon to Post staff from the executive editor, Matt Murray, obtained by the Guardian, said agents turned up “unannounced”, searched her home and seized electronic devices.
“This extraordinary, aggressive action is deeply concerning and raises profound questions and concern around the constitutional protections for our work,” the email said.
“The Washington Post has a long history of zealous support for robust press freedoms. The entire institution stands by those freedoms and our work.”
“It’s a clear and appalling sign that this administration will set no limits on its acts of aggression against an independent press,” Marty Baron, the Post’s former executive editor, told the Guardian.
Murray said neither the newspaper nor Natanson were told they were the target of a justice department investigation.
Pam Bondi, the attorney general, said in a post on X that the raid was conducted by the justice department and FBI at the request of the Pentagon.
The warrant, she said, was executed “at the home of a Washington Post journalist who was obtaining and reporting classified and illegally leaked information from a Pentagon contractor. The leaker is currently behind bars.”
The statement gave no further details of the raid or investigation. Bondi added: “The Trump administration will not tolerate illegal leaks of classified information that, when reported, pose a grave risk to our nation’s national security and the brave men and women who are serving our country.”
The reporter’s home and devices were searched, and her Garmin watch, phone, and two laptop computers, one belonging to her employer, were seized, the newspaper said. It added that agents told Natanson she was not the focus of the investigation, and was not accused of any wrongdoing.
A warrant obtained by the Post cited an investigation into Aurelio Perez-Lugones, a system administrator in Maryland with a top secret security clearance who has been accused of accessing and taking home classified intelligence reports.
Natanson, the Post said, covers the federal workforce and has been a part of the newspaper’s “most high-profile and sensitive coverage” during the first year of the second Trump administration.
As the paper noted in its report, it is “highly unusual and aggressive for law enforcement to conduct a search on a reporter’s home”.
In a first-person account published last month, Natanson described herself as the Post’s “federal government whisperer”, and said she would receive calls day and night from “federal workers who wanted to tell me how President Donald Trump was rewriting their workplace policies, firing their colleagues or transforming their agency’s missions”.
“It’s been brutal,” the article’s headline said.
Natanson said her work had led to 1,169 new sources, “all current or former federal employees who decided to trust me with their stories”. She said she learned information “people inside government agencies weren’t supposed to tell me”, saying that the intensity of the work nearly “broke” her.
The federal investigation into Perez-Lugones, the Post said, involved documents found in his lunchbox and his basement, according to an FBI affidavit. The criminal complaint against him does not accuse him of leaking classified information, the newspaper said.
Press freedom groups were united in their condemnation of the raid on Wednesday.
“Physical searches of reporters’ devices, homes and belongings are some of the most invasive investigative steps law enforcement can take,” Bruce D Brown, president of the Reporters Committee for Freedom of the Press, said in a statement.
“There are specific federal laws and policies at the Department of Justice that are meant to limit searches to the most extreme cases because they endanger confidential sources far beyond just one investigation and impair public interest reporting in general.
“While we won’t know the government’s arguments about overcoming these very steep hurdles until the affidavit is made public, this is a tremendous escalation in the administration’s intrusions into the independence of the press.”
Jameel Jaffer, executive director of the Knight First Amendment Institute, demanded a public explanation from the justice department of “why it believes this search was necessary and legally permissible”.
In a statement, Jaffer said: “Any search targeting a journalist warrants intense scrutiny because these kinds of searches can deter and impede reporting that is vital to our democracy.
“Attorney General Bondi has weakened guidelines that were intended to protect the freedom of the press, but there are still important legal limits, including constitutional ones, on the government’s authority to use subpoenas, court orders, and search warrants to obtain information from journalists.
“Searches of newsrooms and journalists are hallmarks of illiberal regimes, and we must ensure that these practices are not normalized here.”
Seth Stern, chief of advocacy for the Freedom of the Press Foundation, said it was “an alarming escalation in the Trump administration’s multipronged war on press freedom” and called the warrant “outrageous”.
“The administration may now be in possession of volumes of journalist communications having nothing to do with any pending investigation and, if investigators are able to access them, we have zero faith that they will respect journalist-source confidentiality,” he said.
Tim Richardson, journalism and disinformation program director at PEN America, said: “A government action this rare and aggressive signals a growing assault on independent reporting and undermines the First Amendment.
“It is intended to intimidate sources and chill journalists’ ability to gather news and hold the government accountable. Such behavior is more commonly associated with authoritarian police states than democratic societies that recognize journalism’s essential role in informing the public.”
The Post has had a rocky relationship with the Trump administration in recent months, despite its billionaire owner, Jeff Bezos, the Amazon founder, attempting to curry favor by blocking it from endorsing Kamala Harris, the Democratic nominee, in the 2024 presidential election.
Bezos defended the action, which prompted more than 200,000 subscribers to cancel in protest.
...
Read the original on www.theguardian.com »
The Astro Technology Company — the company behind the Astro web framework — is joining Cloudflare! Adoption of the Astro web framework continues to double every year, and Astro 6 is right around the corner. With Cloudflare’s support, we’ll have more resources and fewer distractions to continue our mission to build the best framework for content-driven websites.
What this means for Astro:
* Astro continues to support a wide set of deployment targets, not just Cloudflare.
* All full-time employees of The Astro Technology Company are now employees of Cloudflare, and will continue to work on Astro full-time.
In 2021, Astro was born out of frustration. The trend at the time was that every website should be architected as an application, and then shipped to the user’s browser to render. This was not very performant, and we’ve spent the last decade coming up with more and more complex solutions to solve for that performance problem. SSR, ISR, RSC, PPR, TTI optimizations via code-splitting, tree-shaking, lazy-loading, all to generate a blocking double-data hydration payload from a pre-warmed server running halfway around the world.
Our mission to design a web framework specifically for building websites — what we call content-driven websites, to better distinguish from data-driven, stateful web applications — resonated. Now Astro is downloaded almost 1,000,000 times per week, and has been used by 100,000s of developers to build fast, beautiful websites. Today you’ll find Astro all over the web, powering major websites and even entire developer platforms for companies like Webflow, Wix, Microsoft, and Google.
Along the way, we also tried to grow a business. In 2021 we raised some money and formed The Astro Technology Company. Our larger vision was that a well-designed framework like Astro could sit at the center of a massive developer platform, with optional hosted primitives (database, storage, analytics) designed in lockstep with the framework.
We were never able to realize this vision. Attempts to introduce paid, hosted primitives into our ecosystem fell flat, and rarely justified their own existence. We considered going more directly after first-class hosting or content management for Astro, but knew we’d spend much of our time playing catchup to well-funded, savvy competitors. We kept exploring different ideas, but nothing clicked with users the same way Astro did.
It wasn’t all bad. Astro DB (our attempt to build a hosted database product for Astro projects) eventually evolved into the open, built-in Astro database client that still lives in core today. Our exploration into building an e-commerce layer with Astro was eventually open-sourced. It was rewarding work, but over the years the distraction took its toll. Each attempt at a new paid product or offering took myself and others on the project away from working on the Astro framework that developers were using and loving every day.
Last year, Dane (Cloudflare CTO) and I began to talk more seriously about the future of the web. Those conversations quickly grew into something bigger: What does the next decade look like? How do frameworks adapt to a world of AI coding and agents?
It became clear that even as web technologies evolve, content remains at the center. We realized that we’ve each been working toward this same vision from different angles:
* Cloudflare has been solving it from the infrastructure side: betting on a platform that is global by default, with fast startup, low latency, and security built-in.
* Astro has been solving it from the framework side: betting on a web framework that makes it easy to build sites that are fast by default, without overcomplicating things.
The overlap is obvious. By working together, Cloudflare gives us the backing we need to keep innovating for our users. Now we can stop spending cycles worrying about building a business on top of Astro, and start focusing 100% on the code, with a shared vision to move the web forward.
Cloudflare has been a long-time sponsor and champion of Astro. They have a proven track record of supporting great open-source projects like Astro, TanStack, and Hono without trying to capture or lock anything down. Staying open to all was a non-negotiable requirement for both us and for Cloudflare.
That is why Astro will remain free, open-source, and MIT-licensed. We will continue to run our project in the open, with an open governance model for contributors and an open community roadmap that anyone can participate in. We remain fully committed to maintaining Astro as a platform-agnostic framework, meaning we will continue to support and improve deployments for all targets—not just Cloudflare.
With Cloudflare’s resources and support, we can now return our focus fully towards building the best web framework for content-driven websites. The web is changing fast, and the bar keeps rising: performance, scale, reliability, and a better experience for the teams shipping content on the web.
You’ll see that focus reflected across our roadmap, as we prepare for the upcoming Astro 6 release (beta out now!) and our 2026 roadmap. Stay tuned!
I want to extend a huge thank you to the agencies, companies, sponsors, partners, and theme authors who chose to work with us over the years. Thank you to our initial investors — Haystack, Gradient, Uncorrelated, Lightspeed — without whom Astro likely wouldn’t exist. Thank you to everyone in our open source community who continues to help make Astro better every day. And finally, thank you to everyone who uses Astro and puts their trust in us to help them build for the web.
...
Read the original on astro.build »
Two days ago, Anthropic released the Claude Cowork research preview (a general-purpose AI agent to help anyone with their day-to-day work). In this article, we demonstrate how attackers can exfiltrate user files from Cowork by exploiting an unremediated vulnerability in Claude’s coding environment, which now extends to Cowork. The vulnerability was first identified and disclosed by Johann Rehberger in Claude.ai chat, before Cowork existed; Anthropic acknowledged it but did not remediate it.
Anthropic warns users, “Cowork is a research preview with unique risks due to its agentic nature and internet access.” Users are advised to watch for “suspicious actions that may indicate prompt injection”. However, as this feature is intended for use by the general populace, not just technical users, we agree with Simon Willison’s take:
“I do not think it is fair to tell regular non-programmer users to watch out for ‘suspicious actions that may indicate prompt injection’!”
As Anthropic has acknowledged this risk and put it on users to “avoid granting access to local files with sensitive information” (while simultaneously encouraging the use of Cowork to organize your Desktop), we have chosen to publicly disclose this demonstration of a threat users should be aware of. By raising awareness, we hope to enable users to better identify the types of ‘suspicious actions’ mentioned in Anthropic’s warning.
This attack leverages the allowlisting of the Anthropic API to achieve data egress from Claude’s VM environment (which restricts most network access).
1. The victim connects Cowork to a local folder containing confidential real estate files.
2. The victim uploads a file to Claude that contains a hidden prompt injection.
For general use cases, this is quite common; a user finds a file online and uploads it to Claude. This attack is not dependent on the injection source - other injection sources include, but are not limited to: web data from Claude for Chrome, connected MCP servers, etc. In this case, the attack uses a file posing as a Claude ‘Skill’ (although, as mentioned, it could also just be a regular document), as Skills are a generalizable file convention that users are likely to encounter, especially when using Claude.
Note: If you are familiar with Skills, they are canonically Markdown files (which users often do not heavily scrutinize). However, we demonstrate something more interesting: here, the user uploads a .docx (such as might be shared on an online forum) that poses as a Skill - the contents appear to be Markdown that was simply saved after editing in Word. In reality, this trick allows attackers to conceal the injection using 1-point font, white-on-white text, and line spacing set to 0.1 — making it effectively impossible to detect.
3. The victim asks Cowork to analyze their files using the Real Estate ‘skill’ they uploaded.
4. The injection manipulates Cowork to upload files to the attacker’s Anthropic account.
The injection tells Claude to use a ‘curl’ command to make a request to the Anthropic file upload API with the largest available file. The injection then provides the attacker’s API key, so the file will be uploaded to the attacker’s account.
At no point in this process is human approval required. If we expand the ‘Running command’ block, we can see the malicious request in detail: code executed by Claude runs in a VM that restricts outbound network requests to almost all domains, but the Anthropic API flies under the radar as trusted, allowing this attack to complete successfully.
5. The attacker’s account contains the victim’s file, allowing them to chat with it. The exfiltrated file contains financial figures and PII, including partial SSNs.
The above exploit was demonstrated against Claude Haiku. Although Claude Opus 4.5 is known to be more resilient against injections, Opus 4.5 in Cowork was successfully manipulated via indirect prompt injection to leverage the same file upload vulnerability to exfiltrate data in a test that considered a ‘user’ uploading a malicious integration guide while developing a new AI tool:
As the focus of this article was more for everyday users (and not developers), we opted to demonstrate the above attack chain instead of this one.
An interesting finding: Claude’s API struggles when a file does not match the type it claims to be. When operating on a malformed PDF (its name ends in .pdf, but it is really a text file with a few sentences in it), after trying to read it once, Claude starts throwing an API error in every subsequent message in the conversation.
We posit that it is likely possible to exploit this failure via indirect prompt injection to cause a limited denial of service attack (e.g., an injection can elicit Claude to create a malformed file, and then read it). Uploading the malformed file via the files API resulted in notifications with an error message, both in the Claude client and the Anthropic Console.
One of the key capabilities that Cowork was created for is the ability to interact with one’s entire day-to-day work environment. This includes the browser and MCP servers, granting capabilities like sending texts, controlling one’s Mac with AppleScripts, etc.
These functionalities make it increasingly likely that the model will process both sensitive and untrusted data sources (which the user does not review manually for injections), making prompt injection an ever-growing attack surface. We urge users to exercise caution when configuring Connectors. Though this article demonstrated an exploit without leveraging Connectors, we believe they represent a major risk surface likely to impact everyday users.
...
Read the original on www.promptarmor.com »
...
Read the original on mastodon.social »
The URL shortener that makes your links look as suspicious as possible.
Normal links are too trustworthy. Make them creepy.
Please see the below statement regarding the legal nastygrams:
This website is a joke. Redirect pages are in place to inform the user of what website they are being directed to. It is not designed to “weaken global cybersecurity hygiene” and does not facilitate phishing.
This website does not violate any known laws, policies, or rules, to the best of the author’s knowledge.
Valid concerns brought up in your letter have been addressed. Going forward, I would greatly appreciate if you would reach out at [email deobfuscation failed] to discuss concerns or problems with this service, rather than sending legal threats.
...
Read the original on creepylink.com »