10 interesting stories served every morning and every evening.
Tim Cook to become Apple Executive Chairman
John Ternus to become Apple CEO
Our latest model, Claude Opus 4.7, is now generally available. Opus 4.7 is a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks. Users report being able to hand off their hardest coding work—the kind that previously needed close supervision—to Opus 4.7 with confidence. Opus 4.7 handles complex, long-running tasks with rigor and consistency, pays precise attention to instructions, and devises ways to verify its own outputs before reporting back.

The model also has substantially better vision: it can see images in greater resolution. It’s more tasteful and creative when completing professional tasks, producing higher-quality interfaces, slides, and docs. And—although it is less broadly capable than our most powerful model, Claude Mythos Preview—it shows better results than Opus 4.6 across a range of benchmarks:

Last week we announced Project Glasswing, highlighting the risks—and benefits—of AI models for cybersecurity. We stated that we would keep Claude Mythos Preview’s release limited and test new cyber safeguards on less capable models first. Opus 4.7 is the first such model: its cyber capabilities are not as advanced as those of Mythos Preview (indeed, during its training we experimented with efforts to differentially reduce these capabilities). We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses. What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.

Security professionals who wish to use Opus 4.7 for legitimate cybersecurity purposes (such as vulnerability research, penetration testing, and red-teaming) are invited to join our new Cyber Verification Program.

Opus 4.7 is available today across all Claude products and our API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can use claude-opus-4-7 via the Claude API.

Claude Opus 4.7 has garnered strong feedback from our early-access testers:

In early testing, we’re seeing the potential for a significant leap for our developers with Claude Opus 4.7. It catches its own logical faults during the planning phase and accelerates execution, far beyond previous Claude models. As a financial technology platform serving millions of consumers and businesses at significant scale, this combination of speed and precision could be game-changing: accelerating development velocity for faster delivery of the trusted financial solutions our customers rely on every day.

Anthropic has already set the standard for coding models, and Claude Opus 4.7 pushes that further in a meaningful way as the state-of-the-art model on the market. In our internal evals, it stands out not just for raw capability, but for how well it handles real-world async workflows—automations, CI/CD, and long-running tasks. It also thinks more deeply about problems and brings a more opinionated perspective, rather than simply agreeing with the user.

Claude Opus 4.7 is the strongest model Hex has evaluated. It correctly reports when data is missing instead of providing plausible-but-incorrect fallbacks, and it resists dissonant-data traps that even Opus 4.6 falls for. It’s a more intelligent, more efficient Opus 4.6: low-effort Opus 4.7 is roughly equivalent to medium-effort Opus 4.6.

On our 93-task coding benchmark, Claude Opus 4.7 lifted resolution by 13% over Opus 4.6, including four tasks neither Opus 4.6 nor Sonnet 4.6 could solve. Combined with faster median latency and strict instruction following, it’s particularly meaningful for complex, long-running coding workflows.
It cuts the friction from those multi-step tasks so developers can stay in the flow and focus on building.

Based on our internal research-agent benchmark, Claude Opus 4.7 has the strongest efficiency baseline we’ve seen for multi-step work. It tied for the top overall score across our six modules at 0.715 and delivered the most consistent long-context performance of any model we tested. On General Finance—our largest module—it improved meaningfully on Opus 4.6, scoring 0.813 versus 0.767, while also showing the best disclosure and data discipline in the group. And on deductive logic, an area where Opus 4.6 struggled, Opus 4.7 is solid.

Claude Opus 4.7 extends the limit of what models can do to investigate and get tasks done. Anthropic has clearly optimized for sustained reasoning over long runs, and it shows with market-leading performance. As engineers shift from working 1:1 with agents to managing them in parallel, this is exactly the kind of frontier capability that unlocks new workflows.

We’re seeing major improvements in Claude Opus 4.7’s multimodal understanding, from reading chemical structures to interpreting complex technical diagrams. The higher resolution support is helping Solve Intelligence build best-in-class tools for life sciences patent workflows, from drafting and prosecution to infringement detection and invalidity charting.

Claude Opus 4.7 takes long-horizon autonomy to a new level in Devin. It works coherently for hours, pushes through hard problems rather than giving up, and unlocks a class of deep investigation work we couldn’t reliably run before.

For Replit, Claude Opus 4.7 was an easy upgrade decision. For the work our users do every day, we observed it achieving the same quality at lower cost—more efficient and precise at tasks like analyzing logs and traces, finding bugs, and proposing fixes. Personally, I love how it pushes back during technical discussions to help me make better decisions.
It really feels like a better coworker.

Claude Opus 4.7 demonstrates strong substantive accuracy on BigLaw Bench for Harvey, scoring 90.9% at high effort with better reasoning calibration on review tables and noticeably smarter handling of ambiguous document editing tasks. It correctly distinguishes assignment provisions from change-of-control provisions, a task that has historically challenged frontier models. Substance was consistently rated as a strength across our evaluations: correct, thorough, and well-cited.

Claude Opus 4.7 is a very impressive coding model, particularly for its autonomy and more creative reasoning. On CursorBench, Opus 4.7 is a meaningful jump in capabilities, clearing 70% versus Opus 4.6 at 58%.

For complex multi-step workflows, Claude Opus 4.7 is a clear step up: plus 14% over Opus 4.6 at fewer tokens and a third of the tool errors. It’s the first model to pass our implicit-need tests, and it keeps executing through tool failures that used to stop Opus cold. This is the reliability jump that makes Notion Agent feel like a true teammate.

In our evals, we saw a double-digit jump in accuracy of tool calls and planning in our core orchestrator agents. As users leverage Hebbia to plan and execute on use cases like retrieval, slide creation, or document generation, Claude Opus 4.7 shows the potential to improve agent decision-making in these workflows.

On Rakuten-SWE-Bench, Claude Opus 4.7 resolves 3x more production tasks than Opus 4.6, with double-digit gains in Code Quality and Test Quality. This is a meaningful lift and a clear upgrade for the engineering work our teams are shipping every day.

For CodeRabbit’s code review workloads, Claude Opus 4.7 is the sharpest model we’ve tested. Recall improved by over 10%, surfacing some of the most difficult-to-detect bugs in our most complex PRs, while precision remained stable despite the increased coverage.
It’s a bit faster than GPT-5.4 xhigh on our harness, and we’re lining it up for our heaviest review work at launch.

For Genspark’s Super Agent, Claude Opus 4.7 nails the three production differentiators that matter most: loop resistance, consistency, and graceful error recovery. Loop resistance is the most critical: a model that loops indefinitely on 1 in 18 queries wastes compute and blocks users. Lower variance means fewer surprises in prod. And Opus 4.7 achieves the highest quality-per-tool-call ratio we’ve measured.

Claude Opus 4.7 is a meaningful step up for Warp. Opus 4.6 is one of the best models out there for developers, and this model is measurably more thorough on top of that. It passed Terminal Bench tasks that prior Claude models had failed, and worked through a tricky concurrency bug Opus 4.6 couldn’t crack. For us, that’s the signal.

Claude Opus 4.7 is the best model in the world for building dashboards and data-rich interfaces. The design taste is genuinely surprising—it makes choices I’d actually ship. It’s my default daily driver now.

Claude Opus 4.7 is the most capable model we’ve tested at Quantium. Evaluated against leading AI models through our proprietary benchmarking solution, the biggest gains showed up where they matter most: reasoning depth, structured problem-framing, and complex technical work. Fewer corrections, faster iterations, and stronger outputs to solve the hardest problems our clients bring us.

Claude Opus 4.7 feels like a real step up in intelligence. Code quality is noticeably improved, it’s cutting out the meaningless wrapper functions and fallback scaffolding that used to pile up, and it fixes its own code as it goes. It’s the cleanest jump we’ve seen since the move from Sonnet 3.7 to the Claude 4 series.

For the computer-use work that sits at the heart of XBOW’s autonomous penetration testing, the new Claude Opus 4.7 is a step change: 98.5% on our visual-acuity benchmark versus 54.5% for Opus 4.6.
Our single biggest Opus pain point effectively disappeared, and that unlocks its use for a whole class of work where we couldn’t use it before.

Claude Opus 4.7 is a solid upgrade with no regressions for Vercel. It’s phenomenal on one-shot coding tasks, more correct and complete than Opus 4.6, and noticeably more honest about its own limits. It even does proofs on systems code before starting work, which is new behavior we haven’t seen from earlier Claude models.

Claude Opus 4.7 is very strong and outperforms Opus 4.6 with a 10% to 15% lift in task success for Factory Droids, with fewer tool errors and more reliable follow-through on validation steps. It carries work all the way through instead of stopping halfway, which is exactly what enterprise engineering teams need.

Claude Opus 4.7 autonomously built a complete Rust text-to-speech engine from scratch—neural model, SIMD kernels, browser demo—then fed its own output through a speech recognizer to verify it matched the Python reference. Months of senior engineering, delivered autonomously. The step up from Opus 4.6 is clear, and the codebase is public.

Claude Opus 4.7 passed three TBench tasks that prior Claude models couldn’t, and it’s landing fixes our previous best model missed, including a race condition. It demonstrates strong precision in identifying real issues, and surfaces important findings that other models either gave up on or didn’t resolve. In Qodo’s real-world code review benchmark, we observed top-tier precision.

On Databricks’ OfficeQA Pro, Claude Opus 4.7 shows meaningfully stronger document reasoning, with 21% fewer errors than Opus 4.6 when working with source information. Across our agentic reasoning over data benchmarks, it is the best-performing Claude model for enterprise document analysis.

For Ramp, Claude Opus 4.7 stands out in agent-team workflows.
We’re seeing stronger role fidelity, instruction-following, coordination, and complex reasoning, especially on engineering tasks that span tools, codebases, and debugging context. Compared with Opus 4.6, it needs much less step-by-step guidance, helping us scale the internal agent workflows our engineering teams run.

Claude Opus 4.7 is measurably better than Opus 4.6 for Bolt’s longer-running app-building work, up to 10% better in the best cases, without the regressions we’ve come to expect from very agentic models. It pushes the ceiling on what our users can ship in a single session.

Below are some highlights and notes from our early testing of Opus 4.7:

Instruction following. Opus 4.7 is substantially better at following instructions. Interestingly, this means that prompts written for earlier models can sometimes now produce unexpected results: where previous models interpreted instructions loosely or skipped parts entirely, Opus 4.7 takes the instructions literally. Users should re-tune their prompts and harnesses accordingly.

Improved multimodal support. Opus 4.7 has better vision for high-resolution images: it can accept images up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times as many as prior Claude models. This opens up a wealth of multimodal uses that depend on fine visual detail: computer-use agents reading dense screenshots, data extractions from complex diagrams, and work that needs pixel-perfect references.1

Real-world work. As well as its state-of-the-art score on the Finance Agent evaluation (see table above), our internal testing showed Opus 4.7 to be a more effective finance analyst than Opus 4.6, producing rigorous analyses and models, more professional presentations, and tighter integration across tasks. Opus 4.7 is also state-of-the-art on GDPval-AA, a third-party evaluation of economically valuable knowledge work across finance, legal, and other domains.

Memory. Opus 4.7 is better at using file system-based memory.
It remembers important notes across long, multi-session work, and uses them to move on to new tasks that, as a result, need less up-front context.

The charts below display more evaluation results from our pre-release testing, across a range of different domains:

Overall, Opus 4.7 shows a similar safety profile to Opus 4.6: our evaluations show low rates of concerning behavior such as deception, sycophancy, and cooperation with misuse. On some measures, such as honesty and resistance to malicious “prompt injection” attacks, Opus 4.7 is an improvement on Opus 4.6; in others (such as its tendency to give overly detailed harm-reduction advice on controlled substances), Opus 4.7 is modestly weaker. Our alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not fully ideal in its behavior”. Note that Mythos Preview remains the best-aligned model we’ve trained according to our evaluations. Our safety evaluations are discussed in full in the Claude Opus 4.7 System Card.

Overall misaligned behavior score from our automated behavioral audit. On this evaluation, Opus 4.7 is a modest improvement on Opus 4.6 and Sonnet 4.6, but Mythos Preview still shows the lowest rates of misaligned behavior.

In addition to Claude Opus 4.7 itself, we’re launching the following updates:

More effort control: Opus 4.7 introduces a new xhigh (“extra high”) effort level between high and max, giving users finer control over the tradeoff between reasoning and latency on hard problems. In Claude Code, we’ve raised the default effort level to xhigh for all plans.
When testing Opus 4.7 for coding and agentic use cases, we recommend starting with high or xhigh effort.

On the Claude Platform (API): as well as support for higher-resolution images, we’re also launching task budgets in public beta, giving developers a way to guide Claude’s token spend so it can prioritize work across longer runs.

In Claude Code: The new /ultrareview slash command produces a dedicated review session that reads through changes and flags bugs and design issues that a careful reviewer would catch. We’re giving Pro and Max Claude Code users three free ultrareviews to try it out. In addition, we’ve extended auto mode to Max users. Auto mode is a new permissions option where Claude makes decisions on your behalf, meaning that you can run longer tasks with fewer interruptions—and with less risk than if you had chosen to skip all permissions.

Opus 4.7 is a direct upgrade to Opus 4.6, but two changes are worth planning for because they affect token usage. First, Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings. This improves its reliability on hard problems, but it does mean it produces more output tokens. Users can control token usage in various ways: by using the effort parameter, adjusting their task budgets, or prompting the model to be more concise. In our own testing, the net effect is favorable—token usage across all effort levels is improved on an internal coding evaluation, as shown below—but we recommend measuring the difference on real traffic. We’ve written a migration guide that provides further advice on upgrading from Opus 4.6 to Opus 4.7.

Score on an internal agentic coding evaluation as a function of token usage at each effort level.
In this evaluation, the model works autonomously from a single user prompt, and results may not be representative of token usage in interactive coding. See the migration guide for more on tuning effort levels.
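As a rough back-of-envelope for the tokenizer change described above, here is a minimal sketch. Only the $5/$25 per-million-token prices and the 1.0–1.35× range come from the announcement; the 1.20× midpoint multiplier and the monthly token volumes are made-up illustrations, and real impact should be measured on actual traffic.

```python
# Back-of-envelope input-cost estimate for a workload moving to Opus 4.7.
# Prices are from the announcement ($5/M input, $25/M output tokens).
# The 1.20x tokenizer multiplier is an ASSUMED midpoint of the stated
# 1.0-1.35x range, not a measured value.

INPUT_PRICE_PER_MTOK = 5.00    # dollars per million input tokens
OUTPUT_PRICE_PER_MTOK = 25.00  # dollars per million output tokens
TOKENIZER_MULTIPLIER = 1.20    # assumed midpoint of the 1.0-1.35x range

def monthly_cost(input_tokens: float, output_tokens: float) -> float:
    """Dollar cost for a month of traffic, given raw token counts."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK

# Hypothetical workload: 800M input / 120M output tokens per month,
# as counted by the old tokenizer.
old_cost = monthly_cost(800e6, 120e6)
# Under the new tokenizer, the same text maps to more input tokens.
new_cost = monthly_cost(800e6 * TOKENIZER_MULTIPLIER, 120e6)

print(f"old: ${old_cost:,.0f}  new: ${new_cost:,.0f}")
# -> old: $7,000  new: $7,800
```

Note this sketch only scales input tokens; the announcement also says output tokens rise at higher effort levels, which you would fold in the same way once measured.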
In September 2024, Amandla Thomas-Johnson was a Ph.D. candidate studying in the U.S. on a student visa when he briefly attended a pro-Palestinian protest. In April 2025, Immigration and Customs Enforcement (ICE) sent Google an administrative subpoena requesting his data. The next month, Google gave Thomas-Johnson’s information to ICE without giving him the chance to challenge the subpoena, breaking a nearly decade-long promise to notify users before handing their data to law enforcement.
Google names a handful of exceptions to this promise (such as if Google receives a gag order from a court) that do not apply to Thomas-Johnson’s case. While ICE “requested” that Google not notify Thomas-Johnson, the request was not enforceable or mandated by a court. Today, the Electronic Frontier Foundation sent complaints to the California and New York Attorneys General asking them to investigate Google for deceptive trade practices for breaking that promise. You can read about the complaints here. Below is Thomas-Johnson’s account of his ordeal.
I thought my ordeal with U.S. immigration authorities was over a year ago, when I left the country, crossing into Canada at Niagara Falls.
By that point, the Trump administration had effectively turned federal power against international students like me. After I attended a pro-Palestine protest at Cornell University—for all of five minutes—the administration’s rhetoric about cracking down on students protesting what we saw as genocide forced me into hiding for three months. Federal agents came to my home looking for me. A friend was detained at an airport in Tampa and interrogated about my whereabouts.
I’m currently a Ph.D. student. Before that, I was a reporter. I’m a dual British and Trinidad and Tobago citizen. I have not been accused of any crime.
I believed that once I left U.S. territory, I had also left the reach of its authorities. I was wrong.
Weeks later, in Geneva, Switzerland, I received what looked like a routine email from Google. It informed me that the company had already handed over my account data to the Department of Homeland Security.
At first, I wasn’t alarmed. I had seen something similar before. An associate of mine, Momodou Taal, had received advance notice from Google and Facebook that his data had been requested, and law enforcement eventually withdrew the subpoenas before the companies turned over his data.
I assumed I would be given the same opportunity. But the language in my email was different. It was final: “Google has received and responded to legal process from a law enforcement authority compelling the release of information related to your Google Account.”
Google had already disclosed my data without telling me. There was no opportunity to contest it.
To be clear, this should not have happened this way. Google promises that it will notify users before their data is handed over in response to legal processes, including administrative subpoenas. That notice is meant to provide a chance to challenge the request. In my case, that safeguard was bypassed. My data was handed over without warning—at the request of an administration targeting students engaged in protected political speech.
Months later, my lawyer at the Electronic Frontier Foundation obtained the subpoena itself. On paper, the request focused largely on subscriber information: IP addresses, physical address, other identifiers, and session times and durations.
But taken together, these fragments form something far more powerful—a detailed surveillance profile. IP logs can be used to approximate location. Physical addresses show where you sleep. Session times would show when you were communicating with friends or family. Even without message content, the picture that emerges is intimate and invasive.
What this experience has made clear is that anyone can be targeted by law enforcement. And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Together, they can combine state power, corporate data, and algorithmic inference in ways that are difficult to see—and even harder to challenge.
The consequences of what happened to me are not abstract. I left the United States. But I do not feel that I have left its reach. Being investigated by the federal government is intimidating. Questions run through your head. Am I now a marked individual? Will I face heightened scrutiny if I continue my reporting? Can I travel safely to see family in the Caribbean?
Who, exactly, can I hold accountable?
Update: This post has been updated to include more information about Google’s exceptions to its notification policy, none of which applied to the subpoena targeting Thomas-Johnson.
Finally, great battery life in a Framework Laptop
* 20 hours: Netflix 4K streaming (250nit brightness, 30% volume, Windows 11)
* 17 hours: Active web usage (250nit brightness, 30% volume, Windows 11)
* 11 hours: Video conferencing (250nit brightness, 30% volume, Windows 11)
* 7 days: Standby without charging (Wi-Fi connected on Ubuntu)
Intel® Core™ Ultra Series 3 processors
The Framework Laptop 13 Pro runs on Intel® Core™ Ultra Series 3 processors, unlocking 20 hours of battery life,ϟ up to 64GB of LPCAMM2 LPDDR5X memory, and support for up to 8TB of PCIe Gen 5.0 NVMe storage. It’s designed to stay responsive under sustained, heavy workloads.
Power-efficient memory, made upgradeable
We’re among the first to pair Intel® Core™ Ultra Series 3 with LPCAMM2. A high-density interposer enables LPDDR5X in a modular form, delivering 7467 MT/s and high performance per watt without soldering it down.
A laptop that you own
You can customize it,
Pick your ports with the Framework Expansion Card system and install them directly into your laptop without relying on external adapters. The magnet-attach Bezel lets you customize with bold or translucent color options.
USB-C
USB-A
Audio Jack
DisplayPort
HDMI
MicroSD
SD
Storage - 250GB
Storage - 1TB
Ethernet
repair it,
A truly easy-to-repair laptop that’s built to respect your rights. Just scan the QR codes, follow the guides, and replace any part with a single tool that’s included in the box.
upgrade it.
When you’re ready for more performance, you can upgrade individual components instead of replacing your entire laptop. Install a new Mainboard for generational processor upgrades, add memory to handle heavier workloads, or expand your storage to increase capacity or enable dual booting. The Framework Marketplace makes it easy to find the compatible parts you need.
Runs Linux. Really well.
(you can also use Windows 11 if you want)
We don’t just support Linux; we live in it. Framework Laptop 13 Pro with Intel® Core™ Ultra Series 3 is our first Ubuntu Certified system. We seed development hardware and provide funding to a range of other distros like Fedora, Bazzite, NixOS, CachyOS, and more to ensure reliable support.
A sensory upgrade
13.5″ 2880x1920 Touchscreen Display
A custom 13.5″ 3:2 touchscreen display with sharp 2880×1920 resolution gives you the vertical space you need for coding and productivity. A 30–120Hz variable refresh rate keeps motion smooth while optimizing power, and with up to 700nits of brightness and a matte surface, it stays clear across a wide range of lighting conditions.
A haptic touchpad that beats your expectations
The large 123.7mm × 76.7mm Haptic Touchpad, powered by four piezoelectric actuators, delivers consistent, high-quality clicks across the surface. Feedback and gestures are fully tunable, so you can set it up exactly how you want.
The keyboard you love, now even better
With 1.5mm of key travel, the keyboard delivers deeper, more tactile feedback than most modern laptops without increasing noise. A CNC aluminum Input Cover Frame reduces deck flex for a more solid and consistent feel. Available in multiple ANSI and ISO layouts, in black, black with lavender, and black with gray and orange.
Dolby Atmos® audio
The side-firing speakers are tuned with Dolby Atmos® to deliver clear, balanced audio on Windows, ideal for calls or music while you work.
Thin, light, and fully aluminum
At just 15.85mm thick and 1.4kg, added durability doesn’t come at the cost of portability. The Top Cover, Input Cover, and Bottom Cover are now CNC machined from 6063 aluminum, increasing rigidity and durability.
296.63mm
Width
228.98mm
Depth
15.85mm
Height
1.4kg
Weight
Open source ecosystems
We’ve open sourced design files and documentation for many core components and firmware on GitHub, giving you the freedom to modify, extend, or repurpose them.
Respecting your privacy
Privacy switches
Your privacy is protected at a hardware level, with physical switches that electrically cut off the webcam and microphones whenever you need.
No crapware
We hate software bloat as much as you do. Our pre-builts ship with Ubuntu or stock Windows 11 plus the necessary drivers, and our DIY Edition lets you bring whichever operating system you’d like.
The choice is yours
Framework Laptop 13 Pro is available pre-built with Windows or Ubuntu pre-installed, or as a DIY Edition that lets you install the operating system of your choice.
Upgrade, customize, and repair
Pick up new parts and modules for your Framework Laptop 13 Pro.
Keep track of what we’re working on with the Framework Newsletter.
ϟ
Testing conducted by Framework in April 2026 using Framework Laptop 13 Pro tested with Intel® Core™ Ultra X7 358H Processor, Intel® Arc™ B390 graphics, 2.8K touchscreen display, 32GB memory and 1TB storage, with display brightness set to 250nits, display refresh rate set to 60Hz, speaker volume at 30%, Dolby Atmos® disabled, and wireless enabled. Battery life tested by streaming Netflix 4K content in the Netflix app on Windows 11 under Best Power Efficiency mode. Battery life varies by use and configuration.
Today, we’re launching Claude Design, a new Anthropic Labs product that lets you collaborate with Claude to create polished visual work like designs, prototypes, slides, one-pagers, and more.
Claude Design is powered by our most capable vision model, Claude Opus 4.7, and is available in research preview for Claude Pro, Max, Team, and Enterprise subscribers. We’re rolling out to users gradually throughout the day.
Even experienced designers have to ration exploration—there’s rarely time to prototype a dozen directions, so you limit yourself to a few. And for founders, product managers, and marketers with an idea but not a design background, creating and sharing those ideas can be daunting.
Claude Design gives designers room to explore widely and everyone else a way to produce visual work. Describe what you need and Claude builds a first version. From there, you refine through conversation, inline comments, direct edits, or custom sliders (made by Claude) until it’s right. When given access, Claude can also apply your team’s design system to every project automatically, so the output is consistent with the rest of your company’s designs.
Teams have been using Claude Design for:
* Realistic prototypes: Designers can turn static mockups into easily-shareable interactive prototypes to gather feedback and user-test, without code review or PRs.
* Product wireframes and mockups: Product Managers can sketch out feature flows and hand them off to Claude Code for implementation, or share them with designers to refine further.
* Design explorations: Designers can quickly create a wide range of directions to explore.
* Pitch decks and presentations: Founders and Account Executives can go from a rough outline to a complete, on-brand deck in minutes, and then export as a PPTX or send to Canva.
* Marketing collateral: Marketers can create landing pages, social media assets, and campaign visuals, then loop in designers to polish.
* Frontier design: Anyone can build code-powered prototypes with voice, video, shaders, 3D and built-in AI.
Your brand, built in. During onboarding, Claude builds a design system for your team by reading your codebase and design files. Every project after that uses your colors, typography, and components automatically. You can refine the system over time, and teams can maintain more than one.
Import from anywhere. Start from a text prompt, upload images and documents (DOCX, PPTX, XLSX), or point Claude at your codebase. You can also use the web capture tool to grab elements directly from your website so prototypes look like the real product.
Refine with fine-grained controls. Comment inline on specific elements, edit text directly, or use adjustment knobs to tweak spacing, color, and layout live. Then ask Claude to apply your changes across the full design.
Collaborate. Designs have organization-scoped sharing. You can keep a document private, share it so anyone in your organization with the link can view it, or grant edit access so colleagues can modify the design and chat with Claude together in a group conversation.
Export anywhere. Share designs as an internal URL within your organization, save as a folder, or export to Canva, PDF, PPTX, or standalone HTML files.
Handoff to Claude Code. When a design is ready to build, Claude packages everything into a handoff bundle that you can pass to Claude Code with a single instruction.
Over the coming weeks, we’ll make it easier to build integrations with Claude Design, so you can connect it to more of the tools your team already uses.
Claude Design is available for Claude Pro, Max, Team, and Enterprise subscribers. Access is included with your plan and uses your subscription limits, with the option to continue beyond those limits by enabling extra usage.
For Enterprise organizations, Claude Design is off by default. Admins can enable it in Organization settings.
A collection of principles and patterns that shape software systems, teams, and decisions.
56 laws
Conway’s Law
Organizations design systems that mirror their own communication structure.
Premature Optimization (Knuth)
Premature optimization is the root of all evil.
Hyrum’s Law
With a sufficient number of API users, all observable behaviors of your system will be depended on by somebody.
The Boy Scout Rule
Leave the code better than you found it.
YAGNI (You Aren’t Gonna Need It)
Don’t add functionality until it is necessary.
Brooks’s Law
Adding manpower to a late software project makes it later.
Gall’s Law
A complex system that works is invariably found to have evolved from a simple system that worked.
The Law of Leaky Abstractions
All non-trivial abstractions, to some degree, are leaky.
Tesler’s Law (Conservation of Complexity)
Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated.
The CAP Theorem
A distributed system can guarantee only two of: consistency, availability, and partition tolerance.
The Second-System Effect
Small, successful systems tend to be followed by overengineered, bloated replacements.
The Fallacies of Distributed Computing
A set of eight false assumptions that new distributed system designers often make.
Zawinski’s Law
Every program attempts to expand until it can read mail.
Dunbar’s Number
There is a cognitive limit of about 150 stable relationships one person can maintain.
Price’s Law
The square root of the total number of participants does 50% of the work.
Putt’s Law
Those who understand technology don’t manage it, and those who manage it don’t understand it.
The Peter Principle
In a hierarchy, every employee tends to rise to their level of incompetence.
The Bus Factor
The minimum number of team members whose loss would put the project in serious trouble.
The Dilbert Principle
Companies tend to promote incompetent employees to management to limit the damage they can do.
Parkinson’s Law
Work expands to fill the time available for its completion.
The Ninety-Ninety Rule
The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.
Hofstadter’s Law
It always takes longer than you expect, even when you take into account Hofstadter’s Law.
Goodhart’s Law
When a measure becomes a target, it ceases to be a good measure.
Gilb’s Law
Anything you need to quantify can be measured in some way that is better than not measuring it at all.
Murphy’s Law
Anything that can go wrong will go wrong.
Postel’s Law (The Robustness Principle)
Be conservative in what you do, be liberal in what you accept from others.
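The robustness principle is easy to see in code. A minimal Python sketch (a hypothetical flag parser; the function names are ours): accept many spellings on input, but emit exactly one canonical form on output.

```python
def parse_flag(raw: str) -> bool:
    """Liberal in what you accept: tolerate case, whitespace, common spellings."""
    normalized = raw.strip().lower()
    if normalized in {"1", "true", "yes", "on"}:
        return True
    if normalized in {"0", "false", "no", "off"}:
        return False
    raise ValueError(f"unrecognized flag value: {raw!r}")


def emit_flag(value: bool) -> str:
    """Conservative in what you do: always produce the one canonical spelling."""
    return "true" if value else "false"
```

Here `parse_flag("  YES ")` and `parse_flag("on")` both succeed, while everything the program writes out is always `"true"` or `"false"`.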
Technical Debt
Technical debt is everything that slows us down when developing software.
Linus’s Law
Given enough eyeballs, all bugs are shallow.
Kernighan’s Law
Debugging is twice as hard as writing the code in the first place.
The Test Pyramid
A project should have many fast unit tests, fewer integration tests, and only a small number of UI tests.
The Pesticide Paradox
Repeatedly running the same tests becomes less effective at finding new bugs over time.
Lehman’s Laws of Software Evolution
Software that reflects the real world must evolve, and that evolution has predictable limits.
Sturgeon’s Law
90% of everything is crap.
Amdahl’s Law
The speedup from parallelization is limited by the fraction of work that cannot be parallelized.
Gustafson’s Law
It is possible to achieve significant speedup in parallel processing by increasing the problem size.
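Both speedup laws reduce to one-line formulas. A minimal Python sketch (the function names are ours), where `p` is the parallelizable fraction of the work and `n` the number of workers:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl: S = 1 / ((1 - p) + p / n). The serial fraction caps the speedup."""
    return 1.0 / ((1.0 - p) + p / n)


def gustafson_speedup(p: float, n: int) -> float:
    """Gustafson: S = (1 - p) + p * n. The problem grows with the worker count."""
    return (1.0 - p) + p * n
```

With `p = 0.95` and 8 workers, Amdahl gives roughly a 5.9x speedup while Gustafson gives 7.65x; as `n` grows without bound, Amdahl’s speedup never exceeds `1 / (1 - p)` = 20x.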
Metcalfe’s Law
The value of a network is proportional to the square of the number of users.
DRY (Don’t Repeat Yourself)
Every piece of knowledge must have a single, unambiguous, authoritative representation.
KISS (Keep It Simple, Stupid)
Designs and systems should be as simple as possible.
SOLID
Five main guidelines that enhance software design, making code more maintainable and scalable.
The Law of Demeter
An object should only interact with its immediate friends, not strangers.
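A minimal Python sketch of “talking only to friends” (the `Wallet`/`Customer` classes are illustrative, not from the original): instead of reaching through an object into its internals, ask the immediate collaborator to do the work.

```python
class Wallet:
    def __init__(self, balance: float):
        self.balance = balance


class Customer:
    def __init__(self, wallet: Wallet):
        self._wallet = wallet

    def pay(self, amount: float) -> bool:
        """The customer deals with its immediate friend (the wallet) itself."""
        if self._wallet.balance >= amount:
            self._wallet.balance -= amount
            return True
        return False


# Violation: reaching through the customer into a stranger's internals.
#   customer._wallet.balance -= price
# Compliant: ask the immediate collaborator.
#   customer.pay(price)
```

Callers that only ever say `customer.pay(price)` stay insulated from how (or whether) the customer stores a wallet at all.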
The Principle of Least Astonishment
Software and interfaces should behave in a way that least surprises users and other developers.
The Dunning-Kruger Effect
The less you know about something, the more confident you tend to be.
Hanlon’s Razor
Never attribute to malice that which is adequately explained by stupidity or carelessness.
Occam’s Razor
The simplest explanation is often the most accurate one.
The Sunk Cost Fallacy
Sticking with a choice because you’ve invested time or energy in it, even when walking away would serve you better.
The Map Is Not the Territory
Our representations of reality are not the same as reality itself.
Confirmation Bias
A tendency to favor information that supports our existing beliefs or ideas.
Amara’s Law
We tend to overestimate the effect of a technology in the short run and underestimate the impact in the long run.
The Lindy Effect
The longer something has been in use, the more likely it is to continue being used.
First Principles Thinking
Breaking a complex problem into its most basic blocks and then building up from there.
Inversion
Solving a problem by considering the opposite outcome and working backward from it.
The Pareto Principle
80% of the problems result from 20% of the causes.
Cunningham’s Law
The best way to get the correct answer on the Internet is not to ask a question, it’s to post the wrong answer.