10 interesting stories served every morning and every evening.
Our latest model, Claude Opus 4.7, is now generally available. Opus 4.7 is a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks. Users report being able to hand off their hardest coding work—the kind that previously needed close supervision—to Opus 4.7 with confidence. Opus 4.7 handles complex, long-running tasks with rigor and consistency, pays precise attention to instructions, and devises ways to verify its own outputs before reporting back.

The model also has substantially better vision: it can see images in greater resolution. It’s more tasteful and creative when completing professional tasks, producing higher-quality interfaces, slides, and docs. And—although it is less broadly capable than our most powerful model, Claude Mythos Preview—it shows better results than Opus 4.6 across a range of benchmarks.

Last week we announced Project Glasswing, highlighting the risks—and benefits—of AI models for cybersecurity. We stated that we would keep Claude Mythos Preview’s release limited and test new cyber safeguards on less capable models first. Opus 4.7 is the first such model: its cyber capabilities are not as advanced as those of Mythos Preview (indeed, during its training we experimented with efforts to differentially reduce these capabilities). We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses. What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.

Security professionals who wish to use Opus 4.7 for legitimate cybersecurity purposes (such as vulnerability research, penetration testing, and red-teaming) are invited to join our new Cyber Verification Program.

Opus 4.7 is available today across all Claude products and our API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can use claude-opus-4-7 via the Claude API.
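For orientation, here is a minimal sketch of calling the model through the Anthropic Python SDK. The model ID is the one given above; the prompt and token limit are illustrative placeholders rather than recommendations.

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# "claude-opus-4-7" is the model ID from the announcement; the prompt and
# max_tokens value below are placeholder values for illustration only.
message = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Find the race condition in this queue implementation: ..."}
    ],
)
print(message.content[0].text)
```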
Claude Opus 4.7 has garnered strong feedback from our early-access testers:

In early testing, we’re seeing the potential for a significant leap for our developers with Claude Opus 4.7. It catches its own logical faults during the planning phase and accelerates execution, far beyond previous Claude models. As a financial technology platform serving millions of consumers and businesses at significant scale, this combination of speed and precision could be game-changing: accelerating development velocity for faster delivery of the trusted financial solutions our customers rely on every day.

Anthropic has already set the standard for coding models, and Claude Opus 4.7 pushes that further in a meaningful way as the state-of-the-art model on the market. In our internal evals, it stands out not just for raw capability, but for how well it handles real-world async workflows—automations, CI/CD, and long-running tasks. It also thinks more deeply about problems and brings a more opinionated perspective, rather than simply agreeing with the user.

Claude Opus 4.7 is the strongest model Hex has evaluated. It correctly reports when data is missing instead of providing plausible-but-incorrect fallbacks, and it resists dissonant-data traps that even Opus 4.6 falls for. It’s a more intelligent, more efficient Opus 4.6: low-effort Opus 4.7 is roughly equivalent to medium-effort Opus 4.6.

On our 93-task coding benchmark, Claude Opus 4.7 lifted resolution by 13% over Opus 4.6, including four tasks neither Opus 4.6 nor Sonnet 4.6 could solve. Combined with faster median latency and strict instruction following, it’s particularly meaningful for complex, long-running coding workflows. It cuts the friction from those multi-step tasks so developers can stay in the flow and focus on building.

Based on our internal research-agent benchmark, Claude Opus 4.7 has the strongest efficiency baseline we’ve seen for multi-step work. It tied for the top overall score across our six modules at 0.715 and delivered the most consistent long-context performance of any model we tested. On General Finance—our largest module—it improved meaningfully on Opus 4.6, scoring 0.813 versus 0.767, while also showing the best disclosure and data discipline in the group. And on deductive logic, an area where Opus 4.6 struggled, Opus 4.7 is solid.

Claude Opus 4.7 extends the limit of what models can do to investigate and get tasks done. Anthropic has clearly optimized for sustained reasoning over long runs, and it shows with market-leading performance. As engineers shift from working 1:1 with agents to managing them in parallel, this is exactly the kind of frontier capability that unlocks new workflows.

We’re seeing major improvements in Claude Opus 4.7’s multimodal understanding, from reading chemical structures to interpreting complex technical diagrams. The higher resolution support is helping Solve Intelligence build best-in-class tools for life sciences patent workflows, from drafting and prosecution to infringement detection and invalidity charting.

Claude Opus 4.7 takes long-horizon autonomy to a new level in Devin. It works coherently for hours, pushes through hard problems rather than giving up, and unlocks a class of deep investigation work we couldn’t reliably run before.

For Replit, Claude Opus 4.7 was an easy upgrade decision. For the work our users do every day, we observed it achieving the same quality at lower cost—more efficient and precise at tasks like analyzing logs and traces, finding bugs, and proposing fixes. Personally, I love how it pushes back during technical discussions to help me make better decisions. It really feels like a better coworker.

Claude Opus 4.7 demonstrates strong substantive accuracy on BigLaw Bench for Harvey, scoring 90.9% at high effort, with better reasoning calibration on review tables and noticeably smarter handling of ambiguous document editing tasks. It correctly distinguishes assignment provisions from change-of-control provisions, a task that has historically challenged frontier models. Substance was consistently rated as a strength across our evaluations: correct, thorough, and well-cited.

Claude Opus 4.7 is a very impressive coding model, particularly for its autonomy and more creative reasoning. On CursorBench, Opus 4.7 is a meaningful jump in capabilities, clearing 70% versus Opus 4.6 at 58%.

For complex multi-step workflows, Claude Opus 4.7 is a clear step up: plus 14% over Opus 4.6 with fewer tokens and a third of the tool errors. It’s the first model to pass our implicit-need tests, and it keeps executing through tool failures that used to stop Opus cold. This is the reliability jump that makes Notion Agent feel like a true teammate.

In our evals, we saw a double-digit jump in the accuracy of tool calls and planning in our core orchestrator agents. As users leverage Hebbia to plan and execute on use cases like retrieval, slide creation, or document generation, Claude Opus 4.7 shows the potential to improve agent decision-making in these workflows.
On Rakuten-SWE-Bench, Claude Opus 4.7 resolves 3x more production tasks than Opus 4.6, with double-digit gains in Code Quality and Test Quality. This is a meaningful lift and a clear upgrade for the engineering work our teams are shipping every day.

For CodeRabbit’s code review workloads, Claude Opus 4.7 is the sharpest model we’ve tested. Recall improved by over 10%, surfacing some of the most difficult-to-detect bugs in our most complex PRs, while precision remained stable despite the increased coverage. It’s a bit faster than GPT-5.4 xhigh on our harness, and we’re lining it up for our heaviest review work at launch.

For Genspark’s Super Agent, Claude Opus 4.7 nails the three production differentiators that matter most: loop resistance, consistency, and graceful error recovery. Loop resistance is the most critical: a model that loops indefinitely on 1 in 18 queries wastes compute and blocks users. Lower variance means fewer surprises in prod. And Opus 4.7 achieves the highest quality-per-tool-call ratio we’ve measured.

Claude Opus 4.7 is a meaningful step up for Warp. Opus 4.6 is one of the best models out there for developers, and this model is measurably more thorough on top of that. It passed Terminal Bench tasks that prior Claude models had failed, and worked through a tricky concurrency bug Opus 4.6 couldn’t crack. For us, that’s the signal.

Claude Opus 4.7 is the best model in the world for building dashboards and data-rich interfaces. The design taste is genuinely surprising—it makes choices I’d actually ship. It’s my default daily driver now.

Claude Opus 4.7 is the most capable model we’ve tested at Quantium. Evaluated against leading AI models through our proprietary benchmarking solution, the biggest gains showed up where they matter most: reasoning depth, structured problem-framing, and complex technical work. Fewer corrections, faster iterations, and stronger outputs to solve the hardest problems our clients bring us.

Claude Opus 4.7 feels like a real step up in intelligence. Code quality is noticeably improved: it cuts out the meaningless wrapper functions and fallback scaffolding that used to pile up, and it fixes its own code as it goes. It’s the cleanest jump we’ve seen since the move from Sonnet 3.7 to the Claude 4 series.

For the computer-use work that sits at the heart of XBOW’s autonomous penetration testing, the new Claude Opus 4.7 is a step change: 98.5% on our visual-acuity benchmark versus 54.5% for Opus 4.6. Our single biggest Opus pain point effectively disappeared, and that unlocks its use for a whole class of work where we couldn’t use it before.

Claude Opus 4.7 is a solid upgrade with no regressions for Vercel. It’s phenomenal on one-shot coding tasks, more correct and complete than Opus 4.6, and noticeably more honest about its own limits. It even does proofs on systems code before starting work, which is new behavior we haven’t seen from earlier Claude models.

Claude Opus 4.7 is very strong and outperforms Opus 4.6 with a 10% to 15% lift in task success for Factory Droids, with fewer tool errors and more reliable follow-through on validation steps. It carries work all the way through instead of stopping halfway, which is exactly what enterprise engineering teams need.
Claude Opus 4.7 autonomously built a complete Rust text-to-speech engine from scratch—neural model, SIMD kernels, browser demo—then fed its own output through a speech recognizer to verify it matched the Python reference. Months of senior engineering, delivered autonomously. The step up from Opus 4.6 is clear, and the codebase is public.

Claude Opus 4.7 passed three TBench tasks that prior Claude models couldn’t, and it’s landing fixes our previous best model missed, including a race condition. It demonstrates strong precision in identifying real issues, and surfaces important findings that other models either gave up on or didn’t resolve. In Qodo’s real-world code review benchmark, we observed top-tier precision.

On Databricks’ OfficeQA Pro, Claude Opus 4.7 shows meaningfully stronger document reasoning, with 21% fewer errors than Opus 4.6 when working with source information. Across our agentic reasoning-over-data benchmarks, it is the best-performing Claude model for enterprise document analysis.

For Ramp, Claude Opus 4.7 stands out in agent-team workflows. We’re seeing stronger role fidelity, instruction-following, coordination, and complex reasoning, especially on engineering tasks that span tools, codebases, and debugging context. Compared with Opus 4.6, it needs much less step-by-step guidance, helping us scale the internal agent workflows our engineering teams run.

Claude Opus 4.7 is measurably better than Opus 4.6 for Bolt’s longer-running app-building work, up to 10% better in the best cases, without the regressions we’ve come to expect from very agentic models. It pushes the ceiling on what our users can ship in a single session.

Below are some highlights and notes from our early testing of Opus 4.7:

Instruction following. Opus 4.7 is substantially better at following instructions. Interestingly, this means that prompts written for earlier models can sometimes now produce unexpected results: where previous models interpreted instructions loosely or skipped parts entirely, Opus 4.7 takes the instructions literally. Users should re-tune their prompts and harnesses accordingly.

Improved multimodal support. Opus 4.7 has better vision for high-resolution images: it can accept images up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times as many pixels as prior Claude models. This opens up a wealth of multimodal uses that depend on fine visual detail: computer-use agents reading dense screenshots, data extraction from complex diagrams, and work that needs pixel-perfect references.
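For readers preparing inputs for the new vision support, here is a minimal sketch, assuming the Pillow imaging library, of downscaling a local image to the stated 2,576-pixel long-edge limit before sending it to the model. The limit comes from the announcement; the file paths are placeholders.

```python
from PIL import Image

MAX_LONG_EDGE = 2576  # long-edge limit stated above (~3.75 megapixels)

def fit_to_long_edge(src_path: str, dst_path: str) -> None:
    """Shrink an image so its longest side is at most MAX_LONG_EDGE pixels."""
    img = Image.open(src_path)
    scale = MAX_LONG_EDGE / max(img.size)
    if scale < 1.0:  # only downscale; smaller images are left untouched
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    img.save(dst_path)

# Placeholder paths for illustration.
fit_to_long_edge("dense_screenshot.png", "dense_screenshot_scaled.png")
```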
Real-world work. As well as its state-of-the-art score on the Finance Agent evaluation (see table above), our internal testing showed Opus 4.7 to be a more effective finance analyst than Opus 4.6, producing more rigorous analyses and models, more professional presentations, and tighter integration across tasks. Opus 4.7 is also state-of-the-art on GDPval-AA, a third-party evaluation of economically valuable knowledge work across finance, legal, and other domains.

Memory. Opus 4.7 is better at using file system-based memory. It remembers important notes across long, multi-session work and uses them when moving on to new tasks, which as a result need less up-front context.

The charts below display more evaluation results from our pre-release testing, across a range of different domains:

Overall, Opus 4.7 shows a similar safety profile to Opus 4.6: our evaluations show low rates of concerning behavior such as deception, sycophancy, and cooperation with misuse. On some measures, such as honesty and resistance to malicious “prompt injection” attacks, Opus 4.7 is an improvement on Opus 4.6; on others (such as its tendency to give overly detailed harm-reduction advice on controlled substances), it is modestly weaker. Our alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not fully ideal in its behavior”. Note that Mythos Preview remains the best-aligned model we’ve trained, according to our evaluations. Our safety evaluations are discussed in full in the Claude Opus 4.7 System Card.

Overall misaligned behavior score from our automated behavioral audit. On this evaluation, Opus 4.7 is a modest improvement on Opus 4.6 and Sonnet 4.6, but Mythos Preview still shows the lowest rates of misaligned behavior.

In addition to Claude Opus 4.7 itself, we’re launching the following updates:

* More effort control: Opus 4.7 introduces a new xhigh (“extra high”) effort level between high and max, giving users finer control over the tradeoff between reasoning and latency on hard problems. In Claude Code, we’ve raised the default effort level to xhigh for all plans. When testing Opus 4.7 for coding and agentic use cases, we recommend starting with high or xhigh effort.

* On the Claude Platform (API): as well as support for higher-resolution images, we’re also launching task budgets in public beta, giving developers a way to guide Claude’s token spend so it can prioritize work across longer runs.

* In Claude Code: the new /ultrareview slash command produces a dedicated review session that reads through changes and flags bugs and design issues that a careful reviewer would catch. We’re giving Pro and Max Claude Code users three free ultrareviews to try it out. In addition, we’ve extended auto mode to Max users. Auto mode is a new permissions option where Claude makes decisions on your behalf, meaning you can run longer tasks with fewer interruptions—and with less risk than if you had chosen to skip all permissions.

Opus 4.7 is a direct upgrade to Opus 4.6, but two changes are worth planning for because they affect token usage. First, Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings. This improves its reliability on hard problems, but it also means more output tokens. Users can control token usage in several ways: with the effort parameter, by adjusting task budgets, or by prompting the model to be more concise. In our own testing, the net effect is favorable—token usage across all effort levels improved on an internal coding evaluation, as shown below—but we recommend measuring the difference on real traffic. We’ve written a migration guide with further advice on upgrading from Opus 4.6 to Opus 4.7.
Score on an internal agentic coding evaluation as a function of token usage at each effort level. In this evaluation, the model works autonomously from a single user prompt, and results may not be representative of token usage in interactive coding. See the migration guide for more on tuning effort levels.
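To make the token-usage planning concrete, here is a back-of-the-envelope sketch combining the announced pricing with the stated 1.0–1.35× tokenizer range. The traffic volumes are hypothetical, and the sketch deliberately ignores the separate effect of increased thinking on output tokens, which the migration guide recommends measuring on real traffic.

```python
# Announced pricing: $5 per million input tokens, $25 per million output tokens.
INPUT_USD_PER_MTOK = 5.00
OUTPUT_USD_PER_MTOK = 25.00

def cost_usd(input_tokens: float, output_tokens: float) -> float:
    """Dollar cost of a given token volume at the announced Opus pricing."""
    return (input_tokens * INPUT_USD_PER_MTOK
            + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# Hypothetical monthly traffic, as counted by the Opus 4.6 tokenizer.
base_input, base_output = 400e6, 80e6
print(f"Opus 4.6 baseline: ${cost_usd(base_input, base_output):,.2f}")

# The same text can map to roughly 1.0-1.35x as many tokens under the
# updated tokenizer, so bracket the cost across that stated range.
for ratio in (1.0, 1.2, 1.35):
    c = cost_usd(base_input * ratio, base_output * ratio)
    print(f"tokenizer ratio {ratio:.2f}x: ${c:,.2f}")
```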
...
Read the original on www.anthropic.com »
In September 2024, Amandla Thomas-Johnson was a Ph.D. candidate studying in the U.S. on a student visa when he briefly attended a pro-Palestinian protest. In April 2025, Immigration and Customs Enforcement (ICE) sent Google an administrative subpoena requesting his data. The next month, Google gave Thomas-Johnson’s information to ICE without giving him the chance to challenge the subpoena, breaking a nearly decade-long promise to notify users before handing their data to law enforcement.
Google names a handful of exceptions to this promise (such as if Google receives a gag order from a court) that do not apply to Thomas-Johnson’s case. While ICE “requested” that Google not notify Thomas-Johnson, the request was not enforceable or mandated by a court. Today, the Electronic Frontier Foundation sent complaints to the California and New York Attorneys General asking them to investigate Google for deceptive trade practices for breaking that promise. You can read about the complaints here. Below is Thomas-Johnson’s account of his ordeal.
I thought my ordeal with U.S. immigration authorities was over a year ago, when I left the country, crossing into Canada at Niagara Falls.
By that point, the Trump administration had effectively turned federal power against international students like me. After I attended a pro-Palestine protest at Cornell University—for all of five minutes—the administration’s rhetoric about cracking down on students protesting what we saw as genocide forced me into hiding for three months. Federal agents came to my home looking for me. A friend was detained at an airport in Tampa and interrogated about my whereabouts.
I’m currently a Ph.D. student. Before that, I was a reporter. I’m a dual British and Trinidad and Tobago citizen. I have not been accused of any crime.
I believed that once I left U.S. territory, I had also left the reach of its authorities. I was wrong.
Weeks later, in Geneva, Switzerland, I received what looked like a routine email from Google. It informed me that the company had already handed over my account data to the Department of Homeland Security.
At first, I wasn’t alarmed. I had seen something similar before. An associate of mine, Momodou Taal, had received advance notice from Google and Facebook that his data had been requested, and law enforcement eventually withdrew the subpoenas before the companies turned over his data.
I assumed I would be given the same opportunity. But the language in my email was different. It was final: “Google has received and responded to legal process from a law enforcement authority compelling the release of information related to your Google Account.”
Google had already disclosed my data without telling me. There was no opportunity to contest it.
To be clear, this should not have happened this way. Google promises that it will notify users before their data is handed over in response to legal processes, including administrative subpoenas. That notice is meant to provide a chance to challenge the request. In my case, that safeguard was bypassed. My data was handed over without warning—at the request of an administration targeting students engaged in protected political speech.
Months later, my lawyer at the Electronic Frontier Foundation obtained the subpoena itself. On paper, the request focused largely on subscriber information: IP addresses, physical address, other identifiers, and session times and durations.
But taken together, these fragments form something far more powerful—a detailed surveillance profile. IP logs can be used to approximate location. Physical addresses show where you sleep. Session times would show when you were communicating with friends or family. Even without message content, the picture that emerges is intimate and invasive.
What this experience has made clear is that anyone can be targeted by law enforcement. And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Together, they can combine state power, corporate data, and algorithmic inference in ways that are difficult to see—and even harder to challenge.
The consequences of what happened to me are not abstract. I left the United States. But I do not feel that I have left its reach. Being investigated by the federal government is intimidating. Questions run through your head. Am I now a marked individual? Will I face heightened scrutiny if I continue my reporting? Can I travel safely to see family in the Caribbean?
Who, exactly, can I hold accountable?
Update: This post has been updated to include more information about Google’s exceptions to their notification policy, none of which applied to the subpoena targeting Thomas-Johnson.
...
Read the original on www.eff.org »
Tim Cook to become Apple Executive Chairman
John Ternus to become Apple CEO
CUPERTINO, CALIFORNIA: Apple announced that Tim Cook will become executive chairman of Apple’s board of directors and John Ternus, senior vice president of Hardware Engineering, will become Apple’s next chief executive officer effective on September 1, 2026. The transition, which was approved unanimously by the Board of Directors, follows a thoughtful, long-term succession planning process.
Cook will continue in his role as CEO through the summer as he works closely with Ternus on a smooth transition. As executive chairman, Cook will assist with certain aspects of the company, including engaging with policymakers around the world.
“It has been the greatest privilege of my life to be the CEO of Apple and to have been trusted to lead such an extraordinary company. I love Apple with all of my being, and I am so grateful to have had the opportunity to work with a team of such ingenious, innovative, creative, and deeply caring people who have been unwavering in their dedication to enriching the lives of our customers and creating the best products and services in the world,” said Cook. “John Ternus has the mind of an engineer, the soul of an innovator, and the heart to lead with integrity and with honor. He is a visionary whose contributions to Apple over 25 years are already too numerous to count, and he is without question the right person to lead Apple into the future. I could not be more confident in his abilities and his character, and I look forward to working closely with him on this transition and in my new role as executive chairman.”
“I am profoundly grateful for this opportunity to carry Apple’s mission forward,” said Ternus. “Having spent almost my entire career at Apple, I have been lucky to have worked under Steve Jobs and to have had Tim Cook as my mentor. It has been a privilege to help shape the products and experiences that have changed so much of how we interact with the world and with one another. I am filled with optimism about what we can achieve in the years to come, and I am so happy to know that the most talented people on earth are here at Apple, determined to be part of something bigger than any one of us. I am humbled to step into this role, and I promise to lead with the values and vision that have come to define this special place for half a century.”
Arthur Levinson, who has been Apple’s non-executive chairman for the past 15 years, will become its lead independent director on September 1, 2026. Ternus will join the board of directors, also effective September 1, 2026.
“Tim’s unprecedented and outstanding leadership has transformed Apple into the world’s best company. He’s introduced groundbreaking products and services time and again, and his integrity and values are infused into everything Apple does,” said Levinson. “On behalf of the entire board of directors, we are incredibly grateful for his countless contributions to Apple and the world, and we are thrilled he will now be executive chairman. We believe John is the best possible leader to succeed Tim, and as he transitions to CEO, we know his love of Apple, his leadership, deep technical knowledge, and relentless focus on creating great products will help lead Apple to an extraordinary future.”
“I want to thank Art for the incredible work he has done leading the board of directors for the past 15 years,” said Cook. “I have always found his advice to be invaluable and I appreciate his thoughtfulness and his unwavering dedication to the company. I am grateful he will serve as our lead independent director, and I look forward to working with him in my new role.”
Tim Cook joined Apple in 1998. He became CEO in 2011 and has overseen the introduction of numerous products and services, including new categories like Apple Watch, AirPods, and Apple Vision Pro, and services ranging from iCloud and Apple Pay to Apple TV and Apple Music. He was also instrumental in expanding existing product lines. Under Cook’s leadership Apple has grown from a market capitalization of approximately $350 billion to $4 trillion, representing a more than 1,000% increase, and yearly revenue has nearly quadrupled, from $108 billion in fiscal year 2011 to more than $416 billion in fiscal year 2025. The company has expanded its global footprint substantially, particularly in emerging markets; it is now in more than 200 countries and territories. Apple operates over 500 retail stores and has more than doubled the number of countries in which its customers can visit an Apple Store. During his tenure, Apple has grown by more than 100,000 team members and increased its active installed base to more than 2.5 billion devices.
Apple Services has been a major focus area of Cook’s, and during his tenure the category has grown to become a more than $100 billion business, the equivalent of a Fortune 40 company. Cook was also instrumental in creating the wearables category at Apple, which now includes the world’s most popular watch and headphones, and which has served as the foundation for Apple’s remarkable impact on the health and safety of its users. Under Cook’s leadership, Apple also transitioned to Apple-designed silicon, enabling the company to own more of its primary technology and deliver industry-leading gains in power efficiency and performance that directly benefit users across its products.
Cook has made Apple’s core values even more central to the company’s decision making and product development. Under his leadership, the company reduced its carbon footprint by more than 60 percent below 2015 levels during a period in which revenue nearly doubled. Cook, who has long advocated for privacy as a fundamental human right, has made privacy and security imperative at Apple, setting a standard for user protection that continues to set the company apart from the rest of the technology industry. He has also pushed for continued innovation in the accessibility space, believing that Apple products should be made for everyone. And he has made central to his leadership the notion that Apple should be a place where everyone can feel they belong and where everyone is treated with dignity and respect.
Ternus joined Apple’s product design team in 2001 and became a vice president of Hardware Engineering in 2013. He joined the executive team in 2021 as senior vice president of Hardware Engineering. Throughout his tenure at Apple, Ternus has overseen hardware engineering work on a variety of groundbreaking products across every category. He was instrumental in the introduction of multiple new product lines, including iPad and AirPods, as well as many generations of products across iPhone, Mac, and Apple Watch.
Ternus’s work on Mac has helped the category become more powerful and more popular globally than at any time in its 40-year history. That includes the recent introduction of MacBook Neo, an all-new laptop that makes the Mac experience even more accessible to more people around the world. This past fall, his team’s efforts were on full display with the introduction of a redefined iPhone lineup, including the incredibly powerful iPhone 17 Pro and Pro Max, the radically thin and durable iPhone Air, and the iPhone 17, which has been an incredible upgrade for users. Under his leadership, his team also drove advancements in AirPods to make them the world’s best in-ear headphones, with unprecedented active noise cancellation, as well as the capability to become an all-in-one hearing health system that can serve as over-the-counter hearing aids.
Ternus led much of the company’s focus in areas like reliability and durability, introducing new techniques that have made Apple products remarkably resilient. He has also driven much of Apple’s innovation in materials and hardware design that have reduced the carbon footprint of its products, including the creation of a new, recycled aluminum compound that has been introduced across multiple product lines, the use of 3-D printed titanium in Apple Watch Ultra 3, and innovations in repairability that have increased the lifespans of several Apple products.
Prior to Apple, Ternus worked as a mechanical engineer at Virtual Research Systems. He holds a bachelor’s degree in Mechanical Engineering from the University of Pennsylvania.
This press release contains forward-looking statements, within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements include without limitation those about Apple’s executive succession plans. These statements involve risks and uncertainties, and actual results may differ materially from any future results expressed or implied by the forward-looking statements. More information regarding potential risks and other factors that could affect the company are included in Apple’s filings with the SEC, including in the “Risk Factors” and “Management’s Discussion and Analysis of Financial Condition and Results of Operations” sections of Apple’s most recently filed periodic reports on Form 10-K and Form 10-Q and subsequent filings. Apple assumes no obligation to update any forward-looking statements or information, which speak only as of the date they are made.
About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV+. Apple’s more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.
© 2026 Apple Inc. All rights reserved. Apple, the Apple logo, Apple Watch, AirPods, Apple Vision Pro, iCloud, Apple Pay, Apple TV, Apple Music, Apple Store, iPad, iPhone, Mac, MacBook Neo, and iPhone Air are trademarks of Apple. Other company and product names may be trademarks of their respective owners.
...
Read the original on www.apple.com »
Today, we’re launching Claude Design, a new Anthropic Labs product that lets you collaborate with Claude to create polished visual work like designs, prototypes, slides, one-pagers, and more.
Claude Design is powered by our most capable vision model, Claude Opus 4.7, and is available in research preview for Claude Pro, Max, Team, and Enterprise subscribers. We’re rolling out to users gradually throughout the day.
Even experienced designers have to ration exploration—there’s rarely time to prototype a dozen directions, so you limit yourself to a few. And for founders, product managers, and marketers with an idea but not a design background, creating and sharing those ideas can be daunting.
Claude Design gives designers room to explore widely and everyone else a way to produce visual work. Describe what you need and Claude builds a first version. From there, you refine through conversation, inline comments, direct edits, or custom sliders (made by Claude) until it’s right. When given access, Claude can also apply your team’s design system to every project automatically, so the output is consistent with the rest of your company’s designs.
Teams have been using Claude Design for:
* Realistic prototypes: Designers can turn static mockups into easily shareable interactive prototypes to gather feedback and user-test, without code review or PRs.
* Product wireframes and mockups: Product Managers can sketch out feature flows and hand them off to Claude Code for implementation, or share them with designers to refine further.
* Design explorations: Designers can quickly create a wide range of directions to explore.
* Pitch decks and presentations: Founders and Account Executives can go from a rough outline to a complete, on-brand deck in minutes, and then export as a PPTX or send to Canva.
* Marketing collateral: Marketers can create landing pages, social media assets, and campaign visuals, then loop in designers to polish.
* Frontier design: Anyone can build code-powered prototypes with voice, video, shaders, 3D and built-in AI.
Your brand, built in. During onboarding, Claude builds a design system for your team by reading your codebase and design files. Every project after that uses your colors, typography, and components automatically. You can refine the system over time, and teams can maintain more than one.
Import from anywhere. Start from a text prompt, upload images and documents (DOCX, PPTX, XLSX), or point Claude at your codebase. You can also use the web capture tool to grab elements directly from your website so prototypes look like the real product.
Refine with fine-grained controls. Comment inline on specific elements, edit text directly, or use adjustment knobs to tweak spacing, color, and layout live. Then ask Claude to apply your changes across the full design.
Collaborate. Designs have organization-scoped sharing. You can keep a document private, share it so anyone in your organization with the link can view it, or grant edit access so colleagues can modify the design and chat with Claude together in a group conversation.
Export anywhere. Share designs as an internal URL within your organization, save as a folder, or export to Canva, PDF, PPTX, or standalone HTML files.
Handoff to Claude Code. When a design is ready to build, Claude packages everything into a handoff bundle that you can pass to Claude Code with a single instruction.
Over the coming weeks, we’ll make it easier to build integrations with Claude Design, so you can connect it to more of the tools your team already uses.
Claude Design is available for Claude Pro, Max, Team, and Enterprise subscribers. Access is included with your plan and uses your subscription limits, with the option to continue beyond those limits by enabling extra usage.
For Enterprise organizations, Claude Design is off by default. Admins can enable it in Organization settings.
...
Read the original on www.anthropic.com »
TLDR: Despite claiming to backup all your data, Backblaze quietly stopped backing up OneDrive and Dropbox folders - along with potentially many other things.
For ten years I have been using Backblaze for my personal computer backup. Before 2015 I would back up files to one of two large external hard discs. I rotated these drives between two locations: first my father’s house, then, after I moved to the UK, my office drawers.
In 2015 Backblaze seemed like a good bet. Unlike Crashplan, their software wasn’t a bloated Java app, but they did have unlimited storage. If you could cram it into your PC, they would back it up. With their yearly hard drive reviews making good press and plenty of personal recommendations from my friends and colleagues, their service sounded great. I installed the software, ran it for several weeks, and sure enough my data was safely stored in their cloud.
I had further reason to be impressed when, several years later, one of my hard drives failed. I made use of their “send me a hard drive with my stuff on it” service. A drive turned up filled with my precious data. That for me was proof that this system worked, and that it worked well.
And so I recommended Backblaze for years. What do you do for backup? I would extoll the virtues of Backblaze, and they made many sales from such recommendations.
There were a few things I didn’t like. The app could use a lot of memory, especially after a large import of photographs. The website, which I often used to restore single files or folders, was slow and clunky to use. The Windows app in particular had an early-2000s aesthetic and cramped lists. There was the time they leaked all your filenames to Facebook, but they probably fixed that.
But no matter, small problems for the peace of mind of having all my files backed up.
Backup software is meant to back up your files. Which files? Well, the files you need. Given that everyone is different, with different workflows and filetypes, the ideal thing is to back up all your files. No backup provider knows what I will need in the future. The provider must plan accordingly.
My first troubling discovery came in 2025, when I made several errors, then did a push -f to GitHub and blew away the git history of a half-decade-old repo. No data was lost, but the log of changes was. No problem, I thought, I’ll just restore it from Backblaze. Sadly it was not to be. At some point Backblaze had started to ignore .git folders.
This annoyed me. Firstly, I needed that folder, and Backblaze had let me down. Secondly, within the Backblaze preferences I could find no way to re-enable it. In fact, looking at the list of exclusions, I could find no mention of .git whatsoever.
This made me wonder - I had checked the exclusions list when I installed Backblaze nine years before. Had I missed it? Had I missed anything else?
Well, lesson learned, I guess. But then a week ago I came across this thread on Reddit: “Doesn’t back up Dropbox folder??” A user was surprised to find their Dropbox folder no longer being backed up. Alarmed, I logged into Backblaze, and lo and behold, my OneDrive folder was missing.
Backblaze has one job, and apparently they are unable to do that job. Back up my stuff. But they have decided not to.
Let’s take an aside.
A reasonable person might point out those files on OneDrive are already being backed up - by OneDrive! No. Dropbox and OneDrive are for file syncing - syncing your files to the cloud. They offer limited protection. OneDrive and Dropbox only retain deleted files for one month. Backblaze has one-year file retention, or, if you pay per GB, unlimited retention. While OneDrive retains version changes for longer, Dropbox only retains version changes for a month - again, unless you pay for more. Your files are less secure and less backed up when you stick them in a cloud storage provider’s folder than when they just sit on your desktop.
And that’s assuming your cloud provider is playing ball. If Microsoft or Dropbox bans your account you may find yourself with no backup whatsoever.
For me the larger issue is they never told us. My OneDrive folder sits at 383GB. You would think that, having decided to no longer back this up, I might get an email, an alert, or some other notification. Of course not.
Nestled into their release notes under “Improvements” we see:
The Backup Client now excludes popular cloud storage providers from backup, including both mount points and cache directories. This prevents performance issues, excessive data usage, and unintended uploads from services like OneDrive, Google Drive, Dropbox, Box, iDrive, and others. This change aligns with Backblaze’s policy to back up only local and directly connected storage.
First, I would hardly call this change in policy an improvement; it’s hard to imagine anyone reading this as anything other than a downgrade in service. Secondly, does Backblaze believe most of its users are reading their release notes?
And if you joined today and looked at their list of file exclusions you would find no reference to Dropbox or OneDrive. No mention of Git either.
Here’s the thing: today they don’t back up Git or OneDrive. Who’s to say tomorrow they won’t add to the list? Maybe some obscure file format that’s critical to your workflow. Or they will ignore a file extension that just happens to be the same as one used by your DAW or 3D modelling software. And they won’t tell you this. They won’t even list it on their site.
By deciding not to back up everything, Backblaze has made it as if they are backing up nothing.
But really this feels like a promise broken. Back in 2015 their website proudly proclaimed:
All user data included by default
No restrictions on file type or size
Protect the digital memories and files that matter most to you.
File backup is a matter of trust. You are paying a monthly fee so that if and when things go wrong you can get your data back. By silently changing the rules, Backblaze has not simply eroded my trust, but swept it away.
I wrote this to warn you - Backblaze is no longer doing their part, they are no longer backing up your data. Some of your data sure, but not all of it.
Finally let me leave you with Backblaze’s own words from 2015:
They promised to simplify backup. They succeeded - they don’t even do the backup part anymore.
...
Read the original on rareese.com »
The Photo page brings Hollywood’s most advanced color tools to still photography for the first time! Whether you’re a professional colorist looking to apply your skills to fashion shoots and weddings, or a photographer who wants to work beyond the limits of traditional photo applications, the Photo page unlocks the tools you need. Start with familiar photo tools including white balance, exposure and primary color adjustments, then switch to the Color page for access to the full DaVinci color grading toolset trusted by Hollywood’s best colorists! You can use DaVinci’s AI toolset as well as Resolve FX and Fusion FX. GPU acceleration lets you export faster than ever before!
For photographers, the Photo page offers a familiar set of tools alongside DaVinci’s powerful color grading capabilities. It includes native RAW support for Canon, Fujifilm, Nikon, Sony and even iPhone ProRAW. All image processing takes place at source resolution up to 32K, or over 400 megapixels, so you’re never limited to project resolution. Familiar basic adjustments including white balance, exposure, color and saturation give you a comfortable starting point. With non-destructive processing you can reframe, crop and re-interpret your original sensor data at any time. And with GPU acceleration, entire albums can be processed dramatically faster than conventional photo applications!
The Photo page Inspector gives you precise control over the transform and cropping parameters of your images. Reframe and crop non-destructively at the original source resolution and aspect ratio, so you’re never restricted to a fixed timeline size! Zoom, position, rotate and flip images with full transform controls and use the cropping parameters to trim the edges of any image with precision. Reframe a shot to improve composition, adjust for a specific ratio for print or social media use, or simply remove unwanted elements from the edges of a frame. All adjustments can be refined or reset at any time without ever affecting the original source file!
DaVinci Resolve is the world’s only post production software that lets everyone work together on the same project at the same time! Built on a powerful cloud based workflow, you can share albums, all associated metadata and tags, as well as grades and effects with colorists, photographers and retouchers anywhere in the world. Blackmagic Cloud syncing keeps every collaborator up to date with the latest version of your image library in real time, and remote reviewers can approve grades offsite without needing to be in the same room. Hollywood colorists can even grade live fashion shoots remotely, all while the photographer is still on set!
The Photo page gives you everything you need to manage your entire image library from import to completion. You can import photos directly, from your Apple Photos library or Lightroom, and organize them with tags, ratings, favorites and keywords for fast, flexible management of even the largest libraries. It supports all standard RAW files and image types. AI IntelliSearch lets you instantly search across your entire project to find exactly what you’re looking for, from objects to people to animals! Albums allow you to build and manage collections for any project and with a single click you can switch between your photo library and your color grading workflow!
Albums are a powerful way to build and manage photo collections directly in DaVinci Resolve. You can add images manually to each album or organize by date, camera, star rating, EXIF data and more. Powerful filter and sort tools give you total control over how your collection is arranged. The thumbnail view displays each image’s graded version alongside its file name and source clip format so you can see your grades at a glance. Create multiple grade versions of any image, all referencing the original source file, so you can explore different looks without ever duplicating a file. Plus, grades applied to one photo can be instantly copied across others in the album for a fast, consistent look!
Connect Sony or Canon cameras directly to DaVinci Resolve for tethered shooting with full live view! Adjust camera settings including ISO, exposure and white balance without leaving the page and save image capture presets to establish a consistent look before you shoot. Images can be captured directly into an album, with albums created automatically during capture so your library is perfectly organized from the moment you start shooting. Grade images as they arrive using DaVinci Resolve’s extensive color toolset and use a hardware panel for hands-on creative control in a collaborative shoot. That means you can capture, grade and organize an entire shoot without leaving DaVinci Resolve!
The Photo page gives you access to over 100 GPU and CPU accelerated Resolve FX and specialty AI tools for still image work. They’re organized by category in the Open FX library and cover everything from color effects, blurs and glows to image repair, skin refinement and cinematic lighting tools. These are the same tools used by Hollywood colorists and VFX artists on the world’s biggest productions, now available for still images. To add an effect, drag it to any node. Whether you’re making subtle beauty refinements for a fashion shoot or applying dramatic film looks and atmospheric lighting effects emulating the looks of a Hollywood feature, the Photo page has the tools you need!
Magic Mask makes precise selections of subjects or backgrounds, while Depth Map generates a 3D map of your scene to separate foreground and background without manual masking. Use together to grade different depths of an image independently for results that have never before been possible for stills!
Add a realistic light source to any photo after capture with Relight FX. Relight analyzes the surfaces of faces and objects to reflect light naturally across the image. Combine with Magic Mask to light a subject independently from the background, turning flat portraits into stunning fashion images!
Face refinement automatically masks different parts of a face, saving countless hours of manual work. Sharpen eyes, remove dark circles, smooth skin, and color lips. Ultra Beauty separates skin texture from color for natural, high end results, while AI Blemish Removal handles fast skin repair!
The Film Look Creator lets you add cinematic looks that replicate film properties like halation, bloom, grain and vignetting. Adjust exposure in stops and use subtractive saturation, richness and split tone controls to achieve looks usually found on the big screen, now for your still images!
AI SuperScale uses the DaVinci AI Neural Engine to upscale low resolution images with exceptional quality. The enhanced mode is specifically designed to remove compression artifacts, making it the perfect tool for rescaling low quality photos or frame grabs up to 4x their original resolution!
UltraNR is a DaVinci AI Neural Engine driven denoise mode in the Color page’s spatial noise reduction palette. Use it to dramatically reduce digital noise from an image while maintaining image clarity. Use with spatial noise reduction to smooth out digital grain or scanner noise while keeping fine hair and eye edges sharp.
Sample an area of a scene to quickly cover up unwanted elements, like objects or even blemishes on a face. The patch replacer has a fantastic auto grading feature that will seamlessly blend the covered area with the surrounding color data. Perfect for removing sensor dust.
The Quick Export option makes it fast and easy to deliver finished images in a wide range of common formats including JPEG, PNG, HEIF and TIFF. Export either an entire album or just selected photos providing flexibility to meet your specific delivery needs. You can set the resolution, bit depth, quality and compression to ensure your images are optimized for their intended use. Whether you’re exporting standalone images for print, sharing on social media platforms or delivering graded files to a client, Quick Export has you covered. All exports preserve your original photo EXIF metadata, so camera settings, location data and other important information always travels with your files.
The Photo page uses GPU accelerated processing to deliver fast, accurate results across your entire workflow. Process hundreds of RAW files in seconds with GPU accelerated decoding and apply Resolve FX to your images in real time. GPU acceleration also means batch exports and conversions are dramatically faster than conventional photo applications. On Mac, DaVinci Resolve is optimized for Metal and Apple Silicon, taking full advantage of the latest hardware. On Windows and Linux, you get CUDA support for NVIDIA GPUs, while the Windows version also features full OpenCL support for AMD, Intel and Qualcomm GPUs. All this ensures you get high performance results on any system!
Hollywood colorists have always relied on hardware panels to work faster and more creatively and now photographers can too! The DaVinci Resolve Micro Color Panel is the perfect companion for photo grading as it is compact enough to sit next to a laptop and portable enough to take on location for shoots. It features three high quality trackballs for lift, gamma and gain adjustments, 12 primary correction knobs for contrast, saturation, hue, temperature and more. It even has a built in rechargeable battery! DaVinci Resolve color panels let you adjust multiple parameters at once, so you can create looks that are simply impossible with a mouse and keyboard.
Hollywood’s most popular solution for editing, visual effects, motion graphics, color correction and audio post production, for Mac, Windows and Linux. Now supports Blackmagic Cloud for collaboration!
The most powerful DaVinci Resolve adds DaVinci Neural Engine for automatic AI region tracking, stereoscopic tools, more Resolve FX filters, more Fairlight FX audio plugins and advanced HDR grading.
Includes large search dial in a design that includes only the specific keys needed for editing. Includes Bluetooth with battery for wireless use so it’s more portable than a full sized keyboard!
Editor panel specifically designed for multi-cam editing for news cutting and live sports replay. Includes buttons to make camera selection and editing extremely fast! Connects via Bluetooth or USB‑C.
Full sized traditional QWERTY editor keyboard in a premium metal design. Featuring a metal search dial with clutch, plus extra edit, trim and timecode keys. Can be installed inset for flush mounting.
Powerful color panel gives you all the control you need to create cinematic images. Includes controls for refined color grading including adding windows. Connects via Bluetooth or USB‑C.
Portable DaVinci color panel with 3 high resolution trackballs, 12 primary corrector knobs and LCDs with menus and buttons for switching tools, adding color nodes, HDR and secondary grading and more!
Designed in collaboration with professional Hollywood colorists, the DaVinci Resolve Advanced Panel features a massive number of controls for direct access to every DaVinci color correction feature.
Portable audio control surface includes 12 premium touch sensitive flying faders, channel LCDs for advanced processing, automation and transport controls plus HDMI for an external graphics display.
Get incredibly fast audio editing for sound engineers working on tight deadlines! Includes LCD screen, touch sensitive control knobs, built in search dial and full keyboard with multi function keys.
Used by Hollywood and broadcasters, these large consoles make it easy to mix large projects with a massive number of channels and tracks. Modular design allows customizing 2, 3, 4, or 5 bay consoles!
Fairlight studio console legs at 0º angle for when you require a flat working surface. Required for all Fairlight Studio Consoles.
Fairlight studio console legs at 8º angle for when you require a slightly angled working surface. Required for all Fairlight Studio Consoles.
Features 12 motorized faders, rotary control knobs illuminated buttons for pan, solo, mute and call, plus bank select buttons.
12 groups of touch sensitive rotary control knobs and illuminated buttons, assignable to fader strips, single channel or master bus.
Get quick access to virtually every Fairlight feature! Includes a 12” LCD, graphical keyboard, macro keys, transport controls and more.
Features HDMI, SDI inputs for video and computer monitoring and Ethernet for graphics display of channel status and meters.
Empty 2 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 3 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 4 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Empty 5 bay Fairlight studio console chassis that can be populated with various faders, channel controls, edit and LCD monitors.
Use alternative HDMI or SDI televisions and monitors when building a Fairlight studio console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 2 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 3 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 4 bay Fairlight console.
Mounting bar with locating pins to allow correct alignment of bay modules when building a custom 5 bay Fairlight console.
Side arm kit mounts into Fairlight console mounting bar and holds each fader, channel control and LCD monitor module.
Blank 1/3rd wide bay for building a custom console with the extra 1/3rd section. Includes blank infill panels.
Allows mounting standard 19 inch rack mount equipment in the channel control area of the Fairlight studio console.
Blank panel to fill in the channel control area of the Fairlight studio console.
Blank panel to fill in the LCD monitor area of the Fairlight studio console when you’re not using the standard Fairlight LCD monitor.
Blank panel to fill in the fader control area of the Fairlight studio console.
Adds 3 MADI I/O connections to the single MADI on the accelerator card, for a total of 256 inputs and outputs at 24 bit and 48kHz.
Add up to 2,000 tracks with real time processing of EQ, dynamics, 6 plug‑ins per track, plus MADI for an extra 64 inputs and outputs.
Adds analog and digital connections, preamps for mics and instruments, sample rate conversion and sync at any standard frame rate.
...
Read the original on www.blackmagicdesign.com »
Flock Safety markets AI surveillance that goes far beyond reading license plates; color, bumper stickers, dents, and other features are used to build databases and identify movement patterns. These systems are spreading rapidly, often without oversight, and are accessible to police without a warrant. They raise serious privacy and legal concerns, and contribute to a nationwide trend toward mass surveillance.
While this and other systems like it claim to reduce crime, there is little evidence to support that claim - and significant risk of abuse. Real public safety comes from investing in communities, not stalking them.
Flock Safety markets its devices as “AI-powered precision policing technology” - far beyond basic license plate readers (ALPRs) (Flock Safety). The system uses AI to create a “Vehicle Fingerprint” - identifying cars not only by license plate, but also by color, make and model, roof racks, dents/damage, wheel type, and more. Even bumper sticker placement is analyzed. This lets law enforcement search for a “blue sedan with damage on the left side” even without a license plate.
But the surveillance goes deeper. Using a feature called “Convoy Analysis”, the system can detect vehicles that frequently appear near each other - suggesting associations between drivers or accomplices. The platform can also flag vehicles that routinely travel to the same locations across time. Flock describes this as a way to “identify suspect vehicles traveling together” or “pinpoint associates” - functionality confirmed in both their marketing and police testimonials (GovTech, ACLU).
The data is logged and made searchable across a nationwide law enforcement network - which officers in subscribing agencies can access without a warrant. According to Flock, the system can automatically flag a vehicle based on its history, route, or presence in multiple locations linked to a crime (Flock HOA Marketing).
While these tools may aid in locating stolen cars or missing persons, they also create a detailed record of everyone’s movements, associations, and routines. That data has already been misused - like when a Kansas police chief used Flock cameras 228 times to stalk an ex-girlfriend and her new partner without cause (Local12).
The scope of this tracking becomes clear when you see real-world examples. In 2025, a journalist drove 300 miles across rural Virginia and was captured by nearly 50 surveillance cameras operated by 15 different law enforcement agencies. When he requested his own surveillance footage, he discovered the cameras had documented patterns that made his behavior “predictable to anyone looking at it.” Most troubling: while the journalist couldn’t remember specific dates he’d made certain trips, police would know instantly - without any warrant or suspicion of wrongdoing (Cardinal News).
See also:
EFF: How ALPRs Work,
The Secure Dad on Flock Cameras,
Compass IT: “Privacy Concerns with Flock”,
ACLU: Flock is building a new AI-driven mass surveillance system,
Wikipedia: Flock Safety
How Widespread Are These Cameras?
Understanding what Flock cameras are leads to a natural question: how common are they in our communities?
The crowdsourced map made available on DeFlock.me currently shows roughly half of the >100,000 Flock AI cameras nationwide. Here are examples from three major cities showing how pervasive this surveillance has become:
These systems are expanding rapidly, often with little public debate or oversight. The Atlas of Surveillance, maintained by the Electronic Frontier Foundation, has documented over 3,000 law enforcement and government agencies using Flock products as of 2025 - a number growing monthly.
The Fourth Amendment was written in response to the British Crown’s “general warrants” - broad authorizations to search anyone, anywhere, anytime. Mass surveillance revives that threat in digital form. Simply moving freely in public should not require that you be profiled and scrutinized.
It is important to point out that courts have repeatedly ruled so-called “dragnet warrants” - often based on cell phone GPS location data - unconstitutional under the Fourth Amendment. But Flock’s status as a private company means it can collect and sell data with fewer restrictions, exploiting a legal gray zone that courts have yet to fully address.
“If you’ve got nothing to hide, you’ve got nothing to fear” is a tempting thought - until someone misuses your information. Privacy isn’t about hiding wrongdoing. It’s about autonomy, dignity, and the ability to live free from unjust scrutiny. “Saying you don’t care about privacy because you have nothing to hide is like saying you don’t care about free speech because you have nothing to say.” - Edward Snowden
As one observer put it: “While today they are no threat to me…circumstances change, leadership changes, laws change. When you really boil this down, what is this nationwide system? What did Flock really make? It’s a weapon. A silent weapon. Right now it targets what many would agree are criminals. But with the flip of a switch this system can be used to target or oppress anybody the people in power decide is a threat.”
We are fast approaching a world in which going about one’s business in public means being entered into a law enforcement database. Automated license plate readers collect location data on millions of people with no suspicion of wrongdoing, creating vast databases of where we go and when.
Flock cameras and similar surveillance tools raise serious Fourth Amendment concerns by enabling broad, warrantless tracking of people’s movements. In 2024, a trial court held that the Flock network functioned as a “dragnet over the entire city.” The judge in the case equated it to placing GPS trackers on every vehicle - a practice that the U.S. Supreme Court has ruled requires a warrant (Virginia Mercury, The Virginian Pilot).
The American Civil Liberties Union (ACLU) warns that automatic license plate readers (ALPRs) are becoming tools for routine mass location tracking and surveillance, with too few rules governing their use. These systems can collect and store data on millions of innocent drivers, creating detailed records of people’s movements without their knowledge or consent. (ACLU)
Legal scholars have highlighted the broader implications of such surveillance. Neil Richards, writing in the Harvard Law Review, emphasizes that surveillance can chill the exercise of civil liberties, particularly intellectual privacy, and increase the risk of blackmail, coercion, and discrimination. (Harvard Law Review)
Flock’s data further enables already biased enforcement. In Oak Park, Illinois, 84% of drivers stopped using Flock camera alerts were Black - despite the town being only 21% Black. (Freedom to Thrive).
See also:
ACLU on Unaccountable Surveillance Tech
Mass surveillance isn’t just about policing; there are major business interests involved.
Flock Safety collaborates with law enforcement agencies to promote the adoption of its license plate recognition cameras by encouraging private entities such as businesses and HOAs to share their footage. This practice broadens the surveillance net by granting access to what would otherwise have been private data (Flock Safety FAQ).
Instances have been reported where HOAs installed Flock cameras on public roads, leading to debates over the extent of surveillance and the privacy rights of residents and visitors (Oaklandside), (Forest Brooke HOA).
The ACLU has highlighted that the expansive reach of these surveillance networks could enable law enforcement to construct detailed profiles of individuals’ movements and associations, underscoring the need for transparency and oversight (ACLU).
Additionally, Flock markets its surveillance technology to employers and retail establishments, further blurring the lines between public safety initiatives and profit-driven surveillance. For example, major retail property owners have entered into agreements to share AI-powered surveillance feeds directly with law enforcement, expanding the scope of monitoring beyond public spaces. (Forbes) [Mirror]
Lowe’s is a significant private client of Flock Safety, having implemented their systems in numerous locations to enhance security and deter theft.
While Flock specifically does not offer facial recognition (today), Lowe’s has faced legal troubles over its use of facial recognition systems from other vendors. In 2019, a class action lawsuit was filed in Cook County Circuit Court, alleging that Lowe’s used facial recognition software to track customers’ movements without their consent, violating Illinois’ Biometric Information Privacy Act (BIPA). The lawsuit claimed that Lowe’s collected and stored biometric data from customers and shared it with other retailers. (Security InfoWatch)
Some justify these systems as making us safer, but the reality is more complicated.
Flock advertises a drop in crime, but the true cost is a culture of mistrust and preemptive suspicion. As the EFF warns, communities are being sold a false promise of safety - at the expense of civil rights (EFF).
A 2019 report by the NAACP Legal Defense Fund warned that predictive policing tools premised on biased data will reflect that bias, reinforcing existing discrimination in the criminal justice system. These tools may appear objective, but instead often amplify historic injustice under a veneer of scientific credibility (NAACP LDF).
True safety comes from healthy, empowered communities, not automated suspicion. Community-led safety initiatives have demonstrated significant results: North Lawndale saw a 58% decrease in gun violence after READI Chicago began implementing their program there. In cities nationwide, the presence of local nonprofits has been statistically linked to reductions in homicide, violent crime, and property crime (Brennan Center, The DePaulia, American Sociological Association).
Zooming out, Flock is just one part of a larger movement toward ubiquitous surveillance.
Flock’s expansion is part of a broader movement toward ubiquitous mass surveillance - where your associations, online comments, purchases, movements, and more may be logged, indexed, analyzed by AI, and made easily searchable by almost any government agency at any time.
This progression from data collection to surveillance follows a familiar pattern in tech: tools sold for convenience often evolve into tools of control.
Bruce Schneier, a prominent cryptographer and privacy advocate, put it simply: “Surveillance is the business model of the Internet.” What begins as data collection for convenience or security often evolves into persistent monitoring, normalization of tracking, and the loss of autonomy.
As Edward Snowden warned: “A child born today will grow up with no conception of privacy at all. They’ll never know what it means to have a private moment to themselves - an unrecorded, unanalyzed thought.”
In Dunwoody, Georgia, drones are now dispatched from Flock Safety “nests” to respond to 911 calls autonomously, often arriving in under 90 seconds (Axios).
In California, 480 high-tech cameras were recently installed to surveil Oakland’s highways - tracking license plates, bumper stickers, and vehicle types - with alerts sent to law enforcement in real-time (AP News).
This surveillance infrastructure extends far beyond law enforcement. The U.S. military has spent at least $3.5 million on a tool called “Augury” that monitors “93% of internet traffic,” capturing browsing history, email data, and sensitive cookies from Americans - all “without informed consent.” Senator Ron Wyden has received whistleblower complaints about this warrantless surveillance program (VICE).
Meanwhile, the current administration is working with Palantir Technologies to create what Ron Paul calls a “big ugly database” - a comprehensive collection of all information held by federal agencies on all U.S. citizens. This would include health records, education records, tax returns, firearm purchases, and associations with any groups labeled “extremist.” Palantir, funded by the CIA’s In-Q-Tel venture capital firm, is “literally the creation of the surveillance state” (OC Register).
Even basic tools we use daily are being transformed into surveillance instruments. Recent court rulings now allow the government to order companies like OpenAI to indefinitely preserve all ChatGPT conversations. Users who thought they were having private conversations - like “talking to a friend who can keep a secret” - discovered this only through web forums, not company disclosure. The judge’s order enables what one user called a “nationwide mass surveillance program” disguised as a civil discovery process (TechRadar).
This pattern repeats throughout history: people abandon liberty for promises of safety. After 9/11, many supported the PATRIOT Act. During COVID, many embraced mask and vaccine mandates. After the 2008 financial crisis, many supported bailouts because leaders said they had to “abandon free-market principles to save the free-market system.” Today, some support mass surveillance because they believe it will target only “the right people” - but circumstances change, leadership changes, laws change.
See also:
Ars Technica: “AI Cameras to Ensure Good Behavior”,
Video: Predictive Surveillance Trends
So where is all of this heading? The trajectory is troubling.
Flock’s cameras capture detailed information about the daily lives of anyone passing by, without offering a genuine opt-out mechanism. Concurrently, Palantir Technologies has secured a $30 million contract with ICE, aiming to develop a system that consolidates sensitive personal data such as biometrics, geolocation, and other personal identifiers from various federal agencies, facilitating near real-time tracking and categorization of individuals for immigration enforcement purposes (Wired). It should be no surprise that this will also not offer any meaningful opt-out mechanism.
The integration of surveillance technologies such as Flock Safety’s license plate readers and Palantir’s ImmigrationOS platform signifies a shift toward comprehensive monitoring of individuals’ movements and behaviors. It is not difficult to imagine the scope of such systems’ usage growing with time.
These developments raise concerns about the erosion of privacy and the potential for misuse of aggregated data. The pervasive nature of such surveillance systems means that individuals are monitored without explicit consent, and the data collected can be repurposed beyond its original intent. As these technologies become more entrenched, the line between public safety and invasive oversight blurs, prompting critical discussions about the balance between security and individual freedoms.
Some of the most chilling validations of mass surveillance come not from critics - but from the very people promoting it. These aren’t out-of-context slips; they are open endorsements of a world where privacy is sidelined in favor of control, compliance, and convenient enforcement.
“Anything technology they think, ‘Oh it’s a boogeyman. It’s Big Brother watching you,’ … No, Big Brother is protecting you.”
- Eric Adams, NYC Mayor (Politico, 2022)
New York’s mayor casually rebrands Orwell’s authoritarian icon as a guardian figure. It’s a startling reversal - not a warning about overreach, but a defense of it.
“Instead of being reactive, we are going to be proactive… [we] use data to predict where future crimes are likely to take place and who is likely to commit them… then deputies would find those people and take them out.”
- Chris Nocco, Pasco County Sheriff (Tampa Bay Times, 2020)
This “Minority Report”-style program led to harassment of innocent people - and was ultimately found unconstitutional in court (Institute for Justice). A rare win, but a stark example of where unchecked surveillance can go.
“The use of net flow data by NCIS does not require a warrant.”
- Charles E. Spirtos, Navy Office of Information (VICE, 2024)
The military’s position on monitoring Americans’ internet traffic without judicial oversight. This statement came after a whistleblower complained about warrantless surveillance activities to Senator Ron Wyden’s office.
“Tech firms should not develop their systems and services, including end-to-end encryption, in ways that empower criminals or put vulnerable people at risk.”
- Priti Patel, UK Home Secretary (UK Govt, 2019; Infosecurity Magazine)
The logic: protecting everyone’s privacy is dangerous. This kind of framing justifies backdoors into secure systems - which inevitably get abused.
“The risk [of built-in weaknesses]… is acceptable because we are talking about consumer products… and not nuclear launch codes.”
- William Barr, U.S. Attorney General (TechCrunch, 2019)
A clear “rules for thee but not for me” mentality. Your data, messages, and devices don’t deserve the same protections as the government’s - because you’re just a civilian.
China exploited a covert surveillance interface - originally built for lawful access by U.S. law enforcement - to tap into Americans’ private phone records, messages, and geolocation data. (CISA)
Telecom providers are required by law to build these backdoors for law enforcement. The “Salt Typhoon” incident shows the risk: once a backdoor exists, it can be discovered and abused - and not just by “the good guys.” (EFF, Reason)
...
Read the original on stopflock.com »
Today, we are expanding our spam policies to address a deceptive practice known as “back button hijacking”, which will become an explicit violation of the “malicious practices” section of our spam policies, potentially leading to spam actions.
When a user clicks the “back” button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation. It occurs when a site interferes with a user’s browser navigation and prevents them from using their back button to immediately get back to the page they came from. Instead, users might be sent to pages they never visited before, be presented with unsolicited recommendations or ads, or otherwise be prevented from browsing the web normally.
Why are we taking action?
We believe that the user experience comes first. Back button hijacking interferes with the browser’s functionality, breaks the expected user journey, and results in user frustration. People report feeling manipulated and eventually less willing to visit unfamiliar sites. As we’ve stated before, inserting deceptive or manipulative pages into a user’s browser history has always been against our Google Search Essentials.
We’ve seen a rise in this type of behavior, which is why we’re designating it an explicit violation of our malicious practices policy, which says:
Malicious practices create a mismatch between user expectations and the actual outcome, leading to a negative and deceptive user experience, or compromised user security or privacy.
Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site’s performance in Google Search results. To give site owners time to make any needed changes, we’re publishing this policy two months in advance of enforcement on June 15, 2026.
What should site owners do?
Ensure you are not doing anything to interfere with a user’s ability to navigate their browser history.
If you’re currently using any script or technique that inserts or replaces deceptive or manipulative pages in a user’s browser history - preventing them from using the back button to immediately return to the page they came from - you are expected to remove or disable it.
Notably, some instances of back button hijacking may originate from libraries or advertising platforms a site includes. We encourage site owners to thoroughly review their technical implementation and remove or disable any code, imports, or configurations responsible for back button hijacking, to ensure a helpful and non-deceptive experience for users.
If your site has been impacted by a manual action and you have fixed the issue, you can always let us know by submitting a reconsideration request in Search Console. For questions or feedback, feel free to reach out on social media or discuss in our help community.
...
Read the original on developers.google.com »
A real-world production migration from DigitalOcean to Hetzner dedicated, handling 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and live mobile app traffic — with zero downtime.
Running a software company in Turkey has become increasingly expensive over the last few years. Skyrocketing inflation and a dramatically weakening Turkish Lira against the US dollar have turned dollar-denominated infrastructure costs into a serious burden. A bill that felt manageable two years ago now hits very differently when the exchange rate has multiplied several times over.
Every month, we were paying $1,432 to DigitalOcean for a droplet with 192GB RAM, 32 vCPUs, 600GB SSD, two block volumes (1TB each), and backups enabled. The server was fine — but the price-to-performance ratio had stopped making sense.
Then we discovered the Hetzner AX162-R.
At $233 per month for the AX162-R, that’s $14,388 saved per year - for a server that’s objectively more powerful in every dimension. The decision was easy.
I’ve been a DigitalOcean customer for nearly 8 years. They have a great product and I have no complaints about reliability or developer experience. But looking at those numbers now, I cannot help feeling a bit sad about all the extra money I left on the table over the years. If you are running steady-state workloads and not actively using DO’s ecosystem features, do yourself a favor and check dedicated server pricing before your next renewal.
The workload was substantial: 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and several live mobile apps serving hundreds of thousands of users.
Old server: CentOS 7 — long past its end-of-life, but still running in production. New server: AlmaLinux 9.7 — a RHEL 9 compatible distribution and the natural successor to CentOS. This migration was also an opportunity to finally escape an OS that hadn’t received security updates in years.
The naive approach — change DNS, restart everything, hope for the best — wasn’t acceptable. Instead, we designed a proper migration path with six phases:
Phase 1 — Full stack installation on the new server
Nginx (compiled from source with identical flags), PHP (via Remi repo, with the same .ini config files from the old server), MySQL 8.0, Neo4J Graph DB, GitLab EE, Node.js, Supervisor, and Gearman. Every service had to be configured to match the old server’s behavior before we touched a single DNS record.
SSL certificates were handled by rsyncing the entire /etc/letsencrypt/ directory from the old server to the new one. After the migration was complete and all traffic was flowing through the new server, we force-renewed all certificates in one shot:
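The renewal itself was likely a single certbot invocation along these lines (a sketch assuming certbot manages the certificates, as the /etc/letsencrypt/ layout suggests; the exact flags may have differed):

certbot renew --force-renewal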
Phase 2 — Web files cloned with rsync
The entire /var/www/html directory (~65 GB, 1.5 million files) was cloned to the new server using rsync over SSH with the --checksum flag for integrity verification. We ran a final incremental sync right before cutover to catch any files changed after the initial clone.
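A minimal sketch of that clone (the destination host is a placeholder; flags other than --checksum are assumptions, not the author’s exact invocation):

rsync -avz --checksum -e ssh /var/www/html/ root@NEW_SERVER_IP:/var/www/html/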
Phase 3 — MySQL master to slave replication
Rather than taking the database offline for a dump-and-restore, we set up live replication. The old server became master, the new server a read-only slave. We used mydumper for the initial bulk load, then started replication from the exact binlog position recorded in the dump metadata. This kept both databases in real-time sync until the moment of cutover.
Phase 4 — DNS TTL reduction
We wrote a script against the DigitalOcean DNS API to lower all A and AAAA record TTLs from 3600 to 300 seconds — without touching MX or TXT records (changing mail record TTLs can cause deliverability issues). After waiting one hour for old TTLs to expire globally, we were ready to cut over in under 5 minutes.
Phase 5 — Old server nginx converted to reverse proxy
We wrote a Python script that parsed every server {} block across all 34 Nginx site configs, backed up the originals, and replaced them with proxy configurations pointing to the new server. This meant that during DNS propagation, any request still hitting the old IP was silently forwarded. No user would see a disruption.
Phase 6 — DNS cutover and decommission
A single Python script hit the DigitalOcean API and flipped all A records to the new server IP in seconds. The old server remained as a cold standby for one week, then was shut down.
The key insight: at no point did we have a window where the service was unavailable. Traffic was always being served — either directly or through the proxy.
This was the most complex part of the entire operation.
We used mydumper instead of the standard mysqldump — and it made an enormous difference. By leveraging the new server’s 48 CPU cores for parallel export and import, what would have taken days with a traditional single-threaded mysqldump was completed in hours. If you’re migrating a large MySQL database and you’re not using mydumper/myloader, you’re doing it the hard way.
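For reference, a hedged sketch of the parallel dump and restore (the 32-thread figure comes from the article’s own lessons-learned notes; paths and remaining flags are assumptions):

# Old server: parallel, compressed dump of all databases
mydumper --threads 32 --compress --outputdir /backup/dump
# New server: parallel import
myloader --threads 32 --directory /backup/dump --overwrite-tables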
The main dump’s metadata file recorded the binlog position at the time of the snapshot:
File: mysql-bin.000004
Position: 21834307
This would be our replication starting point.
Once the dump was complete, we transferred it to the new server using rsync over SSH. With 248 GB of compressed chunks, this was significantly faster than any other transfer method:
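A sketch of the transfer, with placeholder host and paths:

rsync -avP /backup/dump/ root@NEW_SERVER_IP:/backup/dump/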
The --compress flag in mydumper paid off here — compressed chunks transferred much faster over the wire.
Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7 — an outdated version that had been running in production for years. Before the migration, we ran mysqlcheck --check-upgrade to verify that our data was compatible with MySQL 8.0. It came back clean, so we installed the latest MySQL 8.0 Community on the new server. The performance improvement across all our projects was immediately noticeable — query execution times dropped significantly thanks to MySQL 8.0’s improved optimizer and InnoDB enhancements.
That said, the version jump did introduce one tricky problem.
After import, the mysql.user table had the wrong column structure — 45 columns instead of the expected 51. This caused mysql.infoschema to be missing, breaking user authentication.
The standard remedy is to re-run MySQL’s upgrade routine so it rebuilds the system schema — but this failed the first time with:
ERROR: 'sys.innodb_buffer_stats_by_schema' is not VIEW
The sys schema had been imported as regular tables instead of views. Solution:
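The article’s exact commands aren’t reproduced here, but a standard way to repair this in MySQL 8.0 is to drop the broken schema and let the server’s upgrade routine recreate it - a sketch:

mysql -e "DROP DATABASE sys;"
# Restart with --upgrade=FORCE (MySQL 8.0.16+): the server re-runs all
# upgrade steps and recreates the sys schema on startup.
systemctl set-environment MYSQLD_OPTS="--upgrade=FORCE"
systemctl restart mysqld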
With both dumps imported, we configured the new server as a replica of the old one:
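The replica setup is standard MySQL, using the binlog coordinates recorded in the dump metadata shown earlier (host and credentials are placeholders):

CHANGE MASTER TO
  MASTER_HOST='OLD_SERVER_IP',
  MASTER_USER='repl',
  MASTER_PASSWORD='***',
  MASTER_LOG_FILE='mysql-bin.000004',
  MASTER_LOG_POS=21834307;
START SLAVE;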
Almost immediately, replication stopped with error 1062 (Duplicate Key). This happened because our dump was taken in two passes — during the gap between them, rows were written to certain tables, and now both the imported dump and the binlog replay were trying to insert the same rows.
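The fix is a single replica-side setting - slave_exec_mode, a real MySQL variable - switched roughly like this:

STOP SLAVE;
SET GLOBAL slave_exec_mode = 'IDEMPOTENT';
START SLAVE;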
IDEMPOTENT mode silently skips duplicate key and missing row errors. All critical databases synced without a single error. Within a few minutes, Seconds_Behind_Master dropped to 0.
Before touching a single DNS record, we needed to verify that all services were working correctly on the new server. The trick: we temporarily edited the /etc/hosts file on our local machine to point our domain names to the new server’s IP.
# /etc/hosts (local machine)
NEW_SERVER_IP yourdomain1.com
NEW_SERVER_IP yourdomain2.com
# … and so on for all your domains
With this in place, our browsers and Postman would hit the new server while the rest of the world was still going to the old one. We ran through our API endpoints, checked admin panels, and verified that every service was responding correctly. Only after this confirmation did we proceed with the cutover.
Once master-slave replication was fully synchronized, we noticed that INSERT statements were succeeding on the new server when they shouldn’t have been — read_only = 1 was set, but writes were going through.
The reason: all PHP application users had been granted SUPER privilege. In MySQL, SUPER bypasses read_only.
We revoked it from all 24 application users:
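A sketch of the revocation (the account name is a placeholder; the article scripted this across all 24 users):

-- global privilege changes apply when the account next connects,
-- so long-lived app connections may need to reconnect
REVOKE SUPER ON *.* FROM 'app_user'@'%';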
After this, read_only = 1 correctly blocked all writes from application users while allowing replication to continue.
All domains were managed through DigitalOcean DNS (with nameservers pointed from GoDaddy). We scripted the TTL reduction against the DigitalOcean API, only touching A and AAAA records — not MX or TXT records, since changing mail record TTLs can cause deliverability issues with Google Workspace.
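A minimal sketch of what such a script can look like against the DigitalOcean v2 API (token and domain names are placeholders; this is not the author’s published code, which is linked at the end of the post):

import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_DO_TOKEN"}  # placeholder token
DOMAINS = ["yourdomain1.com", "yourdomain2.com"]     # placeholder zones

for domain in DOMAINS:
    # Fetch the zone's records (a single page of up to 200 here; zones
    # with more records would need pagination).
    resp = requests.get(f"{API}/domains/{domain}/records",
                        headers=HEADERS, params={"per_page": 200})
    resp.raise_for_status()
    for rec in resp.json()["domain_records"]:
        # Only touch A/AAAA records; leave MX and TXT TTLs alone.
        if rec["type"] in ("A", "AAAA") and rec.get("ttl") != 300:
            requests.put(f"{API}/domains/{domain}/records/{rec['id']}",
                         headers=HEADERS,
                         json={"ttl": 300}).raise_for_status()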
After waiting one hour for old TTLs to expire, we were ready.
Rather than editing 34 config files by hand, we wrote a Python script that parsed every server {} block in every config file, identified the main content blocks, replaced them with proxy configs, and backed up originals as .backup files.
The key: proxy_ssl_verify off — the new server’s SSL cert is valid for the domain, not for the IP address. Disabling verification here is fine because we control both ends.
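For illustration, a hedged sketch of what each rewritten site config might reduce to (domain, cert paths, and IP are placeholders; the script-generated configs surely carried more per-site detail):

server {
    listen 443 ssl;
    server_name yourdomain1.com;
    ssl_certificate /etc/letsencrypt/live/yourdomain1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain1.com/privkey.pem;
    location / {
        proxy_pass https://NEW_SERVER_IP;
        proxy_set_header Host $host;
        proxy_ssl_server_name on;  # send SNI so the new server serves the right cert
        proxy_ssl_verify off;      # cert matches the domain, not the raw IP
    }
}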
With replication at Seconds_Behind_Master: 0 and the reverse proxy ready, we executed the cutover in order:
1. New server: STOP SLAVE;
2. New server: SET GLOBAL read_only = 0;
3. New server: RESET SLAVE ALL;
4. New server: supervisorctl start all
5. Old server: nginx -t && systemctl reload nginx (proxy goes live)
6. Old server: supervisorctl stop all
7. Mac: python3 do_cutover.py (DNS: all A records to new server IP)
8. Wait: ~5 minutes for propagation
9. Old server: comment out all crontab entries
The DNS cutover script hit the DigitalOcean API and changed every A record to the new server IP — in about 10 seconds.
After migration, we discovered many GitLab project webhooks were still pointing to the old server IP. We wrote a script to scan all projects via the GitLab API and update them in bulk.
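A hedged sketch of that bulk update via the GitLab REST API (instance URL, token, and IPs are placeholders; the author’s actual script, final_gitlab_webhook_update.py, is listed below):

import requests

GITLAB = "https://gitlab.example.com/api/v4"     # placeholder instance URL
HEADERS = {"PRIVATE-TOKEN": "YOUR_ADMIN_TOKEN"}  # placeholder token
OLD_IP, NEW_IP = "OLD_SERVER_IP", "NEW_SERVER_IP"

page = 1
while True:
    # Walk every project, 100 at a time.
    projects = requests.get(f"{GITLAB}/projects", headers=HEADERS,
                            params={"per_page": 100, "page": page}).json()
    if not projects:
        break
    for project in projects:
        hooks = requests.get(f"{GITLAB}/projects/{project['id']}/hooks",
                             headers=HEADERS).json()
        for hook in hooks:
            if OLD_IP in hook["url"]:
                # Re-point the webhook at the new server, keeping the path intact.
                requests.put(
                    f"{GITLAB}/projects/{project['id']}/hooks/{hook['id']}",
                    headers=HEADERS,
                    json={"url": hook["url"].replace(OLD_IP, NEW_IP)})
    page += 1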
We went from $1,432/month down to $233/month — saving $14,388 per year. And we ended up with a more powerful machine:
The entire migration took roughly 24 hours. No users were affected.
MySQL replication is your best friend for zero-downtime migrations. Set it up early, let it catch up, then cut over with confidence.
Check your MySQL user privileges before migration. SUPER privilege bypasses read_only — if your app users have it, your slave environment isn’t actually read-only.
Script everything. DNS updates, nginx config rewrites, webhook updates — doing these by hand across 34+ sites would have taken hours and introduced errors.
mydumper + myloader dramatically outperforms mysqldump for large datasets. Parallel dump/restore with 32 threads cut what would have been days of work down to hours.
Cloud providers are expensive for steady-state workloads. If you’re not using autoscaling or ephemeral infrastructure, a dedicated server often delivers better performance at a fraction of the cost.
All Python scripts used in this migration are open-sourced and available on GitHub:
* do_list_domains_ttl.py — List all DigitalOcean domains with their A records, IPs, and TTLs
* do_to_hetzner_bulk_dns_records_import.py — Migrate all DNS zones from DigitalOcean to Hetzner DNS
* do_cutover_to_new_ip.py — Flip all A records from old server IP to new server IP
* mysql_compare.py — Compare row counts across all tables on two MySQL servers
* final_gitlab_webhook_update.py — Update all GitLab project webhooks to the new server IP
All scripts support a DRY_RUN = True mode so you can safely preview changes before applying them.
...
Read the original on isayeter.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.