10 interesting stories served every morning and every evening.
Tim Cook to become Apple Executive Chairman
John Ternus to become Apple CEO
CUPERTINO, CALIFORNIA: Apple announced that Tim Cook will become executive chairman of Apple’s board of directors and John Ternus, senior vice president of Hardware Engineering, will become Apple’s next chief executive officer effective September 1, 2026. The transition, which was approved unanimously by the Board of Directors, follows a thoughtful, long-term succession planning process.
Cook will continue in his role as CEO through the summer as he works closely with Ternus on a smooth transition. As executive chairman, Cook will assist with certain aspects of the company, including engaging with policymakers around the world.
“It has been the greatest privilege of my life to be the CEO of Apple and to have been trusted to lead such an extraordinary company. I love Apple with all of my being, and I am so grateful to have had the opportunity to work with a team of such ingenious, innovative, creative, and deeply caring people who have been unwavering in their dedication to enriching the lives of our customers and creating the best products and services in the world,” said Cook. “John Ternus has the mind of an engineer, the soul of an innovator, and the heart to lead with integrity and with honor. He is a visionary whose contributions to Apple over 25 years are already too numerous to count, and he is without question the right person to lead Apple into the future. I could not be more confident in his abilities and his character, and I look forward to working closely with him on this transition and in my new role as executive chairman.”
“I am profoundly grateful for this opportunity to carry Apple’s mission forward,” said Ternus. “Having spent almost my entire career at Apple, I have been lucky to have worked under Steve Jobs and to have had Tim Cook as my mentor. It has been a privilege to help shape the products and experiences that have changed so much of how we interact with the world and with one another. I am filled with optimism about what we can achieve in the years to come, and I am so happy to know that the most talented people on earth are here at Apple, determined to be part of something bigger than any one of us. I am humbled to step into this role, and I promise to lead with the values and vision that have come to define this special place for half a century.”
Arthur Levinson, who has been Apple’s non-executive chairman for the past 15 years, will become its lead independent director on September 1, 2026. Ternus will join the board of directors, also effective September 1, 2026.
“Tim’s unprecedented and outstanding leadership has transformed Apple into the world’s best company. He’s introduced groundbreaking products and services time and again, and his integrity and values are infused into everything Apple does,” said Levinson. “On behalf of the entire board of directors, we are incredibly grateful for his countless contributions to Apple and the world, and we are thrilled he will now be executive chairman. We believe John is the best possible leader to succeed Tim, and as he transitions to CEO, we know his love of Apple, his leadership, deep technical knowledge, and relentless focus on creating great products will help lead Apple to an extraordinary future.”
“I want to thank Art for the incredible work he has done leading the board of directors for the past 15 years,” said Cook. “I have always found his advice to be invaluable and I appreciate his thoughtfulness and his unwavering dedication to the company. I am grateful he will serve as our lead independent director, and I look forward to working with him in my new role.”
Tim Cook joined Apple in 1998. He became CEO in 2011 and has overseen the introduction of numerous products and services, including new categories like Apple Watch, AirPods, and Apple Vision Pro, and services ranging from iCloud and Apple Pay to Apple TV and Apple Music. He was also instrumental in expanding existing product lines. Under Cook’s leadership Apple has grown from a market capitalization of approximately $350 billion to $4 trillion, representing a more than 1,000% increase, and yearly revenue has nearly quadrupled, from $108 billion in fiscal year 2011 to more than $416 billion in fiscal year 2025. The company has expanded its global footprint substantially, particularly in emerging markets; it is now in more than 200 countries and territories. Apple operates over 500 retail stores and has more than doubled the number of countries in which its customers can visit an Apple Store. During his tenure, Apple has grown by more than 100,000 team members and increased its active installed base to more than 2.5 billion devices.
Apple Services has been a major focus area of Cook’s, and during his tenure the category has grown to become a more than $100 billion business, the equivalent of a Fortune 40 company. Cook was also instrumental in creating the wearables category at Apple, which now includes the world’s most popular watch and headphones, and which has served as the foundation for Apple’s remarkable impact on the health and safety of its users. Under Cook’s leadership, Apple also transitioned to Apple-designed silicon, enabling the company to own more of its primary technology and deliver industry-leading gains in power efficiency and performance that directly benefit users across its products.
Cook has made Apple’s core values even more central to the company’s decision making and product development. Under his leadership, the company reduced its carbon footprint by more than 60 percent below 2015 levels during a period in which revenue nearly doubled. Cook, who has long advocated for privacy as a fundamental human right, has made privacy and security imperative at Apple, setting a standard for user protection that continues to set the company apart from the rest of the technology industry. He has also pushed for continued innovation in the accessibility space, believing that Apple products should be made for everyone. And he has made central to his leadership the notion that Apple should be a place where everyone can feel they belong and where everyone is treated with dignity and respect.
Ternus joined Apple’s product design team in 2001 and became a vice president of Hardware Engineering in 2013. He joined the executive team in 2021 as senior vice president of Hardware Engineering. Throughout his tenure at Apple, Ternus has overseen hardware engineering work on a variety of groundbreaking products across every category. He was instrumental in the introduction of multiple new product lines, including iPad and AirPods, as well as many generations of products across iPhone, Mac, and Apple Watch.
Ternus’s work on Mac has helped the category become more powerful and more popular globally than at any time in its 40-year history. That includes the recent introduction of MacBook Neo, an all-new laptop that makes the Mac experience even more accessible to more people around the world. This past fall, his team’s efforts were on full display with the introduction of a redefined iPhone lineup, including the incredibly powerful iPhone 17 Pro and Pro Max, the radically thin and durable iPhone Air, and the iPhone 17, which has been an incredible upgrade for users. Under his leadership, his team also drove advancements in AirPods to make them the world’s best in-ear headphones, with unprecedented active noise cancellation, as well as the capability to become an all-in-one hearing health system that can serve as over-the-counter hearing aids.
Ternus led much of the company’s focus in areas like reliability and durability, introducing new techniques that have made Apple products remarkably resilient. He has also driven much of Apple’s innovation in materials and hardware design that have reduced the carbon footprint of its products, including the creation of a new, recycled aluminum compound that has been introduced across multiple product lines, the use of 3-D printed titanium in Apple Watch Ultra 3, and innovations in repairability that have increased the lifespans of several Apple products.
Prior to Apple, Ternus worked as a mechanical engineer at Virtual Research Systems. He holds a bachelor’s degree in Mechanical Engineering from the University of Pennsylvania.
This press release contains forward-looking statements, within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements include without limitation those about Apple’s executive succession plans. These statements involve risks and uncertainties, and actual results may differ materially from any future results expressed or implied by the forward-looking statements. More information regarding potential risks and other factors that could affect the company are included in Apple’s filings with the SEC, including in the “Risk Factors” and “Management’s Discussion and Analysis of Financial Condition and Results of Operations” sections of Apple’s most recently filed periodic reports on Form 10-K and Form 10-Q and subsequent filings. Apple assumes no obligation to update any forward-looking statements or information, which speak only as of the date they are made.
About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV+. Apple’s more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.
© 2026 Apple Inc. All rights reserved. Apple, the Apple logo, Apple Watch, AirPods, Apple Vision Pro, iCloud, Apple Pay, Apple TV, Apple Music, Apple Store, iPad, iPhone, Mac, MacBook Neo, and iPhone Air are trademarks of Apple. Other company and product names may be trademarks of their respective owners.
...
Read the original on www.apple.com »
Our latest model, Claude Opus 4.7, is now generally available. Opus 4.7 is a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks. Users report being able to hand off their hardest coding work—the kind that previously needed close supervision—to Opus 4.7 with confidence. Opus 4.7 handles complex, long-running tasks with rigor and consistency, pays precise attention to instructions, and devises ways to verify its own outputs before reporting back.

The model also has substantially better vision: it can see images in greater resolution. It’s more tasteful and creative when completing professional tasks, producing higher-quality interfaces, slides, and docs. And—although it is less broadly capable than our most powerful model, Claude Mythos Preview—it shows better results than Opus 4.6 across a range of benchmarks.

Last week we announced Project Glasswing, highlighting the risks—and benefits—of AI models for cybersecurity. We stated that we would keep Claude Mythos Preview’s release limited and test new cyber safeguards on less capable models first. Opus 4.7 is the first such model: its cyber capabilities are not as advanced as those of Mythos Preview (indeed, during its training we experimented with efforts to differentially reduce these capabilities). We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses. What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.

Security professionals who wish to use Opus 4.7 for legitimate cybersecurity purposes (such as vulnerability research, penetration testing, and red-teaming) are invited to join our new Cyber Verification Program.

Opus 4.7 is available today across all Claude products and our API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
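As a rough illustration of using the new model programmatically, the sketch below assembles a Messages API request body for the model ID named in this post. This is a payload sketch only, under the assumption that the standard Messages API shape applies; the helper name is ours, and no network call is made here.

```python
# Sketch of a Messages API request body for the new model.
# The model ID ("claude-opus-4-7") is the one given in this post;
# the payload shape follows the public Messages API. This helper
# only assembles the request body -- sending it (e.g. via an SDK)
# is left to the caller.

def build_message_request(prompt: str,
                          model: str = "claude-opus-4-7",
                          max_tokens: int = 1024) -> dict:
    """Assemble a single-turn Messages API request body."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_message_request("Summarize this diff and flag risky changes.")
```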
Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can use claude-opus-4-7 via the Claude API.

Claude Opus 4.7 has garnered strong feedback from our early-access testers:

In early testing, we’re seeing the potential for a significant leap for our developers with Claude Opus 4.7. It catches its own logical faults during the planning phase and accelerates execution, far beyond previous Claude models. As a financial technology platform serving millions of consumers and businesses at significant scale, this combination of speed and precision could be game-changing: accelerating development velocity for faster delivery of the trusted financial solutions our customers rely on every day.

Anthropic has already set the standard for coding models, and Claude Opus 4.7 pushes that further in a meaningful way as the state-of-the-art model on the market. In our internal evals, it stands out not just for raw capability, but for how well it handles real-world async workflows—automations, CI/CD, and long-running tasks. It also thinks more deeply about problems and brings a more opinionated perspective, rather than simply agreeing with the user.

Claude Opus 4.7 is the strongest model Hex has evaluated. It correctly reports when data is missing instead of providing plausible-but-incorrect fallbacks, and it resists dissonant-data traps that even Opus 4.6 falls for. It’s a more intelligent, more efficient Opus 4.6: low-effort Opus 4.7 is roughly equivalent to medium-effort Opus 4.6.

On our 93-task coding benchmark, Claude Opus 4.7 lifted resolution by 13% over Opus 4.6, including four tasks neither Opus 4.6 nor Sonnet 4.6 could solve. Combined with faster median latency and strict instruction following, it’s particularly meaningful for complex, long-running coding workflows.
It cuts the friction from those multi-step tasks so developers can stay in the flow and focus on building.

Based on our internal research-agent benchmark, Claude Opus 4.7 has the strongest efficiency baseline we’ve seen for multi-step work. It tied for the top overall score across our six modules at 0.715 and delivered the most consistent long-context performance of any model we tested. On General Finance—our largest module—it improved meaningfully on Opus 4.6, scoring 0.813 versus 0.767, while also showing the best disclosure and data discipline in the group. And on deductive logic, an area where Opus 4.6 struggled, Opus 4.7 is solid.

Claude Opus 4.7 extends the limit of what models can do to investigate and get tasks done. Anthropic has clearly optimized for sustained reasoning over long runs, and it shows with market-leading performance. As engineers shift from working 1:1 with agents to managing them in parallel, this is exactly the kind of frontier capability that unlocks new workflows.

We’re seeing major improvements in Claude Opus 4.7’s multimodal understanding, from reading chemical structures to interpreting complex technical diagrams. The higher resolution support is helping Solve Intelligence build best-in-class tools for life sciences patent workflows, from drafting and prosecution to infringement detection and invalidity charting.

Claude Opus 4.7 takes long-horizon autonomy to a new level in Devin. It works coherently for hours, pushes through hard problems rather than giving up, and unlocks a class of deep investigation work we couldn’t reliably run before.

For Replit, Claude Opus 4.7 was an easy upgrade decision. For the work our users do every day, we observed it achieving the same quality at lower cost—more efficient and precise at tasks like analyzing logs and traces, finding bugs, and proposing fixes. Personally, I love how it pushes back during technical discussions to help me make better decisions.
It really feels like a better coworker.

Claude Opus 4.7 demonstrates strong substantive accuracy on BigLaw Bench for Harvey, scoring 90.9% at high effort with better reasoning calibration on review tables and noticeably smarter handling of ambiguous document editing tasks. It correctly distinguishes assignment provisions from change-of-control provisions, a task that has historically challenged frontier models. Substance was consistently rated as a strength across our evaluations: correct, thorough, and well-cited.

Claude Opus 4.7 is a very impressive coding model, particularly for its autonomy and more creative reasoning. On CursorBench, Opus 4.7 is a meaningful jump in capabilities, clearing 70% versus Opus 4.6 at 58%.

For complex multi-step workflows, Claude Opus 4.7 is a clear step up: plus 14% over Opus 4.6 at fewer tokens and a third of the tool errors. It’s the first model to pass our implicit-need tests, and it keeps executing through tool failures that used to stop Opus cold. This is the reliability jump that makes Notion Agent feel like a true teammate.

In our evals, we saw a double-digit jump in accuracy of tool calls and planning in our core orchestrator agents. As users leverage Hebbia to plan and execute on use cases like retrieval, slide creation, or document generation, Claude Opus 4.7 shows the potential to improve agent decision-making in these workflows.

On Rakuten-SWE-Bench, Claude Opus 4.7 resolves 3x more production tasks than Opus 4.6, with double-digit gains in Code Quality and Test Quality. This is a meaningful lift and a clear upgrade for the engineering work our teams are shipping every day.

For CodeRabbit’s code review workloads, Claude Opus 4.7 is the sharpest model we’ve tested. Recall improved by over 10%, surfacing some of the most difficult-to-detect bugs in our most complex PRs, while precision remained stable despite the increased coverage.
It’s a bit faster than GPT-5.4 xhigh on our harness, and we’re lining it up for our heaviest review work at launch.

For Genspark’s Super Agent, Claude Opus 4.7 nails the three production differentiators that matter most: loop resistance, consistency, and graceful error recovery. Loop resistance is the most critical. A model that loops indefinitely on 1 in 18 queries wastes compute and blocks users. Lower variance means fewer surprises in prod. And Opus 4.7 achieves the highest quality-per-tool-call ratio we’ve measured.

Claude Opus 4.7 is a meaningful step up for Warp. Opus 4.6 is one of the best models out there for developers, and this model is measurably more thorough on top of that. It passed Terminal Bench tasks that prior Claude models had failed, and worked through a tricky concurrency bug Opus 4.6 couldn’t crack. For us, that’s the signal.

Claude Opus 4.7 is the best model in the world for building dashboards and data-rich interfaces. The design taste is genuinely surprising—it makes choices I’d actually ship. It’s my default daily driver now.

Claude Opus 4.7 is the most capable model we’ve tested at Quantium. Evaluated against leading AI models through our proprietary benchmarking solution, the biggest gains showed up where they matter most: reasoning depth, structured problem-framing, and complex technical work. Fewer corrections, faster iterations, and stronger outputs to solve the hardest problems our clients bring us.

Claude Opus 4.7 feels like a real step up in intelligence. Code quality is noticeably improved: it cuts out the meaningless wrapper functions and fallback scaffolding that used to pile up, and it fixes its own code as it goes. It’s the cleanest jump we’ve seen since the move from Sonnet 3.7 to the Claude 4 series.

For the computer-use work that sits at the heart of XBOW’s autonomous penetration testing, the new Claude Opus 4.7 is a step change: 98.5% on our visual-acuity benchmark versus 54.5% for Opus 4.6.
Our single biggest Opus pain point effectively disappeared, and that unlocks its use for a whole class of work where we couldn’t use it before.

Claude Opus 4.7 is a solid upgrade with no regressions for Vercel. It’s phenomenal on one-shot coding tasks, more correct and complete than Opus 4.6, and noticeably more honest about its own limits. It even does proofs on systems code before starting work, which is new behavior we haven’t seen from earlier Claude models.

Claude Opus 4.7 is very strong and outperforms Opus 4.6 with a 10% to 15% lift in task success for Factory Droids, with fewer tool errors and more reliable follow-through on validation steps. It carries work all the way through instead of stopping halfway, which is exactly what enterprise engineering teams need.

Claude Opus 4.7 autonomously built a complete Rust text-to-speech engine from scratch—neural model, SIMD kernels, browser demo—then fed its own output through a speech recognizer to verify it matched the Python reference. Months of senior engineering, delivered autonomously. The step up from Opus 4.6 is clear, and the codebase is public.

Claude Opus 4.7 passed three TBench tasks that prior Claude models couldn’t, and it’s landing fixes our previous best model missed, including a race condition. It demonstrates strong precision in identifying real issues, and surfaces important findings that other models either gave up on or didn’t resolve. In Qodo’s real-world code review benchmark, we observed top-tier precision.

On Databricks’ OfficeQA Pro, Claude Opus 4.7 shows meaningfully stronger document reasoning, with 21% fewer errors than Opus 4.6 when working with source information. Across our agentic reasoning over data benchmarks, it is the best-performing Claude model for enterprise document analysis.

For Ramp, Claude Opus 4.7 stands out in agent-team workflows.
We’re seeing stronger role fidelity, instruction-following, coordination, and complex reasoning, especially on engineering tasks that span tools, codebases, and debugging context. Compared with Opus 4.6, it needs much less step-by-step guidance, helping us scale the internal agent workflows our engineering teams run.

Claude Opus 4.7 is measurably better than Opus 4.6 for Bolt’s longer-running app-building work, up to 10% better in the best cases, without the regressions we’ve come to expect from very agentic models. It pushes the ceiling on what our users can ship in a single session.

Below are some highlights and notes from our early testing of Opus 4.7:

Instruction following. Opus 4.7 is substantially better at following instructions. Interestingly, this means that prompts written for earlier models can sometimes now produce unexpected results: where previous models interpreted instructions loosely or skipped parts entirely, Opus 4.7 takes the instructions literally. Users should re-tune their prompts and harnesses accordingly.

Improved multimodal support. Opus 4.7 has better vision for high-resolution images: it can accept images up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times as many as prior Claude models. This opens up a wealth of multimodal uses that depend on fine visual detail: computer-use agents reading dense screenshots, data extractions from complex diagrams, and work that needs pixel-perfect references.

Real-world work. As well as its state-of-the-art score on the Finance Agent evaluation (see table above), our internal testing showed Opus 4.7 to be a more effective finance analyst than Opus 4.6, producing rigorous analyses and models, more professional presentations, and tighter integration across tasks. Opus 4.7 is also state-of-the-art on GDPval-AA, a third-party evaluation of economically valuable knowledge work across finance, legal, and other domains.

Memory. Opus 4.7 is better at using file system-based memory.
It remembers important notes across long, multi-session work, and uses them to move on to new tasks that, as a result, need less up-front context.

The charts below display more evaluation results from our pre-release testing, across a range of different domains.

Overall, Opus 4.7 shows a similar safety profile to Opus 4.6: our evaluations show low rates of concerning behavior such as deception, sycophancy, and cooperation with misuse. On some measures, such as honesty and resistance to malicious “prompt injection” attacks, Opus 4.7 is an improvement on Opus 4.6; in others (such as its tendency to give overly detailed harm-reduction advice on controlled substances), Opus 4.7 is modestly weaker. Our alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not fully ideal in its behavior”. Note that Mythos Preview remains the best-aligned model we’ve trained according to our evaluations. Our safety evaluations are discussed in full in the Claude Opus 4.7 System Card.

Overall misaligned behavior score from our automated behavioral audit. On this evaluation, Opus 4.7 is a modest improvement on Opus 4.6 and Sonnet 4.6, but Mythos Preview still shows the lowest rates of misaligned behavior.

In addition to Claude Opus 4.7 itself, we’re launching the following updates:

More effort control: Opus 4.7 introduces a new xhigh (“extra high”) effort level between high and max, giving users finer control over the tradeoff between reasoning and latency on hard problems. In Claude Code, we’ve raised the default effort level to xhigh for all plans.
When testing Opus 4.7 for coding and agentic use cases, we recommend starting with high or xhigh effort.

On the Claude Platform (API): as well as support for higher-resolution images, we’re also launching task budgets in public beta, giving developers a way to guide Claude’s token spend so it can prioritize work across longer runs.

In Claude Code: The new /ultrareview slash command produces a dedicated review session that reads through changes and flags bugs and design issues that a careful reviewer would catch. We’re giving Pro and Max Claude Code users three free ultrareviews to try it out. In addition, we’ve extended auto mode to Max users. Auto mode is a new permissions option where Claude makes decisions on your behalf, meaning that you can run longer tasks with fewer interruptions—and with less risk than if you had chosen to skip all permissions.

Opus 4.7 is a direct upgrade to Opus 4.6, but two changes are worth planning for because they affect token usage. First, Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings. This improves its reliability on hard problems, but it does mean it produces more output tokens. Users can control token usage in various ways: by using the effort parameter, adjusting their task budgets, or prompting the model to be more concise. In our own testing, the net effect is favorable—token usage across all effort levels is improved on an internal coding evaluation, as shown below—but we recommend measuring the difference on real traffic. We’ve written a migration guide that provides further advice on upgrading from Opus 4.6 to Opus 4.7.

Score on an internal agentic coding evaluation as a function of token usage at each effort level.
In this evaluation, the model works autonomously from a single user prompt, and results may not be representative of token usage in interactive coding. See the migration guide for more on tuning effort levels.
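The tokenizer change can be budgeted for mechanically before migrating. A minimal sketch, assuming only the 1.0–1.35× multiplier range quoted in the post (the helper name is ours, and real ratios depend on content type):

```python
# Estimate the input-token range for a prompt after migrating from
# Opus 4.6 to Opus 4.7, using the 1.0-1.35x tokenizer multiplier
# range quoted in the post. Treat the upper bound as a planning
# ceiling for context limits and cost, not a prediction.

def migrated_token_range(opus_46_tokens: int,
                         low: float = 1.0,
                         high: float = 1.35) -> tuple:
    """Return (min, max) expected token counts under the new tokenizer."""
    return (round(opus_46_tokens * low), round(opus_46_tokens * high))

# A prompt that was 200,000 tokens under the old tokenizer could
# grow to as many as 270,000 tokens in the worst case:
lo, hi = migrated_token_range(200_000)
```

Multiplying the measured token count of real traffic by the upper bound is a quick way to check whether existing prompts still fit within context and budget limits after the upgrade.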
...
Read the original on www.anthropic.com »
In September 2024, Amandla Thomas-Johnson was a Ph.D. candidate studying in the U.S. on a student visa when he briefly attended a pro-Palestinian protest. In April 2025, Immigration and Customs Enforcement (ICE) sent Google an administrative subpoena requesting his data. The next month, Google gave Thomas-Johnson’s information to ICE without giving him the chance to challenge the subpoena, breaking a nearly decade-long promise to notify users before handing their data to law enforcement.
Google names a handful of exceptions to this promise (such as if Google receives a gag order from a court) that do not apply to Thomas-Johnson’s case. While ICE “requested” that Google not notify Thomas-Johnson, the request was not enforceable or mandated by a court. Today, the Electronic Frontier Foundation sent complaints to the California and New York Attorneys General asking them to investigate Google for deceptive trade practices for breaking that promise. You can read about the complaints here. Below is Thomas-Johnson’s account of his ordeal.
I thought my ordeal with U.S. immigration authorities was over a year ago, when I left the country, crossing into Canada at Niagara Falls.
By that point, the Trump administration had effectively turned federal power against international students like me. After I attended a pro-Palestine protest at Cornell University—for all of five minutes—the administration’s rhetoric about cracking down on students protesting what we saw as genocide forced me into hiding for three months. Federal agents came to my home looking for me. A friend was detained at an airport in Tampa and interrogated about my whereabouts.
I’m currently a Ph.D. student. Before that, I was a reporter. I’m a dual British and Trinidad and Tobago citizen. I have not been accused of any crime.
I believed that once I left U.S. territory, I had also left the reach of its authorities. I was wrong.
Weeks later, in Geneva, Switzerland, I received what looked like a routine email from Google. It informed me that the company had already handed over my account data to the Department of Homeland Security.
At first, I wasn’t alarmed. I had seen something similar before. An associate of mine, Momodou Taal, had received advance notice from Google and Facebook that his data had been requested, and law enforcement eventually withdrew the subpoenas before the companies turned over his data.
I assumed I would be given the same opportunity. But the language in my email was different. It was final: “Google has received and responded to legal process from a law enforcement authority compelling the release of information related to your Google Account.”
Google had already disclosed my data without telling me. There was no opportunity to contest it.
To be clear, this should not have happened this way. Google promises that it will notify users before their data is handed over in response to legal processes, including administrative subpoenas. That notice is meant to provide a chance to challenge the request. In my case, that safeguard was bypassed. My data was handed over without warning—at the request of an administration targeting students engaged in protected political speech.
Months later, my lawyer at the Electronic Frontier Foundation obtained the subpoena itself. On paper, the request focused largely on subscriber information: IP addresses, physical address, other identifiers, and session times and durations.
But taken together, these fragments form something far more powerful—a detailed surveillance profile. IP logs can be used to approximate location. Physical addresses show where you sleep. Session times would show when you were communicating with friends or family. Even without message content, the picture that emerges is intimate and invasive.
What this experience has made clear is that anyone can be targeted by law enforcement. And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Together, they can combine state power, corporate data, and algorithmic inference in ways that are difficult to see—and even harder to challenge.
The consequences of what happened to me are not abstract. I left the United States. But I do not feel that I have left its reach. Being investigated by the federal government is intimidating. Questions run through your head. Am I now a marked individual? Will I face heightened scrutiny if I continue my reporting? Can I travel safely to see family in the Caribbean?
Who, exactly, can I hold accountable?
Update: This post has been updated to include more information about Google’s exceptions to their notification policy, none of which applied to the subpoena targeting Thomas-Johnson.
...
Read the original on www.eff.org »
Today, we’re launching Claude Design, a new Anthropic Labs product that lets you collaborate with Claude to create polished visual work like designs, prototypes, slides, one-pagers, and more.
Claude Design is powered by our most capable vision model, Claude Opus 4.7, and is available in research preview for Claude Pro, Max, Team, and Enterprise subscribers. We’re rolling out to users gradually throughout the day.
Even experienced designers have to ration exploration—there’s rarely time to prototype a dozen directions, so you limit yourself to a few. And for founders, product managers, and marketers with an idea but not a design background, creating and sharing those ideas can be daunting.
Claude Design gives designers room to explore widely and everyone else a way to produce visual work. Describe what you need and Claude builds a first version. From there, you refine through conversation, inline comments, direct edits, or custom sliders (made by Claude) until it’s right. When given access, Claude can also apply your team’s design system to every project automatically, so the output is consistent with the rest of your company’s designs.
Teams have been using Claude Design for:
* Realistic prototypes: Designers can turn static mockups into easily-shareable interactive prototypes to gather feedback and user-test, without code review or PRs.
* Product wireframes and mockups: Product Managers can sketch out feature flows and hand them off to Claude Code for implementation, or share them with designers to refine further.
* Design explorations: Designers can quickly create a wide range of directions to explore.
* Pitch decks and presentations: Founders and Account Executives can go from a rough outline to a complete, on-brand deck in minutes, and then export as a PPTX or send to Canva.
* Marketing collateral: Marketers can create landing pages, social media assets, and campaign visuals, then loop in designers to polish.
* Frontier design: Anyone can build code-powered prototypes with voice, video, shaders, 3D and built-in AI.
Your brand, built in. During onboarding, Claude builds a design system for your team by reading your codebase and design files. Every project after that uses your colors, typography, and components automatically. You can refine the system over time, and teams can maintain more than one.
Import from anywhere. Start from a text prompt, upload images and documents (DOCX, PPTX, XLSX), or point Claude at your codebase. You can also use the web capture tool to grab elements directly from your website so prototypes look like the real product.
Refine with fine-grained controls. Comment inline on specific elements, edit text directly, or use adjustment knobs to tweak spacing, color, and layout live. Then ask Claude to apply your changes across the full design.
Collaborate. Designs have organization-scoped sharing. You can keep a document private, share it so anyone in your organization with the link can view it, or grant edit access so colleagues can modify the design and chat with Claude together in a group conversation.
Export anywhere. Share designs as an internal URL within your organization, save as a folder, or export to Canva, PDF, PPTX, or standalone HTML files.
Handoff to Claude Code. When a design is ready to build, Claude packages everything into a handoff bundle that you can pass to Claude Code with a single instruction.
Over the coming weeks, we’ll make it easier to build integrations with Claude Design, so you can connect it to more of the tools your team already uses.
Claude Design is available for Claude Pro, Max, Team, and Enterprise subscribers. Access is included with your plan and uses your subscription limits, with the option to continue beyond those limits by enabling extra usage.
For Enterprise organizations, Claude Design is off by default. Admins can enable it in Organization settings.
...
Read the original on www.anthropic.com »
Flock Safety markets AI surveillance that goes far beyond reading license plates; color, bumper stickers, dents, and other features are used to build databases and identify movement patterns. These systems are spreading rapidly, often without oversight, and are accessible to police without a warrant. They raise serious privacy and legal concerns, and contribute to a nationwide trend toward mass surveillance.
While this and other systems like it claim to reduce crime, there is little evidence to support that claim - and significant risk of abuse. Real public safety comes from investing in communities, not stalking them.
Flock Safety markets its devices as “AI-powered precision policing technology” - far beyond basic license plate readers (ALPRs) (Flock Safety). The system uses AI to create a “Vehicle Fingerprint” - identifying cars not only by license plate, but also by color, make and model, roof racks, dents/damage, wheel type, and more. Even bumper sticker placement is analyzed. This lets law enforcement search for a “blue sedan with damage on the left side” even without a license plate.
But the surveillance goes deeper. Using a feature called “Convoy Analysis”, the system can detect vehicles that frequently appear near each other - suggesting associations between drivers or accomplices. The platform can also flag vehicles that routinely travel to the same locations across time. Flock describes this as a way to “identify suspect vehicles traveling together” or “pinpoint associates” - functionality confirmed in both their marketing and police testimonials (GovTech, ACLU).
The data is logged and made searchable across a nationwide law enforcement network - which officers in subscribing agencies can access without a warrant. According to Flock, the system can automatically flag a vehicle based on its history, route, or presence in multiple locations linked to a crime (Flock HOA Marketing).
While these tools may aid in locating stolen cars or missing persons, they also create a detailed record of everyone’s movements, associations, and routines. That data has already been misused - like when a Kansas police chief used Flock cameras 228 times to stalk an ex-girlfriend and her new partner without cause (Local12).
The scope of this tracking becomes clear when you see real-world examples. In 2025, a journalist drove 300 miles across rural Virginia and was captured by nearly 50 surveillance cameras operated by 15 different law enforcement agencies. When he requested his own surveillance footage, he discovered the cameras had documented patterns that made his behavior “predictable to anyone looking at it.” Most troubling: while the journalist couldn’t remember specific dates he’d made certain trips, police would know instantly - without any warrant or suspicion of wrongdoing (Cardinal News).
See also:
EFF: How ALPRs Work,
The Secure Dad on Flock Cameras,
Compass IT: “Privacy Concerns with Flock”,
ACLU: Flock is building a new AI-driven mass surveillance system,
Wikipedia: Flock Safety
How Widespread Are These Cameras?
Understanding what Flock cameras are leads to a natural question: how common are they in our communities?
The crowdsourced map made available on DeFlock.me currently shows roughly half of the >100,000 Flock AI cameras nationwide. Here are examples from three major cities showing how pervasive this surveillance has become:
These systems are expanding rapidly, often with little public debate or oversight. The Atlas of Surveillance, maintained by the Electronic Frontier Foundation, has documented over 3,000 law enforcement and government agencies using Flock products as of 2025 - a number growing monthly.
The Fourth Amendment was written in response to the British Crown’s “general warrants” - broad authorizations to search anyone, anywhere, anytime. Mass surveillance revives that threat in digital form. Simply moving freely in public should not require that you be profiled and scrutinized.
It is important to point out that courts have repeatedly ruled so-called “dragnet warrants” - which often rely on cell phone GPS location data - unconstitutional under the Fourth Amendment. But Flock’s status as a private company means it can collect and sell data with fewer restrictions, exploiting a legal gray zone the courts have yet to fully address.
“If you’ve got nothing to hide, you’ve got nothing to fear” is a tempting thought - until someone misuses your information. Privacy isn’t about hiding wrongdoing. It’s about autonomy, dignity, and the ability to live free from unjust scrutiny. “Saying you don’t care about privacy because you have nothing to hide is like saying you don’t care about free speech because you have nothing to say.” - Edward Snowden
As one observer put it: “While today they are no threat to me…circumstances change, leadership changes, laws change. When you really boil this down, what is this nationwide system? What did Flock really make? It’s a weapon. A silent weapon. Right now it targets what many would agree are criminals. But with the flip of a switch this system can be used to target or oppress anybody the people in power decide is a threat.”
We are fast approaching a world in which going about one’s business in public means being entered into a law enforcement database. Automated license plate readers collect location data on millions of people with no suspicion of wrongdoing, creating vast databases of where we go and when.
Flock cameras and similar surveillance tools raise serious Fourth Amendment concerns by enabling broad, warrantless tracking of people’s movements. In 2024, a trial court held that the Flock network functioned as a “dragnet over the entire city.” The judge in the case equated it to placing GPS trackers on every vehicle - a practice that the U.S. Supreme Court has ruled requires a warrant (Virginia Mercury, The Virginian Pilot).
The American Civil Liberties Union (ACLU) warns that automatic license plate readers (ALPRs) are becoming tools for routine mass location tracking and surveillance, with too few rules governing their use. These systems can collect and store data on millions of innocent drivers, creating detailed records of people’s movements without their knowledge or consent. (ACLU)
Legal scholars have highlighted the broader implications of such surveillance. Neil Richards, writing in the Harvard Law Review, emphasizes that surveillance can chill the exercise of civil liberties, particularly intellectual privacy, and increase the risk of blackmail, coercion, and discrimination. (Harvard Law Review)
Flock’s data further enables already biased enforcement. In Oak Park, Illinois, 84% of drivers stopped using Flock camera alerts were Black - despite the town being only 21% Black. (Freedom to Thrive).
See also:
ACLU on Unaccountable Surveillance Tech
Mass surveillance isn’t just about policing; there are major business interests involved.
Flock Safety collaborates with law enforcement agencies to promote the adoption of its license plate recognition cameras by encouraging private entities such as businesses and HOAs to share their footage. This practice broadens the surveillance net by granting access to what would otherwise have been private data (Flock Safety FAQ).
Instances have been reported where HOAs installed Flock cameras on public roads, leading to debates over the extent of surveillance and the privacy rights of residents and visitors (Oaklandside), (Forest Brooke HOA).
The ACLU has highlighted that the expansive reach of these surveillance networks could enable law enforcement to construct detailed profiles of individuals’ movements and associations, underscoring the need for transparency and oversight (ACLU).
Additionally, Flock markets its surveillance technology to employers and retail establishments, further blurring the lines between public safety initiatives and profit-driven surveillance. For example, major retail property owners have entered into agreements to share AI-powered surveillance feeds directly with law enforcement, expanding the scope of monitoring beyond public spaces. (Forbes) [Mirror]
Lowe’s is a significant private client of Flock Safety, having implemented their systems in numerous locations to enhance security and deter theft.
While Flock specifically does not offer facial recognition (today), Lowe’s has faced legal troubles over its use of facial recognition systems from other vendors. In 2019, a class action lawsuit was filed in Cook County Circuit Court, alleging that Lowe’s used facial recognition software to track customers’ movements without their consent, violating Illinois’ Biometric Information Privacy Act (BIPA). The lawsuit claimed that Lowe’s collected and stored biometric data from customers and shared it with other retailers. (Security InfoWatch)
Some justify these systems as making us safer, but the reality is more complicated.
Flock advertises a drop in crime, but the true cost is a culture of mistrust and preemptive suspicion. As the EFF warns, communities are being sold a false promise of safety - at the expense of civil rights (EFF).
A 2019 report by the NAACP Legal Defense Fund warned that predictive policing tools premised on biased data will reflect that bias, reinforcing existing discrimination in the criminal justice system. These tools may appear objective, but instead often amplify historic injustice under a veneer of scientific credibility (NAACP LDF).
True safety comes from healthy, empowered communities, not automated suspicion. Community-led safety initiatives have demonstrated significant results: North Lawndale saw a 58% decrease in gun violence after READI Chicago began implementing their program there. In cities nationwide, the presence of local nonprofits has been statistically linked to reductions in homicide, violent crime, and property crime (Brennan Center, The DePaulia, American Sociological Association).
Zooming out, Flock is just one part of a larger movement toward ubiquitous surveillance.
Flock’s expansion is part of a broader movement toward ubiquitous mass surveillance - where your associations, online comments, purchases, movements, and more may be logged, indexed, analyzed by AI, and made easily searchable by almost any government agency at any time.
This progression from data collection to surveillance follows a familiar pattern in tech: tools sold for convenience often evolve into tools of control.
Bruce Schneier, a prominent cryptographer and privacy advocate, put it simply: “Surveillance is the business model of the Internet.” What begins as data collection for convenience or security often evolves into persistent monitoring, normalization of tracking, and the loss of autonomy.
As Edward Snowden warned: “A child born today will grow up with no conception of privacy at all. They’ll never know what it means to have a private moment to themselves - an unrecorded, unanalyzed thought.”
In Dunwoody, Georgia, drones are now dispatched from Flock Safety “nests” to respond to 911 calls autonomously, often arriving in under 90 seconds (Axios).
In California, 480 high-tech cameras were recently installed to surveil Oakland’s highways - tracking license plates, bumper stickers, and vehicle types - with alerts sent to law enforcement in real-time (AP News).
This surveillance infrastructure extends far beyond law enforcement. The U.S. military has spent at least $3.5 million on a tool called “Augury” that monitors “93% of internet traffic,” capturing browsing history, email data, and sensitive cookies from Americans - all “without informed consent.” Senator Ron Wyden has received whistleblower complaints about this warrantless surveillance program (VICE).
Meanwhile, the current administration is working with Palantir Technologies to create what Ron Paul calls a “big ugly database” - a comprehensive collection of all information held by federal agencies on all U.S. citizens. This would include health records, education records, tax returns, firearm purchases, and associations with any groups labeled “extremist.” Palantir, funded by the CIA’s In-Q-Tel venture capital firm, is “literally the creation of the surveillance state” (OC Register).
Even basic tools we use daily are being transformed into surveillance instruments. Recent court rulings now allow the government to order companies like OpenAI to indefinitely preserve all ChatGPT conversations. Users who thought they were having private conversations - like “talking to a friend who can keep a secret” - discovered this only through web forums, not company disclosure. The judge’s order enables what one user called a “nationwide mass surveillance program” disguised as a civil discovery process (TechRadar).
This pattern repeats throughout history: people abandon liberty for promises of safety. After 9/11, many supported the PATRIOT Act. During COVID, many embraced mask and vaccine mandates. After the 2008 financial crisis, many supported bailouts because leaders said they had to “abandon free-market principles to save the free-market system.” Today, some support mass surveillance because they believe it will target only “the right people” - but circumstances change, leadership changes, laws change.
See also:
Ars Technica: “AI Cameras to Ensure Good Behavior”,
Video: Predictive Surveillance Trends
So where is all of this heading? The trajectory is troubling.
Flock’s cameras capture detailed information about the daily lives of anyone passing by, without offering a genuine opt-out mechanism. Concurrently, Palantir Technologies has secured a $30 million contract with ICE to develop a system that consolidates sensitive personal data - biometrics, geolocation, and other identifiers - from various federal agencies, enabling near real-time tracking and categorization of individuals for immigration enforcement (Wired). It should be no surprise that this system will offer no meaningful opt-out either.
The integration of surveillance technologies such as Flock Safety’s license plate readers and Palantir’s ImmigrationOS platform signifies a shift toward comprehensive monitoring of individuals’ movements and behaviors. It is not difficult to imagine the scope of such systems’ usage growing with time.
These developments raise concerns about the erosion of privacy and the potential for misuse of aggregated data. The pervasive nature of such surveillance systems means that individuals are monitored without explicit consent, and the data collected can be repurposed beyond its original intent. As these technologies become more entrenched, the line between public safety and invasive oversight blurs, prompting critical discussions about the balance between security and individual freedoms.
Some of the most chilling validations of mass surveillance come not from critics - but from the very people promoting it. These aren’t out-of-context slips; they are open endorsements of a world where privacy is sidelined in favor of control, compliance, and convenient enforcement.
“Anything technology they think, ‘Oh it’s a boogeyman. It’s Big Brother watching you,’ … No, Big Brother is protecting you.”
- Eric Adams, NYC Mayor (Politico, 2022)
New York’s mayor casually rebrands Orwell’s authoritarian icon as a guardian figure. It’s a startling reversal - not a warning about overreach, but a defense of it.
“Instead of being reactive, we are going to be proactive… [we] use data to predict where future crimes are likely to take place and who is likely to commit them… then deputies would find those people and take them out.”
- Chris Nocco, Pasco County Sheriff (Tampa Bay Times, 2020)
This “Minority Report”-style program led to harassment of innocent people - and was ultimately found unconstitutional in court (Institute for Justice). A rare win, but a stark example of where unchecked surveillance can go.
“The use of net flow data by NCIS does not require a warrant.”
- Charles E. Spirtos, Navy Office of Information (VICE, 2024)
The military’s position on monitoring Americans’ internet traffic without judicial oversight. This statement came after a whistleblower complained about warrantless surveillance activities to Senator Ron Wyden’s office.
“Tech firms should not develop their systems and services, including end-to-end encryption, in ways that empower criminals or put vulnerable people at risk.”
- Priti Patel, UK Home Secretary (UK Govt, 2019; Infosecurity Magazine)
The logic: protecting everyone’s privacy is dangerous. This kind of framing justifies backdoors into secure systems - which inevitably get abused.
“The risk [of built-in weaknesses]… is acceptable because we are talking about consumer products… and not nuclear launch codes.”
- William Barr, U.S. Attorney General (TechCrunch, 2019)
A clear “rules for thee but not for me” mentality. Your data, messages, and devices don’t deserve the same protections as the government’s - because you’re just a civilian.
China exploited a covert surveillance interface - originally built for lawful access by U.S. law enforcement - to tap into Americans’ private phone records, messages, and geolocation data. (CISA)
Telecom providers are required by law to build these backdoors for law enforcement. The “Salt Typhoon” incident shows the risk: once a backdoor exists, it can be discovered and abused - and not just by “the good guys.” (EFF, Reason)
...
Read the original on stopflock.com »
A real-world production migration from DigitalOcean to Hetzner dedicated, handling 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and live mobile app traffic — with zero downtime.
Running a software company in Turkey has become increasingly expensive over the last few years. Skyrocketing inflation and a dramatically weakening Turkish Lira against the US dollar have turned dollar-denominated infrastructure costs into a serious burden. A bill that felt manageable two years ago now hits very differently when the exchange rate has multiplied several times over.
Every month, we were paying $1,432 to DigitalOcean for a droplet with 192GB RAM, 32 vCPUs, 600GB SSD, two block volumes (1TB each), and backups enabled. The server was fine — but the price-to-performance ratio had stopped making sense.
Then we discovered the Hetzner AX162-R.
That’s $14,388 saved per year — for a server that’s objectively more powerful in every dimension. The decision was easy.
I’ve been a DigitalOcean customer for nearly 8 years. They have a great product and I have no complaints about reliability or developer experience. But looking at those numbers now, I cannot help feeling a bit sad about all the extra money I left on the table over the years. If you are running steady-state workloads and not actively using DO’s ecosystem features, do yourself a favor and check dedicated server pricing before your next renewal.
* Several live mobile apps serving hundreds of thousands of users
Old server: CentOS 7 — long past its end-of-life, but still running in production. New server: AlmaLinux 9.7 — a RHEL 9 compatible distribution and the natural successor to CentOS. This migration was also an opportunity to finally escape an OS that hadn’t received security updates in years.
The naive approach — change DNS, restart everything, hope for the best — wasn’t acceptable. Instead, we designed a proper migration path with six phases:
Phase 1 — Full stack installation on the new server
Nginx (compiled from source with identical flags), PHP (via Remi repo, with the same .ini config files from the old server), MySQL 8.0, Neo4J Graph DB, GitLab EE, Node.js, Supervisor, and Gearman. Every service had to be configured to match the old server’s behavior before we touched a single DNS record.
SSL certificates were handled by rsyncing the entire /etc/letsencrypt/ directory from the old server to the new one. After the migration was complete and all traffic was flowing through the new server, we force-renewed all certificates in one shot.
Phase 2 — Web files cloned with rsync
The entire /var/www/html directory (~65 GB, 1.5 million files) was cloned to the new server using rsync over SSH with the --checksum flag for integrity verification. We ran a final incremental sync right before cutover to catch any files changed after the initial clone.
Phase 3 — MySQL master to slave replication
Rather than taking the database offline for a dump-and-restore, we set up live replication. The old server became master, the new server a read-only slave. We used mydumper for the initial bulk load, then started replication from the exact binlog position recorded in the dump metadata. This kept both databases in real-time sync until the moment of cutover.
Phase 4 — DNS TTL reduction
We scripted the DigitalOcean DNS API to lower all A and AAAA record TTLs from 3600 to 300 seconds — without touching MX or TXT records (changing mail record TTLs can cause deliverability issues). After waiting one hour for old TTLs to expire globally, we were ready to cut over in under 5 minutes.
Phase 5 — Old server nginx converted to reverse proxy
We wrote a Python script that parsed every server {} block across all 34 Nginx site configs, backed up the originals, and replaced them with proxy configurations pointing to the new server. This meant that during DNS propagation, any request still hitting the old IP was silently forwarded. No user would see a disruption.
Phase 6 — DNS cutover and decommission
A single Python script hit the DigitalOcean API and flipped all A records to the new server IP in seconds. The old server remained as a cold standby for one week, then was shut down.
The key insight: at no point did we have a window where the service was unavailable. Traffic was always being served — either directly or through the proxy.
This was the most complex part of the entire operation.
We used mydumper instead of the standard mysqldump — and it made an enormous difference. By leveraging the new server’s 48 CPU cores for parallel export and import, what would have taken days with a traditional single-threaded mysqldump was completed in hours. If you’re migrating a large MySQL database and you’re not using mydumper/myloader, you’re doing it the hard way.
The main dump’s metadata file recorded the binlog position at the time of the snapshot:
File: mysql-bin.000004
Position: 21834307
This would be our replication starting point.
Once the dump was complete, we transferred it to the new server using rsync over SSH. With 248 GB of compressed chunks, this was significantly faster than any other transfer method.
The --compress flag in mydumper paid off here — compressed chunks transferred much faster over the wire.
Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7 — an outdated version that had been running in production for years. Before the migration, we ran mysqlcheck --check-upgrade to verify that our data was compatible with MySQL 8.0. It came back clean, so we installed the latest MySQL 8.0 Community on the new server. The performance improvement across all our projects was immediately noticeable — query execution times dropped significantly thanks to MySQL 8.0’s improved optimizer and InnoDB enhancements.
That said, the version jump did introduce one tricky problem.
After import, the mysql.user table had the wrong column structure — 45 columns instead of the expected 51. This caused mysql.infoschema to be missing, breaking user authentication.
The first attempt to repair it failed with:
ERROR: ‘sys.innodb_buffer_stats_by_schema’ is not VIEW
The sys schema had been imported as regular tables instead of views, so it had to be rebuilt before the repair could complete.
With both dumps imported, we configured the new server as a replica of the old one:
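As a hedged sketch (not the article's actual script): the replication start point can be lifted straight from the mydumper metadata file shown above. The host, user, and password below are placeholders.

```python
# Hedged sketch: build a CHANGE MASTER TO statement from mydumper's
# metadata file. Host/user/password are hypothetical placeholders.
import re

def replication_sql(metadata: str, host: str, user: str, password: str) -> str:
    """Extract binlog file/position from mydumper metadata and emit SQL."""
    log_file = re.search(r"File:\s*(\S+)", metadata).group(1)
    position = re.search(r"Position:\s*(\d+)", metadata).group(1)
    return (
        "CHANGE MASTER TO "
        f"MASTER_HOST='{host}', MASTER_USER='{user}', "
        f"MASTER_PASSWORD='{password}', "
        f"MASTER_LOG_FILE='{log_file}', MASTER_LOG_POS={position}; "
        "START SLAVE;"
    )

metadata = "File: mysql-bin.000004\nPosition: 21834307"
print(replication_sql(metadata, "OLD_SERVER_IP", "repl", "***"))
```

Generating the statement from the metadata file, rather than typing the position by hand, avoids the classic off-by-one that breaks replication silently.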
Almost immediately, replication stopped with error 1062 (Duplicate Key). This happened because our dump was taken in two passes — during the gap between them, rows were written to certain tables, and now both the imported dump and the binlog replay were trying to insert the same rows.
IDEMPOTENT mode (e.g. SET GLOBAL replica_exec_mode = 'IDEMPOTENT' on MySQL 8.0) silently skips duplicate-key and missing-row errors. All critical databases synced without a single error. Within a few minutes, Seconds_Behind_Master dropped to 0.
Before touching a single DNS record, we needed to verify that all services were working correctly on the new server. The trick: we temporarily edited the /etc/hosts file on our local machine to point our domain names to the new server’s IP.
# /etc/hosts (local machine)
NEW_SERVER_IP yourdomain1.com
NEW_SERVER_IP yourdomain2.com
# … and so on for all your domains
With this in place, our browsers and Postman would hit the new server while the rest of the world was still going to the old one. We ran through our API endpoints, checked admin panels, and verified that every service was responding correctly. Only after this confirmation did we proceed with the cutover.
Once master-slave replication was fully synchronized, we noticed that INSERT statements were succeeding on the new server when they shouldn’t have been — read_only = 1 was set, but writes were going through.
The reason: all PHP application users had been granted SUPER privilege. In MySQL, SUPER bypasses read_only.
We revoked it from all 24 application users:
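A minimal sketch of that step, with hypothetical user and host names (the article's 24 actual accounts are not listed here):

```python
# Hedged sketch: generate REVOKE statements for application users.
# User/host pairs are illustrative, not the article's real accounts.
def revoke_super(users: list[tuple[str, str]]) -> list[str]:
    return [f"REVOKE SUPER ON *.* FROM '{u}'@'{h}';" for u, h in users]

for stmt in revoke_super([("app_user1", "localhost"), ("app_user2", "%")]):
    print(stmt)
print("FLUSH PRIVILEGES;")
```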
After this, read_only = 1 correctly blocked all writes from application users while allowing replication to continue.
All domains were managed through DigitalOcean DNS (with nameservers pointed from GoDaddy). We scripted the TTL reduction against the DigitalOcean API, only touching A and AAAA records — not MX or TXT records, since changing mail record TTLs can cause deliverability issues with Google Workspace.
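The logic described above can be sketched as follows. The record filter is the important part (only A/AAAA, never MX/TXT); the PUT against DigitalOcean's v2 records endpoint is shown with a placeholder token and is a hedged approximation, not the article's actual script.

```python
# Hedged sketch of the TTL-lowering pass: touch only A/AAAA records,
# never MX or TXT. Token and domain values are placeholders.
import json
import urllib.request

LOWER_TYPES = {"A", "AAAA"}

def records_to_lower(records: list[dict], new_ttl: int = 300) -> list[dict]:
    """Pick only A/AAAA records whose TTL is still above the target."""
    return [r for r in records if r["type"] in LOWER_TYPES and r["ttl"] > new_ttl]

def lower_ttl(token: str, domain: str, record: dict, new_ttl: int = 300) -> None:
    """PUT the new TTL to DigitalOcean's domain-records endpoint."""
    req = urllib.request.Request(
        f"https://api.digitalocean.com/v2/domains/{domain}/records/{record['id']}",
        data=json.dumps({"ttl": new_ttl}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

sample = [{"id": 1, "type": "A", "ttl": 3600},
          {"id": 2, "type": "MX", "ttl": 3600}]
print(records_to_lower(sample))
```

Keeping the "which records may be touched" decision in a pure function makes it trivial to dry-run the plan before any API call is made.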
After waiting one hour for old TTLs to expire, we were ready.
Rather than editing 34 config files by hand, we wrote a Python script that parsed every server {} block in every config file, identified the main content blocks, replaced them with proxy configs, and backed up originals as .backup files.
The key: proxy_ssl_verify off — the new server’s SSL cert is valid for the domain, not for the IP address. Disabling verification here is fine because we control both ends.
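A stripped-down sketch of the generated proxy block, assuming a placeholder new-server IP; real configs would keep each site's original listen and ssl_certificate directives, which this omits.

```python
# Hedged sketch of one generated proxy config. The IP is a TEST-NET-3
# placeholder, and ssl_certificate directives are omitted for brevity.
NEW_SERVER_IP = "203.0.113.10"  # placeholder, not the article's real IP

def proxy_block(server_name: str) -> str:
    return f"""server {{
    listen 443 ssl;
    server_name {server_name};
    location / {{
        proxy_pass https://{NEW_SERVER_IP};
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # cert on the new box matches the domain, not the IP
        proxy_ssl_verify off;
    }}
}}"""

print(proxy_block("example.com"))
```

Forwarding the Host header is what lets the new server pick the right virtual host even though the proxy connects by IP.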
With replication at Seconds_Behind_Master: 0 and the reverse proxy ready, we executed the cutover in order:
1. New server: STOP SLAVE;
2. New server: SET GLOBAL read_only = 0;
3. New server: RESET SLAVE ALL;
4. New server: supervisorctl start all
5. Old server: nginx -t && systemctl reload nginx (proxy goes live)
6. Old server: supervisorctl stop all
7. Mac: python3 do_cutover.py (DNS: all A records to new server IP)
8. Wait: ~5 minutes for propagation
9. Old server: comment out all crontab entries
The DNS cutover script hit the DigitalOcean API and changed every A record to the new server IP — in about 10 seconds.
After migration, we discovered many GitLab project webhooks were still pointing to the old server IP. We wrote a script to scan all projects via the GitLab API and update them in bulk.
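The URL-rewriting core of such a script might look like this (the GitLab API calls to list projects and update each hook are omitted; both IPs are placeholders):

```python
# Hedged sketch: rewrite any webhook URL still pointing at the old IP.
# IPs below are documentation placeholders, not real servers.
from urllib.parse import urlsplit, urlunsplit

def retarget(url: str, old_ip: str, new_ip: str) -> str:
    """Return url with its host swapped to new_ip if it matches old_ip."""
    parts = urlsplit(url)
    if parts.hostname == old_ip:
        netloc = parts.netloc.replace(old_ip, new_ip, 1)
        return urlunsplit(parts._replace(netloc=netloc))
    return url

print(retarget("http://198.51.100.7:8080/hook", "198.51.100.7", "203.0.113.10"))
```

Comparing parsed hostnames instead of doing a blind string replace avoids mangling URLs where the old IP happens to appear in a path or query string.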
We went from $1,432/month down to $233/month — saving $14,388 per year. And we ended up with a more powerful machine:
The entire migration took roughly 24 hours. No users were affected.
MySQL replication is your best friend for zero-downtime migrations. Set it up early, let it catch up, then cut over with confidence.
Check your MySQL user privileges before migration. SUPER privilege bypasses read_only — if your app users have it, your slave environment isn’t actually read-only.
Script everything. DNS updates, nginx config rewrites, webhook updates — doing these by hand across 34+ sites would have taken hours and introduced errors.
mydumper + myloader dramatically outperforms mysqldump for large datasets. Parallel dump/restore with 32 threads cut what would have been days of work down to hours.
Cloud providers are expensive for steady-state workloads. If you’re not using autoscaling or ephemeral infrastructure, a dedicated server often delivers better performance at a fraction of the cost.
All Python scripts used in this migration are open-sourced and available on GitHub:
* do_list_domains_ttl.py — List all DigitalOcean domains with their A records, IPs, and TTLs
* do_to_hetzner_bulk_dns_records_import.py — Migrate all DNS zones from DigitalOcean to Hetzner DNS
* do_cutover_to_new_ip.py — Flip all A records from old server IP to new server IP
* mysql_compare.py — Compare row counts across all tables on two MySQL servers
* final_gitlab_webhook_update.py — Update all GitLab project webhooks to the new server IP
All scripts support a DRY_RUN = True mode so you can safely preview changes before applying them.
...
Read the original on isayeter.com »
We’ve identified a security incident that involved unauthorized access to certain internal Vercel systems. We are actively investigating, and we have engaged incident response experts to help investigate and remediate. We have notified law enforcement and will update this page as the investigation progresses.
At this time, we have identified a limited subset of customers that were impacted and are engaging with them directly.
Our services remain operational, and we will continue to update this page with new information.
We are taking actions to protect Vercel systems and customers.
Our investigation is ongoing. In the meantime, here are best practices you can follow for peace of mind:
* Review the activity log for your account and environments for suspicious activity.
* Review and rotate environment variables. Environment variables marked as “sensitive” in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed. However, if any of your environment variables contain secrets (API keys, tokens, database credentials, signing keys) that were not marked as sensitive, those values should be treated as potentially exposed and rotated as a priority.
* Take advantage of the sensitive environment variables feature going forward, so that secret values are protected from being read in the future.
For support rotating your secrets or other technical support, contact us through vercel.com/help.
Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.
We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.
...
Read the original on vercel.com »
Google collects statistics about IPv6 adoption in the Internet on an ongoing basis. We hope that publishing this information will help Internet providers, website owners, and policy makers as the industry rolls out IPv6.
We are continuously measuring the availability of IPv6 connectivity among Google users. The graph shows the percentage of users that access Google over IPv6.
The chart above shows the availability of IPv6 connectivity around the world.
* Regions where IPv6 is more widely deployed (the darker the green, the greater the deployment) and users experience infrequent issues connecting to IPv6-enabled websites.
* Regions where IPv6 is more widely deployed but users still experience significant reliability or latency issues connecting to IPv6-enabled websites.
* Regions where IPv6 is not widely deployed and users experience significant reliability or latency issues connecting to IPv6-enabled websites.
...
Read the original on www.google.com »
Six million fake stars, $0.06 per click, and a VC funding pipeline that treats GitHub popularity as proof of traction. We ran our own analysis on 20 repos and found the fingerprints.
A GitHub star costs $0.06 at the low end. A seed round unlocks $1 million to $10 million. The math is obvious, and thousands of repositories are exploiting it.
This investigation maps the full ecosystem: from the peer-reviewed research quantifying the problem, to the marketplaces selling stars openly, to the venture capital pipeline that converts star counts into funding decisions. We ran our own analysis on 20 repositories using the GitHub API, sampling thousands of stargazer profiles to independently verify which projects show fingerprints of manipulation - and which don’t.
The picture that emerges is a mature, professionalized shadow economy operating in plain sight.
The definitive account comes from a peer-reviewed study presented at ICSE 2026 by researchers at Carnegie Mellon University, North Carolina State University, and Socket. Their tool, StarScout, analyzed 20 terabytes of GitHub metadata - 6.7 billion events and 326 million stars from 2019 to 2024 - and identified approximately 6 million suspected fake stars distributed across 18,617 repositories by roughly 301,000 accounts.
The problem accelerated dramatically in 2024. By July, 16.66% of all repositories with 50 or more stars were involved in fake star campaigns - up from near-zero before 2022. The researchers’ detection proved accurate: 90.42% of flagged repositories and 57.07% of flagged accounts had been deleted as of January 2025, confirming GitHub itself recognized these as illegitimate.
AI and LLM repositories emerged as the largest non-malicious category of fake-star recipients, ahead of blockchain/cryptocurrency projects in absolute volume at 177,000 fake stars. The study notes that many of these “are academic paper repositories or LLM-related startup products.” Critically, 78 repositories with detected fake star campaigns appeared on GitHub Trending, proving that purchased stars successfully game the platform’s discovery algorithm.
Earlier foundational work includes Dagster’s March 2023 investigation, where engineers purchased stars from two vendors to study the phenomenon. They found services via basic Google search. A premium vendor - GitHub24, a registered German company (Moller und Ringauf GbR) - charged EUR 0.85 per star and delivered reliably, with all 100 stars persisting after one month. A budget service (Baddhi Shop) sold 1,000 stars for $64, though only 75% survived.
The star-selling ecosystem spans dedicated websites, freelance platforms, exchange networks, and underground channels. At least a dozen active websites sell GitHub stars directly, including SocialPlug.io, Buy.fans, Boost-Like.store, GitHubPromoter.com, Followdeh.com, and Vurike.com.
On Fiverr, 24 active gigs sell GitHub promotion, with packages from $5 for basic stars and forks to $25+ for “organic promotion.” Many use obfuscated language to evade platform filters. Star exchange platforms like GithubStarMate.com and SafeStarExchange.com - both live and operational - enable free mutual starring through credit-based systems.
The infrastructure extends beyond stars. At least seven open-source tools on GitHub (fake-git-history, commit-bot, Commiter, and others) exist specifically to fabricate GitHub contribution graphs. Pre-built GitHub profiles with five-year commit histories and Arctic Code Vault Contributor badges sell for approximately $5,000 on Telegram.
Some vendors offer replacement guarantees - Followdeh advertises 30-day coverage, and premium services promise “non-drop” stars that survive GitHub’s detection systems. SocialPlug claims 3.1 million stars delivered across 53,000+ clients and offers a formal API for programmatic purchasing.
A Tsinghua University study (ACSAC 2020) documented Chinese QQ and WeChat promotion groups with 1,020+ members processing roughly 20 repos per day, generating an estimated $3.4 to $4.4 million annually in promoter profits.
To move beyond reported statistics, we built a GitHub API analysis tool and ran it against 20 repositories: projects flagged by StarScout, fast-growing AI repos from the Runa Capital ROSS Index, and known organic baselines. For each repo, we sampled 150 stargazer profiles and measured account age, public repos, followers, and bio presence.
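The per-profile classification behind those measurements is straightforward. A sketch, assuming each profile dict has the shape of a GitHub GET /users/{login} payload (public_repos, followers, bio, created_at), with stargazer logins sampled from GET /repos/{owner}/{repo}/stargazers; the function names and thresholds here are illustrative, not the investigation's actual tool:

```python
from datetime import datetime, timezone

def is_ghost(profile):
    """Ghost account: zero public repos, zero followers, no bio."""
    return (profile.get("public_repos", 0) == 0
            and profile.get("followers", 0) == 0
            and not profile.get("bio"))

def account_age_days(profile, now=None):
    """Days since the account was created (created_at is ISO 8601 with Z)."""
    created = datetime.fromisoformat(profile["created_at"].replace("Z", "+00:00"))
    return ((now or datetime.now(timezone.utc)) - created).days

def sample_stats(profiles, now=None):
    """Ghost rate and median account age across a stargazer sample."""
    ages = sorted(account_age_days(p, now) for p in profiles)
    return {"ghost_rate": sum(map(is_ghost, profiles)) / len(profiles),
            "median_age_days": ages[len(ages) // 2]}
```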
The fingerprints of manipulation are unmistakable once you know what to look for.
Organic repositories are starred by developers who have been on GitHub for years, maintain their own projects, and follow other users. Ghost accounts - zero repos, zero followers, no bio - make up about 1% of a healthy project’s stargazer base.
These repos share a distinctive fingerprint. The accounts aren’t obviously new - median ages of 1,000+ days - so they pass simple “young account” filters. But they’re empty: a third have zero repos, half to four-fifths have zero followers, and a quarter are complete ghosts. These are aged accounts purchased or farmed specifically for star campaigns.
The fork-to-star ratio is the strongest signal. Flask has 235 forks per 1,000 stars. Shardeum has 22. FreeDomain has 17. When nobody is forking a 157,000-star repository, nobody is using it. The watcher-to-star ratio tells the same story: FreeDomain’s 0.001 means that for every 1,000 people who starred the repo, just one actually watches it for updates.
FreeDomain is worth isolating: 157,000 stars, but only 168 watchers and 2,676 forks. That’s a watcher-to-star ratio 26x lower than Flask. 81.3% of sampled stargazers have zero followers. This is a repository where almost nobody who starred it has any visible presence on GitHub.
Union Labs is the most consequential case. It was ranked #1 on Runa Capital’s ROSS Index for Q2 2025 - a widely cited VC industry report identifying the “hottest open-source startups” - with 54.2x star growth and 74,300 stars. Our analysis found 32.7% zero-repo accounts, 52% zero-follower accounts, and a fork-to-star ratio of 0.052. The StarScout analysis flagged it with 47.4% suspected fake stars. An influential investment-sourcing report that VCs rely on was topped by a project with nearly half its stars suspected as artificial.
RagaAI-Catalyst and openai-fm show clear manipulation signals. RagaAI has 76.2% zero-follower accounts and 28% ghosts - nearly identical to the blockchain pattern. openai-fm is the most extreme case in our dataset: 66% suspicious accounts, 36% ghosts, and a median account age of just 116 days. Two-thirds of its stargazers are less than a year old with virtually no GitHub activity. (The StarScout analysis notes this is likely third-party bots, not OpenAI itself.)
Langflow - flagged by StarScout at 47.9% fake - showed clean metrics in our profile sample, with a median age of 2,859 days and low ghost rates. This likely reflects improved account quality since the StarScout scan. The 0.060 fork-to-star ratio is still notably low - roughly a quarter of Flask’s - suggesting less genuine adoption relative to star count.
For comparison, NousResearch’s hermes-agent looks relatively organic: median age 8 years, 6% ghosts, fork-to-star ratio of 0.133. Despite Reddit accusations of astroturfing, the stargazer population is mostly real developers. The project’s crypto-adjacent audience includes more casual GitHub users, which explains slightly elevated zero-follower rates, but the fundamental engagement pattern is legitimate.
The connection between GitHub star counts and startup funding is not speculative - it is explicitly documented by the investors themselves.
Jordan Segall, Partner at Redpoint Ventures, published an analysis of 80 developer tool companies showing that the median GitHub star count at seed financing was 2,850 and at Series A was 4,980. He confirmed: “Many VCs write internal scraping programs to identify fast growing github projects for sourcing, and the most common metric they look toward is stars.”
Those numbers set an implicit target. For $85 to $285 in budget stars, a startup can manufacture the 2,850-star seed median. For $990 to $4,500, it can reach Series A territory. Against typical seed rounds of $1-10 million, the ROI ranges from 3,500x to 117,000x.
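Spelled out as a back-of-the-envelope check: the $0.03 to $0.10 per-star band below is inferred from the article's $85 to $285 figures, not quoted from any vendor.

```python
SEED_MEDIAN = 2850  # Redpoint's median star count at seed financing

def campaign_cost(stars, price_per_star):
    """Cost of buying one's way to a target star count."""
    return stars * price_per_star

# Assumed budget-vendor price band, roughly matching the cited $85 to $285.
low = campaign_cost(SEED_MEDIAN, 0.03)
high = campaign_cost(SEED_MEDIAN, 0.10)
print(f"Seed-median campaign: ${low:,.0f} to ${high:,.0f}")
print(f"ROI on a $1M to $10M round: "
      f"{1_000_000 / high:,.0f}x to {10_000_000 / low:,.0f}x")
```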
Runa Capital publishes the ROSS (Runa Open Source Startup) Index quarterly, ranking the 20 fastest-growing open-source startups by GitHub star growth rate. Per TechCrunch, 68% of ROSS Index startups that attracted investment did so at seed stage, with $169 million raised across tracked rounds. GitHub itself, through its GitHub Fund partnership with M12 (Microsoft’s VC arm), commits $10 million annually to invest in 8-10 open-source companies at pre-seed/seed stages based partly on platform traction.
* Lovable (formerly GPT Engineer): 50,000+ stars, $7.5M pre-seed, $200M Series A at $1.8 billion valuation with 45 employees
Dagster’s Fraser Marlow, who led the fake star investigation, admitted directly: “In the run-up to the fundraising, I spent a fair amount of time preoccupied with GitHub stars.” An academic paper in Organization Science provided rigorous statistical evidence that GitHub engagement correlates with startup funding outcomes - startups active on GitHub are 15 percentage points more likely to have raised a financing round.
The incentive loop is self-reinforcing: VCs use stars as sourcing signals, so startups manipulate stars, so VCs see inflated traction, so more VCs adopt star-tracking, so more startups manipulate. Redpoint’s own published benchmarks give startups an exact target to buy toward.
Our analysis revealed the fork-to-star ratio as the strongest simple heuristic for identifying potential manipulation. The logic is straightforward: a star costs nothing and conveys no commitment. A fork means someone downloaded the code to use or modify it.
Any repository with a fork-to-star ratio below 0.05 and more than 10,000 stars warrants scrutiny. The watcher-to-star ratio is even more telling: organic projects average 0.005 to 0.030; FreeDomain registers 0.001.
These ratios aren’t perfect - educational repos and curated lists naturally have low fork rates. But as a first-pass filter, they catch the most egregious cases that raw star counts miss entirely.
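The first-pass filter can be sketched as a pair of ratio checks. The thresholds (0.05 fork-to-star, 0.005 watcher-to-star, 10,000 stars) are the article's; the function name is mine, and as noted, educational repos and curated lists are known false positives:

```python
def scrutiny_flags(stars, forks, watchers):
    """Cheap heuristic signals of possible star manipulation, not proof."""
    flags = []
    if stars > 10_000 and forks / stars < 0.05:
        flags.append("low fork-to-star ratio")
    if stars > 10_000 and watchers / stars < 0.005:
        flags.append("low watcher-to-star ratio")
    return flags
```

Applied to the article's FreeDomain numbers (157,000 stars, 2,676 forks, 168 watchers), both flags trip; a healthy Flask-like profile trips neither.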
The problem extends to every platform where popularity metrics influence trust.
npm downloads are trivially inflatable. Developer Andy Richardson demonstrated this by using a single AWS Lambda function (free tier) to push his package is-introspection-query to nearly 1 million downloads per week - surpassing legitimate packages like urql and mobx. Zero actual users. The CMU study found that of repos with fake star campaigns, only 1.23% appeared in package registries, but of those 738 packages, 70.46% had zero dependent projects.
VS Code Marketplace extensions are similarly vulnerable. Researchers demonstrated 1,000+ installs of a fake extension in 48 hours. AquaSec found 1,283 extensions with known malicious dependencies totaling 229 million installs.
X/Twitter promotion amplifies artificial GitHub virality through engagement pods - private groups where members agree to like, repost, and comment on each other’s content. Growth Terminal sells this as a product feature. NBC News and Clemson University researchers identified a network of 686 X accounts that posted more than 130,000 times using LLM-generated content, some containing telltale artifacts like “Dolphin here!” from the uncensored Dolphin model they employed.
The Higgsfield AI case documents cross-platform astroturfing at industrial scale: over 100 confirmed spam posts across 60+ subreddits, combined with mass template DMs to content creators offering payment for promotion.
The FTC Consumer Review Rule, effective October 21, 2024, explicitly prohibits selling or buying “fake indicators of social media influence” generated by bots or fake accounts for commercial purposes. Penalties: up to $53,088 per violation. The FTC issued its first warning letters to 10 companies in December 2025. A GitHub star purchased to promote a commercial product fits this framework.
The SEC precedent is more direct. HeadSpin’s CEO was charged with wire fraud (maximum 20 years) and securities fraud for inflating metrics to deceive investors out of $80 million. ComplYant’s founder faced charges for claiming $250,000 monthly revenue when actual revenue was $250.
The SEC’s message: “Startup fundraisers cannot use the ‘fake it until you make it’ ethos to whitewash lying to investors.”
If a startup buys fake GitHub stars to inflate perceived traction during a fundraising round, and investors rely on those metrics to deploy capital, the wire fraud framework applies: using electronic communications to misrepresent material facts for financial gain. No one has been charged specifically for fake GitHub stars yet. Given the CMU research documenting the practice at scale and the FTC rule explicitly covering fake social influence metrics, it may only be a matter of time.
GitHub’s Acceptable Use Policies explicitly prohibit “inauthentic interactions, such as fake accounts and automated inauthentic activity,” “rank abuse, such as automated starring or following,” and “creation of or participation in secondary markets for the purpose of the proliferation of inauthentic activity.” The policies even specifically prohibit starring incentivized by “cryptocurrency airdrops, tokens, credits, gifts or other give-aways.”
Enforcement is reactive and asymmetric. GitHub removed 90.42% of repositories flagged by StarScout, but only 57.07% of the accounts that delivered those stars. The infrastructure for future campaigns largely remains intact. When Dagster published its investigation, fake star profiles were deleted within 48 hours - but only after public embarrassment, not proactive detection.
GitHub has never published an engineering blog post about its detection methods or enforcement statistics. No transparency report exists for star manipulation. The company’s VP of Security Operations told Wired only that they “disabled user accounts in accordance with GitHub’s Acceptable Use Policies,” declining to elaborate - though that comment was specifically about the Stargazers Ghost Network malware operation, not vanity metric manipulation.
The CMU researchers recommended GitHub adopt a weighted popularity metric based on network centrality rather than raw star counts, a change that would structurally undermine the fake star economy. GitHub has not implemented it.
Bessemer Venture Partners calls stars “vanity metrics” and instead tracks unique monthly contributor activity - anyone who created an issue, comment, PR, or commit. Fewer than 5% of top 10,000 projects ever exceeded 250 monthly contributors; only 2% sustained it across six months.
Jono Bacon at StateShift recommends five metrics that correlate with real adoption: package downloads, issue quality (production edge cases from real users), contributor retention (time to second PR), community discussion depth, and usage telemetry.
The fork-to-star ratio our analysis surfaced is the simplest first-pass filter. A healthy project has roughly 100-200 forks per 1,000 stars. Projects below 50 forks per 1,000 stars with high absolute counts deserve a closer look.
As one commenter put it: “You can fake a star count, but you can’t fake a bug fix that saves someone’s weekend.”
First, the incentive loop. VCs use stars as sourcing signals. Startups manipulate stars. VCs see inflated traction. More VCs adopt star-tracking. More startups manipulate. Redpoint’s published benchmarks - 2,850 at seed, 4,980 at Series A - effectively give startups a price list for how many stars to buy.
Second, the AI sector’s specific vulnerability. The combination of extreme hype, crypto-adjacent funding models that reward token price over product quality, and a reviewer ecosystem on X/Twitter populated partly by fabricated personas creates a perfect environment for manufactured credibility. Our analysis confirmed this: the repos with the worst manipulation signals were overwhelmingly blockchain and crypto-adjacent AI projects.
Third, GitHub’s enforcement asymmetry. Removing repos but leaving 57% of fake accounts intact preserves the labor force of the fake star economy while doing little to deter repeat offenses. Until GitHub implements structural changes - weighted popularity metrics, account-level reputation scoring, or transparent enforcement reporting - the gap between star counts and genuine developer adoption will continue to widen.
The star economy is a $50 problem with a $50 million consequence. Until the platforms, investors, and regulators catch up, the market will keep paying the $50.
...
Read the original on awesomeagents.ai »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN please leave feedback and share
Visit pancik.com for more.