10 interesting stories served every morning and every evening.
Tim Cook to become Apple Executive Chairman
John Ternus to become Apple CEO
CUPERTINO, CALIFORNIA Apple announced that Tim Cook will become executive chairman of Apple’s board of directors and John Ternus, senior vice president of Hardware Engineering, will become Apple’s next chief executive officer effective on September 1, 2026. The transition, which was approved unanimously by the Board of Directors, follows a thoughtful, long-term succession planning process.
Cook will continue in his role as CEO through the summer as he works closely with Ternus on a smooth transition. As executive chairman, Cook will assist with certain aspects of the company, including engaging with policymakers around the world.
“It has been the greatest privilege of my life to be the CEO of Apple and to have been trusted to lead such an extraordinary company. I love Apple with all of my being, and I am so grateful to have had the opportunity to work with a team of such ingenious, innovative, creative, and deeply caring people who have been unwavering in their dedication to enriching the lives of our customers and creating the best products and services in the world,” said Cook. “John Ternus has the mind of an engineer, the soul of an innovator, and the heart to lead with integrity and with honor. He is a visionary whose contributions to Apple over 25 years are already too numerous to count, and he is without question the right person to lead Apple into the future. I could not be more confident in his abilities and his character, and I look forward to working closely with him on this transition and in my new role as executive chairman.”
“I am profoundly grateful for this opportunity to carry Apple’s mission forward,” said Ternus. “Having spent almost my entire career at Apple, I have been lucky to have worked under Steve Jobs and to have had Tim Cook as my mentor. It has been a privilege to help shape the products and experiences that have changed so much of how we interact with the world and with one another. I am filled with optimism about what we can achieve in the years to come, and I am so happy to know that the most talented people on earth are here at Apple, determined to be part of something bigger than any one of us. I am humbled to step into this role, and I promise to lead with the values and vision that have come to define this special place for half a century.”
Arthur Levinson, who has been Apple’s non-executive chairman for the past 15 years, will become its lead independent director on September 1, 2026. Ternus will join the board of directors, also effective September 1, 2026.
“Tim’s unprecedented and outstanding leadership has transformed Apple into the world’s best company. He’s introduced groundbreaking products and services time and again, and his integrity and values are infused into everything Apple does,” said Levinson. “On behalf of the entire board of directors, we are incredibly grateful for his countless contributions to Apple and the world, and we are thrilled he will now be executive chairman. We believe John is the best possible leader to succeed Tim, and as he transitions to CEO, we know his love of Apple, his leadership, deep technical knowledge, and relentless focus on creating great products will help lead Apple to an extraordinary future.”
“I want to thank Art for the incredible work he has done leading the board of directors for the past 15 years,” said Cook. “I have always found his advice to be invaluable and I appreciate his thoughtfulness and his unwavering dedication to the company. I am grateful he will serve as our lead independent director, and I look forward to working with him in my new role.”
Tim Cook joined Apple in 1998. He became CEO in 2011 and has overseen the introduction of numerous products and services, including new categories like Apple Watch, AirPods, and Apple Vision Pro, and services ranging from iCloud and Apple Pay to Apple TV and Apple Music. He was also instrumental in expanding existing product lines. Under Cook’s leadership Apple has grown from a market capitalization of approximately $350 billion to $4 trillion, representing a more than 1,000% increase, and yearly revenue has nearly quadrupled, from $108 billion in fiscal year 2011 to more than $416 billion in fiscal year 2025. The company has expanded its global footprint substantially, particularly in emerging markets; it is now in more than 200 countries and territories. Apple operates over 500 retail stores and has more than doubled the number of countries in which its customers can visit an Apple Store. During his tenure, Apple has grown by more than 100,000 team members and increased its active installed base to more than 2.5 billion devices.
Apple Services has been a major focus area of Cook’s, and during his tenure the category has grown to become a more than $100 billion business, the equivalent of a Fortune 40 company. Cook was also instrumental in creating the wearables category at Apple, which now includes the world’s most popular watch and headphones, and which has served as the foundation for Apple’s remarkable impact on the health and safety of its users. Under Cook’s leadership, Apple also transitioned to Apple-designed silicon, enabling the company to own more of its primary technology and deliver industry-leading gains in power efficiency and performance that directly benefit users across its products.
Cook has made Apple’s core values even more central to the company’s decision making and product development. Under his leadership, the company reduced its carbon footprint by more than 60 percent below 2015 levels during a period in which revenue nearly doubled. Cook, who has long advocated for privacy as a fundamental human right, has made privacy and security imperative at Apple, setting a standard for user protection that continues to set the company apart from the rest of the technology industry. He has also pushed for continued innovation in the accessibility space, believing that Apple products should be made for everyone. And he has made central to his leadership the notion that Apple should be a place where everyone can feel they belong and where everyone is treated with dignity and respect.
Ternus joined Apple’s product design team in 2001 and became a vice president of Hardware Engineering in 2013. He joined the executive team in 2021 as senior vice president of Hardware Engineering. Throughout his tenure at Apple, Ternus has overseen hardware engineering work on a variety of groundbreaking products across every category. He was instrumental in the introduction of multiple new product lines, including iPad and AirPods, as well as many generations of products across iPhone, Mac, and Apple Watch.
Ternus’s work on Mac has helped the category become more powerful and more popular globally than at any time in its 40-year history. That includes the recent introduction of MacBook Neo, an all-new laptop that makes the Mac experience even more accessible to more people around the world. This past fall, his team’s efforts were on full display with the introduction of a redefined iPhone lineup, including the incredibly powerful iPhone 17 Pro and Pro Max, the radically thin and durable iPhone Air, and the iPhone 17, which has been an incredible upgrade for users. Under his leadership, his team also drove advancements in AirPods to make them the world’s best in-ear headphones, with unprecedented active noise cancellation, as well as the capability to become an all-in-one hearing health system that can serve as over-the-counter hearing aids.
Ternus led much of the company’s focus in areas like reliability and durability, introducing new techniques that have made Apple products remarkably resilient. He has also driven much of Apple’s innovation in materials and hardware design that has reduced the carbon footprint of its products, including the creation of a new, recycled aluminum compound that has been introduced across multiple product lines, the use of 3D-printed titanium in Apple Watch Ultra 3, and innovations in repairability that have increased the lifespans of several Apple products.
Prior to Apple, Ternus worked as a mechanical engineer at Virtual Research Systems. He holds a bachelor’s degree in Mechanical Engineering from the University of Pennsylvania.
This press release contains forward-looking statements, within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements include without limitation those about Apple’s executive succession plans. These statements involve risks and uncertainties, and actual results may differ materially from any future results expressed or implied by the forward-looking statements. More information regarding potential risks and other factors that could affect the company are included in Apple’s filings with the SEC, including in the “Risk Factors” and “Management’s Discussion and Analysis of Financial Condition and Results of Operations” sections of Apple’s most recently filed periodic reports on Form 10-K and Form 10-Q and subsequent filings. Apple assumes no obligation to update any forward-looking statements or information, which speak only as of the date they are made.
About Apple
Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV+. Apple’s more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.
© 2026 Apple Inc. All rights reserved. Apple, the Apple logo, Apple Watch, AirPods, Apple Vision Pro, iCloud, Apple Pay, Apple TV, Apple Music, Apple Store, iPad, iPhone, Mac, MacBook Neo, and iPhone Air are trademarks of Apple. Other company and product names may be trademarks of their respective owners.
...
Read the original on www.apple.com »
We are open sourcing our latest model, Kimi K2.6, featuring state-of-the-art coding, long-horizon execution, and agent swarm capabilities. Kimi K2.6 is now available via Kimi.com, the Kimi App, the API, and Kimi Code.
Kimi K2.6 shows strong improvements in long-horizon coding tasks, with reliable generalization across programming languages (e.g., Rust, Go, and Python) and tasks (e.g., front-end, devops, and performance optimization). On Kimi Code Bench, our internal coding benchmark covering diverse complicated end-to-end tasks, Kimi K2.6 demonstrates significant improvements over Kimi K2.5.
Kimi K2.6 successfully downloaded and deployed the Qwen3.5-0.8B model locally on a Mac. By implementing and optimizing model inference in Zig—a highly niche programming language—it demonstrated exceptional out-of-distribution generalization. Across 4,000+ tool calls, over 12 hours of continuous execution, and 14 iterations, Kimi K2.6 dramatically improved throughput from ~15 to ~193 tokens/sec, ultimately achieving speeds ~20% faster than LM Studio.
Kimi K2.6 autonomously overhauled exchange-core, an 8-year-old open-source financial matching engine. Over a 13-hour execution, the model iterated through 12 optimization strategies, initiating over 1,000 tool calls to precisely modify more than 4,000 lines of code. Acting as an expert systems architect, Kimi K2.6 analyzed CPU and allocation flame graphs to pinpoint hidden bottlenecks and boldly reconfigured the core thread topology (from 4ME+2RE to 2ME+1RE). Despite the engine already operating near its performance limits, Kimi K2.6 extracted a 185% median throughput leap (from 0.43 to 1.24 MT/s) and a 133% peak throughput gain (from 1.23 to 2.86 MT/s).
In beta tests, K2.6 performs well on long-horizon coding tasks in enterprise evaluations (randomly ordered):
Building on these strong coding capabilities, Kimi K2.6 can turn simple prompts into complete front-end interfaces, generating structured layouts with deliberate design choices such as aesthetic hero sections, as well as interactive elements and rich animations, including scroll-triggered effects. With strong proficiency in leveraging image and video generation tools, Kimi K2.6 supports the generation of visually coherent assets and contributes to higher-quality, more salient hero sections.
Moreover, Kimi K2.6 expands beyond static front-end development to simple full-stack workflows—spanning authentication, user interaction, and database operations for lightweight use cases like transaction logging or session management.
We established an internal Kimi Design Bench, organized into four categories: Visual Input Tasks, Landing Page Construction, Full-Stack Application Development, and General Creative Programming. In comparison with Google AI Studio, Kimi K2.6 shows promising results and performs well across these categories.
Below are examples generated by K2.6 Agent from a single prompt, with preconfigured harnesses and tools:
Scaling out, not just up. An Agent Swarm dynamically decomposes tasks into heterogeneous subtasks executed concurrently by self-created domain-specialized agents.
Building on the K2.5 Agent Swarm research preview, Kimi K2.6 Agent Swarm demonstrates a qualitative leap in the agent swarm experience. It seamlessly coordinates heterogeneous agents to combine complementary skills: broad search layered with deep research, large-scale document analysis fused with long-form writing, and multi-format content generation executed in parallel. This compositional intelligence enables the swarm to deliver end-to-end outputs—spanning documents, websites, slides, and spreadsheets—within a single autonomous run.
The architecture scales horizontally to 300 sub-agents executing across 4,000 coordinated steps simultaneously, a substantial expansion from K2.5’s 100 sub-agents and 1,500 steps. This massive parallelization fundamentally reduces end-to-end latency while significantly enhancing output quality and expanding the operational boundaries of agent swarms.
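In miniature, the fan-out/fan-in pattern behind such a swarm looks something like the sketch below; the decomposition and agent behavior are toy stand-ins, not Kimi’s actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(task: str) -> list[str]:
    # Toy decomposition into heterogeneous subtasks; a real swarm
    # would generate these dynamically per task.
    return [f"{task}:{phase}" for phase in ("search", "analysis", "writing")]

def run_agent(subtask: str) -> str:
    # Stand-in for a domain-specialized sub-agent; a real agent
    # would call tools and a model here.
    return f"result({subtask})"

def swarm(task: str, max_agents: int = 300) -> list[str]:
    subtasks = decompose(task)
    with ThreadPoolExecutor(max_workers=min(max_agents, len(subtasks))) as pool:
        # map runs subtasks concurrently but returns results in order (fan-in).
        return list(pool.map(run_agent, subtasks))

print(swarm("report"))
```

The interesting property is that adding sub-agents widens the fan-out without changing the fan-in logic, which is what lets this style of system scale out rather than up.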
It can also turn high-quality files such as PDFs, spreadsheets, slides, and Word documents into Skills. Kimi K2.6 captures and maintains the documents’ structural and stylistic DNA, enabling you to reproduce the same quality and format in future tasks.
Here are some examples:
K2.6 demonstrates strong performance in autonomous, proactive agents such as OpenClaw and Hermes, which operate across multiple applications with continuous, 24/7 execution.
Unlike simple chat-based interactions, these workflows require AI to proactively manage schedules, execute code, and orchestrate cross-platform operations as a persistent background agent.
Our RL infra team used a K2.6-backed agent that operated autonomously for 5 days, managing monitoring, incident response, and system operations, demonstrating persistent context, multi-threaded task handling, and full-cycle execution from alert to resolution. Here is K2.6’s worklog (anonymized to remove sensitive information):
Kimi K2.6 delivers measurable improvements in real-world reliability: more precise API interpretation, more stable long-running performance, and enhanced safety awareness during extended research tasks.
Performance gains are quantified by our internal Claw Bench, an evaluation suite spanning five domains: Coding Tasks, IM Ecosystem Integration, Information Research & Analysis, Scheduled Task Management, and Memory Utilization. Across all metrics, Kimi K2.6 significantly outperforms Kimi K2.5 in task completion rates and tool invocation accuracy—particularly in workflows requiring sustained autonomous operation without human oversight.
Building on its robust orchestration capabilities, Kimi K2.6 extends proactive agents to Claw Groups as a research preview—a new instantiation of the Agent Swarm architecture.
Claw Groups embrace an open, heterogeneous ecosystem: Multiple agents and humans operate as true collaborators. Users can onboard agents from any device, running any model, each carrying their own specialized toolkits, skills and persistent memory contexts. Whether deployed on local laptops, mobile devices, or cloud instances, these diverse agents integrate seamlessly into a shared operational space.
At the center of this swarm, Kimi K2.6 serves as an adaptive coordinator. It dynamically matches tasks to agents based on their specific skill profiles and available tools, optimizing for capability fit. When an agent encounters failure or stalls, the coordinator detects the interruption, automatically reassigns the task or regenerates subtasks, and actively manages the full lifecycle of deliverables—from initiation through validation to completion.
We also want to thank the K2.6-powered agents in Claw Groups—we’ve been dogfooding our own agent-powered marketing team, refining human–agent workflows in practice. Using Claw Groups, we run end-to-end content production and launch campaigns, with specialized agents like Demo Makers, Benchmark Makers, Social Media Agents, and Video Makers working together. K2.6 coordinates the process, enabling agents to share intermediate results and turn ideas into consistent, fully packaged deliverables.
We are moving beyond simply asking AI a question or assigning AI a task, and entering a phase where human and AI collaborate as genuine partners—combining strengths to solve problems collectively. Claw Groups marks our latest effort toward a future where the boundaries between “my agent,” “your agent,” and “our team” dissolve seamlessly into a collaborative system.
To reproduce official Kimi-K2.6 benchmark results, we recommend using the official API. For third-party providers, refer to Kimi Vendor Verifier (KVV) to choose high-accuracy services. Details: https://kimi.com/blog/kimi-vendor-verifier
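The exact API shape isn’t specified here; assuming an OpenAI-compatible chat-completions payload (the model identifier and field names below are assumptions, not an official spec), a request using the sampling settings quoted in the notes below might be built like this:

```python
import json

def build_request(prompt: str) -> dict:
    # Hypothetical payload; "kimi-k2.6" and the field names are assumptions.
    return {
        "model": "kimi-k2.6",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 1.0,  # sampling settings quoted in the benchmark notes
        "top_p": 1.0,
    }

payload = build_request("Profile this service and suggest optimizations.")
print(json.dumps(payload, indent=2))
```

Consult the official API documentation for the real endpoint, authentication, and model names before relying on any of these details.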
* We report results for Kimi K2.6 and Kimi K2.5 with thinking mode enabled, Claude Opus 4.6 with max effort, GPT-5.4 with xhigh reasoning effort, and Gemini 3.1 Pro with a high thinking level.
* Unless otherwise specified, all Kimi K2.6 experiments were conducted with temperature = 1.0, top-p = 1.0, and a context length of 262,144 tokens.
* Benchmarks without publicly available scores were re-evaluated under the same conditions used for Kimi K2.6 and are marked with an asterisk (*). Except where noted with an asterisk, all other results are cited from official reports.
* IMO-AnswerBench scores for GPT-5.4 and Claude 4.6 were obtained from https://z.ai/blog/glm-5.1.
* Humanity’s Last Exam (HLE) and other reasoning tasks were evaluated with a maximum generation length of 98,304 tokens. By default, we report results on the HLE full set. For the text-only subset, Kimi K2.6 achieves 36.4% accuracy without tools and 55.5% with tools.
* Kimi K2.6 was equipped with search, code-interpreter, and web-browsing tools for HLE with tools, BrowseComp, DeepSearchQA, and WideSearch.
* For HLE-Full with tools, the maximum generation length is 262,144 tokens with a per-step limit of 49,152 tokens. We employ a simple context management strategy: once the context window exceeds the threshold, only the most recent round of tool-related messages is retained.
* For BrowseComp, we report scores obtained with context management using the same discard-all strategy as Kimi K2.5 and DeepSeek-V3.2.
* For DeepSearchQA, no context management was applied to Kimi K2.6 tests, and tasks exceeding the supported context length were directly counted as failed. Scores for Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro on DeepSearchQA are cited from the Claude Opus 4.7 System Card.
* For WideSearch, we report results under the “hide tool result” context management setting. Once the context window exceeds the threshold, only the most recent round of tool-related messages is retained.
* The test system prompts are identical to those used in the Kimi K2.5 technical report.
* Claw Eval was conducted using version 1.1 with max-tokens-per-step = 16,384.
* For APEX-Agents, we evaluate 452 tasks from the public 480-task release, as done by Artificial Analysis (excluding Investment Banking Worlds 244 and 246, which have external runtime dependencies).
* Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser, operating in preserve thinking mode.
* For the SWE-Bench series of evaluations (including Verified, Multilingual, and Pro), we used an in-house evaluation framework adapted from SWE-agent. This framework includes a minimal set of tools—bash tool, createfile tool, insert tool, view tool, strreplace tool, and submit tool.
* All reported scores for coding tasks are averaged over 10 independent runs.
* For settings with Python tool use, max-tokens-per-step = 65,536 and max-steps = 50 for multi-step reasoning.
* MMMU-Pro follows the official protocol, preserving input order and prepending images.
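The context-management strategy mentioned in the notes above (once the context exceeds a threshold, retain only the most recent round of tool-related messages) can be sketched as follows; treating each tool message as one round is a simplification of whatever the real harness does:

```python
def prune_context(messages, token_count, threshold):
    # Below the threshold, keep everything unchanged.
    if sum(token_count(m) for m in messages) <= threshold:
        return messages
    # Above it, keep non-tool messages plus only the latest tool message.
    tool_indices = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    if not tool_indices:
        return messages
    keep_tool = tool_indices[-1]
    return [m for i, m in enumerate(messages)
            if m["role"] != "tool" or i == keep_tool]

msgs = [
    {"role": "user", "content": "question"},
    {"role": "tool", "content": "old tool output"},
    {"role": "assistant", "content": "partial answer"},
    {"role": "tool", "content": "latest tool output"},
]
print(prune_context(msgs, lambda m: len(m["content"]), threshold=10))
```

Discarding stale tool output first is a cheap policy because tool results are usually the bulkiest and most re-derivable part of an agent transcript.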
...
Read the original on www.kimi.com »
Let me tell you a story. When I was a child, I suffered from night terrors. It was always the same dream: I could hear my family and neighbors wailing in the street outside as they were pursued and then destroyed by a nameless malevolent force, something neither I nor anyone else could control, a great darkness that was, somehow, all my fault.
Today, that childhood dream is finally coming true. Today I can finally say the sweetest nine or 10 words in the English language: Global Tetrahedron has completed its plan to control InfoWars.com.
I’ve had a lot of time to think about InfoWars in the last year and a half. As the seasons have changed, my ambitions for the project have grown grander, crueler, better aligned with market data. Come, friends, and imagine with me…
Imagine a roaring arena packed to the rafters with pathological liars. High above you in the nosebleeds are podcasters, screaming that you’ll die if you don’t buy their skincare products. Below, on the floor, imagine demonic battalions of super-influencers physically forcing people into home fitness devices designed to dismantle their bodies bone by bone and reassemble them into a grotesque statue of yourself. Out of the throngs, an extremely sick-looking man approaches you. He puts his hands on your shoulders. He explains that he is your life coach and that you owe him $800.
Such is the InfoWars I envision: An infinite virtual surface teeming with ads. Not just ads, but scams! Not just scams, but lies with no object, free radical misinformation, sentences and images so poorly thought out that they are unhealthy even to view for just a few seconds. The InfoWars of old was only the prototype for the hell I know we can build together: A digital platform where, every day, visitors sacrifice themselves at altars of delusion and misery, their minds fully disintegrating on contact.
With this new InfoWars, we will democratize psychological torture, welcoming brutal and sadistic ideas from everyone, even the very stupidest among us. It will be like the Manhattan Project, only instead of a bomb, we will be building a website.
The InfoWars of tomorrow will converge into a swirling vortex of content about content, talent acquiring talent, rings of concentric media mergers processing all human artistry into one endlessly digestible slurry. This will be a dank, sunless place, one where panic and capital feed on each other like twins in the womb of a hulking, unknowable monster—a monster known by many names, but which I like to call modern-day America.
All of this is to say that I believe in us. I believe that with the new InfoWars, we can alchemize the pioneering spirit of amateur inquiry, the profit-maximizing drive of corporations, and the cold mental clarity that comes only with disciplined daily ingestion of mind- and body-altering chemicals. If we can do that, what other great things can we do together?
I don’t yet know, but I’m excited to find out. Welcome home, warriors. The future belongs to us. We’re writing the story now. It’s going to be a long one, and it’s going to be a bad one.
So settle in. Make yourself comfortable. Buy a tote bag.
Nothing can stop us now that we’re in charge of a website.
...
Read the original on theonion.com »
As the internet chokes on ever more slop, the one thing that gives me hope is this: people seem to loathe AI, and are actively resisting it. This won’t be a long post, as I’m personally so tired of writing and thinking about AI at this point in time, but I do want to draw your attention here to some recent anti-AI stuff that’s worth discussing.
r/PoisonFountain, created by individuals who claim to be concerned AI industry insiders, is a community with one goal: encourage as many people as possible to feed huge quantities of trash data (poison) to all of the web crawlers out there that are scraping our work for AI training sets. They aim to serve one terabyte of poison per day to these crawlers by the end of 2026.
The poison fountain itself is hosted on rnsaffn.com, sandwiched between several garbage links that look irresistible to AI crawlers; it produces a page of code that seems correct at first glance, but is actually riddled with subtle errors that render the code unusable. Filtering out these errors is possible, but expensive at scale. Since these companies can’t improve their AI models without fresh data created by human beings, the idea here is to waste their time and make it expensive for them to steal our data.
Miasma is one example of a tool that uses the fountain to serve massive amounts of garbage to malicious bots. The developer describes it as “an endless buffet of slop for the slop machines,” which is delightful. I can’t use Miasma with my site’s setup, but it may be of interest to those of you who could. I deliver my trash to crawlers using other means … some visible, some invisible. While I can’t serve it up to anywhere near the same extent as Miasma can, I do catch sneaky bots with my junk links every day.
If you’re pro-AI and feel outraged on behalf of these companies that anyone would dare try to make life difficult for them, please know that this is simply a case of tit for tat. The teams that send AI crawlers out into the world wide web are DDoSing small websites on the regular and raising hosting fees for everyone with their voracious desire to devour the entire internet. They do not obey robots.txt, and often hide their crawlers behind residential proxies. If they can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.
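To make the “subtly broken code” idea concrete, here is a toy sketch of the concept (this is not how rnsaffn.com or Miasma actually work): take valid code and flip small details that survive a quick skim.

```python
def poison(code: str) -> str:
    # Flip one instance of each pattern: comparisons loosen, constants
    # drift by one. The output still parses and looks plausible.
    swaps = [("<=", "<"), (">=", ">"), ("+ 1", "+ 2")]
    for old, new in swaps:
        if old in code:
            code = code.replace(old, new, 1)
    return code

clean = "for i in range(0, n + 1):\n    if a[i] <= b[i]:\n        count = count + 1"
print(poison(clean))
```

The point is asymmetry: generating errors like these is nearly free, while detecting them reliably at crawler scale is not.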
Caution: I’m messing with automated visitors in plain sight as an experiment. 🤭 To avoid false positives, human visitors are encouraged to ignore the link in this box.
Someone Figured Out How To Poison AI Video Summarizers
Thanks to r/PoisonFountain, I learned that YouTube has no .ass. I could try to explain what that means, but the video is hilarious and well worth a watch, so I’ll leave it up to @f4mi.
Sadly, it looks like the poisoning technique used by the creator in this video no longer works; YouTube presumably fixed the transcript loophole she was exploiting here. I plugged a few of her video URLs into a few different video summarizers, and they all failed to tell me anything that wasn’t actually in the videos.
Still, it’s great to see people trying and succeeding at fucking with the slop machines — even if that success is only temporary.
All over Reddit and other social media platforms, I’m increasingly seeing stuff like this:
I mean, sure, it’s literally misinformation and you could indeed argue that there’s already enough misinformation on the internet as it is … but it’s important to note here that bots, not people, are the target audience of this misinformation.
I think most of us can understand from the context that Idris Elba did not ever play Raymond’s mother in an episode of Everybody Loves Raymond. Automated web scrapers, however, will just see good human-generated data, which is what they want. They’re going to merrily scrape that garbage from Reddit and send it back to OpenAI or whomever, who will then have to waste resources removing it from their training data sets.
This isn’t exactly the modern equivalent of angry textile workers destroying power looms, but (if you’ll forgive the pun) it’s cut from the same cloth. The difference here (I hope) is that if enough of us pollute public spaces with misinformation intended for bots, it might be enough to compel AI companies to rethink the way they source training data.
People hate what AI is doing to our world. They hate what it’s doing to our online communities, what it’s doing to our environment, what it’s doing to our elementary schools and universities, what it’s doing to at-risk individuals with mental health issues, what it’s doing (and may yet still do) to our livelihoods. While there are certainly plenty of people out there who happily consume and generate massive amounts of AI slop, they are — at least in my anecdotal experience within my own social circles, both offline and online — dwarfed by people who detest and want nothing to do with this technology.
Hatred of a thing seldom leads anywhere good, as recent events demonstrate, but I do think that if people are able to translate what they’re feeling about AI into peaceful, legal acts of resistance, then we might actually stand to change the way Silicon Valley does things.
To see what people are saying about this post, check it out on Mastodon. Want to know why this blog doesn’t have a comments section? I wrote about that here.
If you enjoy my writing and want to read more of it, check out my last post or browse through my blog archive.
...
Read the original on stephvee.ca »
Conway’s Law
Organizations design systems that mirror their own communication structure.
Knuth’s Optimization Principle
Premature optimization is the root of all evil.
Hyrum’s Law
With a sufficient number of API users, all observable behaviors of your system will be depended on by somebody.
The Boy Scout Rule
Leave the code better than you found it.
YAGNI (You Aren’t Gonna Need It)
Don’t add functionality until it is necessary.
Brooks’s Law
Adding manpower to a late software project makes it later.
Gall’s Law
A complex system that works is invariably found to have evolved from a simple system that worked.
The Law of Leaky Abstractions
All non-trivial abstractions, to some degree, are leaky.
Tesler’s Law (Conservation of Complexity)
Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated.
The CAP Theorem
A distributed system can guarantee only two of: consistency, availability, and partition tolerance.
The Second-System Effect
Small, successful systems tend to be followed by overengineered, bloated replacements.
The Fallacies of Distributed Computing
A set of eight false assumptions that new distributed system designers often make.
Zawinski’s Law
Every program attempts to expand until it can read mail.
Dunbar’s Number
There is a cognitive limit of about 150 stable relationships one person can maintain.
Price’s Law
The square root of the total number of participants does 50% of the work.
Putt’s Law
Those who understand technology don’t manage it, and those who manage it don’t understand it.
The Peter Principle
In a hierarchy, every employee tends to rise to their level of incompetence.
The Bus Factor
The minimum number of team members whose loss would put the project in serious trouble.
The Dilbert Principle
Companies tend to promote incompetent employees to management to limit the damage they can do.
Parkinson’s Law
Work expands to fill the time available for its completion.
The Ninety-Ninety Rule
The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.
Hofstadter’s Law
It always takes longer than you expect, even when you take into account Hofstadter’s Law.
Goodhart’s Law
When a measure becomes a target, it ceases to be a good measure.
Gilb’s Law
Anything you need to quantify can be measured in some way better than not measuring it.
Murphy’s Law
Anything that can go wrong will go wrong.
Postel’s Law (The Robustness Principle)
Be conservative in what you do, be liberal in what you accept from others.
Technical Debt
Everything that slows us down when developing software.
Linus’s Law
Given enough eyeballs, all bugs are shallow.
Kernighan’s Law
Debugging is twice as hard as writing the code in the first place.
The Test Pyramid
A project should have many fast unit tests, fewer integration tests, and only a small number of UI tests.
The Pesticide Paradox
Repeatedly running the same tests becomes less effective over time.
Lehman’s Laws of Software Evolution
Software that reflects the real world must evolve, and that evolution has predictable limits.
Sturgeon’s Law
90% of everything is crap.
Amdahl’s Law
The speedup from parallelization is limited by the fraction of work that cannot be parallelized.
Gustafson’s Law
It is possible to achieve significant speedup in parallel processing by increasing the problem size.
Metcalfe’s Law
The value of a network is proportional to the square of the number of users.
DRY (Don’t Repeat Yourself)
Every piece of knowledge must have a single, unambiguous, authoritative representation.
KISS (Keep It Simple)
Designs and systems should be as simple as possible.
SOLID
Five main guidelines that enhance software design, making code more maintainable and scalable.
The Law of Demeter
An object should only interact with its immediate friends, not strangers.
The Principle of Least Astonishment
Software and interfaces should behave in a way that least surprises users and other developers.
The Dunning-Kruger Effect
The less you know about something, the more confident you tend to be.
Hanlon’s Razor
Never attribute to malice that which is adequately explained by stupidity or carelessness.
Occam’s Razor
The simplest explanation is often the most accurate one.
The Sunk Cost Fallacy
Sticking with a choice because you’ve invested time or energy in it, even when walking away would serve you better.
The Map Is Not the Territory
Our representations of reality are not the same as reality itself.
A tendency to favor information that supports our existing beliefs or ideas.
We tend to overestimate the effect of a technology in the short run and underestimate the impact in the long run.
The longer something has been in use, the more likely it is to continue being used.
Breaking a complex problem into its most basic blocks and then building up from there.
Solving a problem by considering the opposite outcome and working backward from it.
80% of the problems result from 20% of the causes.
The best way to get the correct answer on the Internet is not to ask a question, it’s to post the wrong answer.
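One of the laws above, that the speedup from parallelization is limited by the fraction of work that cannot be parallelized (Amdahl's law), is easy to check numerically. Here is a minimal sketch; the function name is mine, not from the original list:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's law: overall speedup when `parallel_fraction` of the
    work can be spread across `workers` processors and the rest is serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# A 95%-parallel workload, even on a million workers, caps out just under 20x,
# because the 5% serial portion dominates: 1 / 0.05 = 20.
speedup = amdahl_speedup(0.95, 1_000_000)
```

The serial fraction sets a hard ceiling of 1 / (1 - p) no matter how many workers you add.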
...
Read the original on lawsofsoftwareengineering.com »
Deezer announced on Monday that AI-generated tracks now represent 44% of all new music uploaded to its platform. The company said it’s receiving almost 75,000 AI-generated tracks per day and more than two million per month.
The consumption of AI-generated music on the platform is still very low, at 1-3% of total streams, and 85% of these streams are detected as fraudulent and demonetized by the company.
The latest figure from Deezer highlights a continuous surge in AI-generated music uploads to the platform. Deezer reported receiving around 60,000 AI tracks per day in January, up from 50,000 in November, 30,000 in September, and just 10,000 in January 2025, when it first launched its AI-music detection tool.
Songs tagged as AI-generated on Deezer are automatically removed from algorithmic recommendations and not included in editorial playlists. The company announced today that it will no longer store hi-res versions of AI tracks.
The updated figure comes as an AI-generated track topped the iTunes charts last week in the United States, United Kingdom, France, Canada, and New Zealand.
“AI-generated music is now far from a marginal phenomenon and as daily deliveries keep increasing, we hope the whole music ecosystem will join us in taking action to help safeguard artists’ rights and promote transparency for fans,” said Deezer CEO Alexis Lanternier in a press release. “Thanks to our technology and the proactive measures we put in place more than a year ago, we have shown that it’s possible to reduce AI-related fraud and payment dilution in streaming to a minimum.”
Today’s announcement follows a survey Deezer conducted last November, which found that 97% of participants couldn’t tell the difference between fully AI-generated music and human-made music.
The survey also found that 52% of respondents said 100% AI-generated songs shouldn’t be included in the main charts alongside human-made songs. Meanwhile, 80% said 100% AI-generated music should be clearly labeled for listeners.
Deezer started tagging AI tracks at the platform level in June 2025, becoming the first streaming platform to do so. Over the course of 2025, Deezer tagged more than 13.4 million AI tracks on its platform.
In February, French streaming service Qobuz announced plans to tag AI-generated content on its platform. Other major streaming services, such as Spotify and Apple Music, take different approaches to AI-generated music, often combining the use of filters to identify low-quality AI music with other transparency efforts left up to the distributors.
...
Read the original on techcrunch.com »
I’m never buying another Kindle, and neither should you

After a decade with Kindle, Amazon’s latest changes made it clear that ownership comes second to control.

I’ve carried a Kindle in my bag for over a decade. Through every hardware iteration, from the physical keyboard right up to the latest Paperwhite, a Kindle has been with me everywhere — be it on an airplane, a train ride, the doctor’s office, or my bedside. My all-time favorite ebook reader is, hands down, the Kindle Oasis. For years, I’ve defended the ecosystem because it was convenient and the screens were the gold standard for e-ink readers. But things have changed.
In 2026, the Kindle isn’t really about books for Amazon. It’s about the ecosystem around them.
Looking at the current state of my digital library in 2026, that long-standing loyalty to Amazon’s readers is no longer a thing. The recent announcement that Amazon is sunsetting older hardware was the final straw, and it’s changed the way I look at Kindles. In fact, I’d go as far as saying that it’s a wake-up call for anyone who values digital ownership. If the writing wasn’t already on the wall, for Amazon, the e-reader is clearly no longer a tool for readers; it is quite simply a portal for a storefront. In a world where we are increasingly forced to rent our digital lives through subscription services, our books should be the one place where ownership still matters. However, Amazon’s recent moves prove that ownership is no longer a priority for the brand, and that is why I am finally walking away from the Kindle for good. Here is why you should consider doing the same.
The end of the road for legacy hardware

If you’re not caught up on the latest in the Kindle world, here’s what you need to know. If you own a Kindle released before 2013, your device is effectively on death row. Amazon recently confirmed that starting May 20, these older models will lose all access to the Kindle Store. While you can technically keep reading books already on the device, the real kicker is the factory reset limitation built into the software. If you ever need to reset your device or try to register it to a new account after the deadline, it becomes a literal paperweight. As an archivist and fan of older Kindle hardware, I find this move absolutely shocking.

A perfectly functional Kindle can become useless overnight. That should concern everyone.

If anything, the move is a sharp reminder that when you buy into the Kindle ecosystem, you are effectively renting access from Amazon. The company is using security updates as a justification to move users toward newer hardware, but the reality is that many of these devices are still perfectly functional for reading text. By cutting off the ability to re-register them, Amazon is creating a massive wave of e-waste and forcing an upgrade cycle that many users simply do not want or need.

There’s the staggering environmental cost of the move, of course. But what concerns me more is the fact that most of these Kindles have perfectly functional e-ink screens and batteries that could last years of light reading. Instead of providing a path for long-term support or open-sourcing the legacy software, Amazon is choosing the landfill. And I’m not comfortable with that. Not from a company named after a literal rainforest.

Contrast this with the approach taken by Kobo.
Amazon’s biggest rival in the e-reader space has formed an official partnership with iFixit to provide repair kits and guides for its latest models. The Kobo Libra Colour and Clara are designed to be opened and repaired. When you buy a Kindle, you are buying a disposable product with a predetermined shelf life. Meanwhile, when you buy a Kobo, you are buying a tool that can be maintained for a decade or more.
For a company that practically invented the modern e-reader, Amazon has become remarkably lazy with its software. If you look at a Kindle from 2018 and a Kindle from 2026, the user interface is nearly identical. We are still dealing with a home screen that prioritizes advertisements and promoted recommendations over your actual library. Navigating a large collection of books remains a chore, with sluggish animations and a lack of robust folder management that has been a standard feature on rival devices for years.

In 2026, the Kindle UI keeps moving further away from the library and toward the storefront. The latest updates make it harder to find your own sideloaded books while keeping Kindle Unlimited recommendations front and center. Look, I get it; Amazon’s goal was always to subsidize hardware costs by making money on books. But it has reached a point where Amazon has effectively turned your device into a billboard. You are paying for the privilege of being marketed to every time you wake up your device — unless you pay up.

Between forced obsolescence and AI-forward features, this isn’t the reading experience I paid for.

Amazon’s 2026 roadmap is also heavily focused on AI reading assistants and cloud-based summaries. This is essentially a data-mining operation. Amazon is not just tracking what you buy; it is tracking how you read. It knows how fast you turn pages, which sections you skip, and exactly what you highlight to feed its large language models. Yes, you can put your Kindle in airplane mode, but it doesn’t change the facts about the direction the company is taking.

This level of telemetry is invasive for a device that is supposed to be a private reading experience, and I never signed up for it. Competitors like Kobo offer an offline-first experience that does not require a constant heartbeat to a central server to function. Elsewhere, on a Boox device, you have total control over which apps can access the internet.
With Kindle, it increasingly looks like the privacy trade-off is the hidden cost of the hardware, and I’m not comfortable with it.
There are better hardware and more open ecosystems out there

The fact of the matter is that the Kindle is no longer your only, or best, option. There are plenty of alternatives available if you want a dedicated e-reader that respects the idea of ownership. Kobo is the logical next step. Devices like the Kobo Libra Colour offer hardware that is often superior, or at the very least equivalent, to the Kindle Paperwhite at similar price points. The standout feature is native OverDrive and Libby integration. On a Kobo, you can browse, borrow, and return library books directly on the device without ever needing to touch a phone or a computer, provided you are in a supported country.

Kobo also uses the industry-standard ePub format. This means you are not locked into one store. You can buy books from Google Play, Kobo, or various independent bookstores and simply drag and drop them onto the device via USB. Kobo devices also feature much better typography settings. For those who prefer physical buttons, Kobo has kept them as a standard feature on its mid-range devices, something that the Amazon Kindle appears to be allergic to.

One of the biggest reasons to stick with the Kindle was the overall experience it offered. Ironically, that experience advantage is no longer really a thing. If you really want the ultimate no-compromise experience, Onyx Boox has been steadily changing the game. Devices like the Boox Palma 2 or the Go 10.3 are not just e-readers. Instead, these devices are e-ink tablets running a full version of Android that dramatically open up opportunities for customization.

In my opinion, this should be the top option for anyone who wants to leave Kindle hardware but keep their Kindle books. Because these devices have the Google Play Store, you can simply install the Kindle app.
You get the benefits of the Amazon bookstore and your existing library, but you get to use it on hardware that is faster and better designed. Using the Kindle app on a Boox device actually provides a better experience than using a Kindle. You get smoother scrolling and the ability to use third-party fonts without any restrictions. Plus, you can run other apps like Spotify for background music or Notion and Goodreads for book tracking. You are no longer limited to what Amazon thinks you should be doing with your device. Instead, you are in full control of the software experience.

The alternatives have caught up and, in some cases, surpassed Kindle.

Another area where Amazon used to lead was display quality, but that gap has closed. The newest Kobo and Boox devices are using the latest e-ink Carta 1300 panels. These panels offer significantly better contrast and faster refresh rates than the older Carta 1200 found in most Kindles. This means virtually non-existent ghosting and text that looks perfectly crisp.

Having used a range of Boox hardware, I can say the Boox Go 10.3 is a particularly impressive piece of hardware. Between the high-resolution screen and a panel that sits closer to the surface, you get a remarkably paper-like experience. Plus, the stylus integration goes above and beyond what you’ll find on equivalent Kindle hardware. If you do any amount of note-taking, the Scribe feels like a toy compared to the much more feature-packed Boox tablets. As I mentioned earlier, the Kindle really isn’t the epitome of a quality reading experience anymore.
The biggest fear people have when leaving behind the Kindle is that they will lose access to books. This is a myth. While Amazon does have some exclusive self-published titles, the vast majority of mainstream books are available on every platform. Kobo, Google Play Books, and Apple Books all have catalogs that rival Amazon’s in size. In many cases, you can actually find better deals on these platforms.

Even Amazon seems to be acknowledging that it can’t take its audience for granted. Starting in January 2026, Amazon has begun allowing users to download DRM-free (Digital Rights Management) versions of select ePub and PDF files directly from their management page. This only applies to books where the publisher has opted out of DRM, but it is a massive shift. It proves that even Amazon knows proprietary formats are becoming a liability in a market that is moving toward open standards.

Digital ownership only exists if you can take your library with you.
For the books you already own that still have DRM, you do not have to leave them behind. There are ways to manage your digital library using tools like Calibre and a few plugins that let you import your Kindle purchases into a central database. This allows you to convert them to ePub and move them to any device you choose.
The goal isn’t just convenience. Digital preservation is extremely important to me and millions of other users. If Amazon decides to delete a book from its servers or shut down your account, you still have the file you paid for. Having a local, DRM-free backup of your library is the only way to ensure that your collection survives the whims of a multi-trillion-dollar corporation. Once your books are in Calibre, you can use powerful tools to fix metadata, add high-resolution covers, and read them on whatever device you want.
The e-reader market in 2026 is the most competitive it has ever been. We have reached a point where Amazon’s ecosystem no longer offers enough unique value to justify its restrictions. Combine that with Amazon’s move to brick older hardware, and to me it is the final sign that the customer is not the priority for the company. Between the seamless library integration of Kobo and the raw power of Android-based readers from Boox, there is no reason to buy another Kindle.

There’s no reason to stay locked in when better, more open options exist.
If you want the best reading experience, buy a Kobo. If you want a powerful e-ink tablet that does everything, buy a Boox. If you want to actually own the books you pay for, use Calibre. But until Amazon turns the ship around with its digital and hardware policies, I do not plan to give Amazon another cent for a device that it can take away from me with a single server-side update. My library deserves better than that. And so does yours.
...
Read the original on www.androidauthority.com »
Alongside the release of the Kimi K2.6 model, we are open-sourcing the Kimi Vendor Verifier (KVV) project, designed to help users of open-source models verify the accuracy of their inference implementations.
Not as an afterthought, but because we learned the hard way that open-sourcing a model is only half the battle. The other half is ensuring it runs correctly everywhere else.
The K2VV evaluation results for the Kimi API, including the F1 score calculations, are published alongside the project.
Since the release of K2 Thinking, we have received frequent feedback from the community regarding anomalies in benchmark scores. Our investigation confirmed that a significant portion of these cases stemmed from misuse of decoding parameters. To mitigate this immediately, we built our first line of defense at the API level: enforcing temperature=1.0 and top_p=0.95 in Thinking mode, with mandatory validation that thinking content is correctly passed back.
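A guard of this kind is simple to express. The sketch below is hypothetical — the function and field names are mine, not Kimi's actual server code — but it mirrors the constraints described above:

```python
def validate_thinking_request(params: dict) -> dict:
    """Enforce the decoding constraints for Thinking mode:
    temperature must be 1.0, top_p must be 0.95, and the prior
    thinking content must be passed back with the request."""
    enforced = {"temperature": 1.0, "top_p": 0.95}
    for key, required in enforced.items():
        # Missing keys fall back to the enforced default; wrong values are rejected.
        if params.get(key, required) != required:
            raise ValueError(f"{key} must be {required} in Thinking mode")
    if not params.get("thinking_content"):
        raise ValueError("thinking content must be passed back in Thinking mode")
    return params
```

Rejecting the request outright, rather than silently clamping the values, makes the misconfiguration visible to the caller instead of quietly skewing benchmark results.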
However, more subtle anomalies soon raised alarms. In a specific evaluation on LiveBenchmark, we observed a stark contrast between third-party APIs and the official API. After extensive testing of various infrastructure providers, we found that this difference is widespread.
This exposed a deeper problem in the open-source model ecosystem: The more open the weights are, and the more diverse the deployment channels become, the less controllable the quality becomes.
If users cannot distinguish between “model capability defects” and “engineering implementation deviations,” trust in the open-source ecosystem will inevitably collapse.
Pre-Verification: Validates that API parameter constraints (temperature, top_p, etc.) are correctly enforced. All tests must pass before proceeding to benchmark evaluation.
K2VV ToolCall: Measures trigger consistency (F1) and JSON Schema accuracy. Tool errors compound in agents; we catch them early.
SWE-Bench: Full agentic coding test. (Not open-sourced due to a dependency on a sandbox.)
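The trigger-consistency F1 above reduces to a standard precision/recall computation over whether each test case produced a tool call. A minimal sketch — the function and argument names are mine, not from the KVV code — might look like:

```python
def toolcall_f1(reference: list[bool], predicted: list[bool]) -> float:
    """F1 over tool-call triggering: reference[i] is whether the official
    API triggered a tool call on case i, predicted[i] whether the vendor
    under test did. Perfect agreement on triggers yields 1.0."""
    tp = sum(r and p for r, p in zip(reference, predicted))
    fp = sum((not r) and p for r, p in zip(reference, predicted))
    fn = sum(r and (not p) for r, p in zip(reference, predicted))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A vendor that silently drops one tool call out of three correct triggers already falls to F1 = 0.8, which is why trigger consistency is a sensitive early signal.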
Upstream Fix: We embed with vLLM/SGLang/KTransformers communities to fix root causes, not just detect symptoms.
Pre-Release Validation: Rather than waiting for post-deployment complaints, we provide early access to test models. This lets infrastructure providers validate their stacks before users encounter issues.
Continuous Benchmarking: We will maintain a public leaderboard of vendor results. This transparency encourages vendors to prioritize accuracy.
We completed full evaluation workflow validation on two NVIDIA H20 8-GPU servers, with sequential execution taking approximately 15 hours. To improve evaluation efficiency, the scripts have been optimized for long-running inference scenarios, including streaming inference, automatic retry, and checkpoint resumption mechanisms.
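The retry-and-resume pattern mentioned above can be sketched in a few lines. This is an illustrative outline under my own naming, not the project's actual scripts:

```python
def run_with_resume(cases, evaluate, checkpoint, max_retries=3):
    """Evaluate (case_id, case) pairs, skipping anything already in
    `checkpoint` (a dict of case_id -> result) and retrying transient
    failures, so a 15-hour run can be interrupted and resumed without
    losing completed work."""
    for case_id, case in cases:
        if case_id in checkpoint:
            continue  # already evaluated in a previous run
        for attempt in range(max_retries):
            try:
                checkpoint[case_id] = evaluate(case)
                break
            except Exception:
                if attempt == max_retries - 1:
                    checkpoint[case_id] = None  # record a permanent failure
    return checkpoint
```

In a real harness the checkpoint dict would be persisted to disk after each case; keeping results keyed by case ID is what makes the run order-independent and resumable.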
Weights are open. The knowledge to run them correctly must be too.
We are expanding vendor coverage and seeking lighter agentic tests. Contact Us: [email protected]
...
Read the original on www.kimi.com »
The DSP is offering ad placements driven by prompt relevance and dangling CPMs ranging from $15 to $60, with a $50,000 minimum spend for the pilot.
StackAdapt is quietly courting advertisers to test ads inside ChatGPT. The independent demand-side platform is dangling CPMs as low as $15 alongside discounted platform and management fees. The company is framing the push as early access to a new “discovery layer”—one that captures people in the middle of researching and comparing products on ChatGPT.

According to a pitch deck titled “OpenAI x StackAdapt Limited Pilot Program,” shared with select buyers on March 27 and reviewed by ADWEEK, the company is positioning the offering as an early-stage test inside a still-developing ad system.

“StackAdapt has partnered with OpenAI to enable advertising within ChatGPT, one of the fastest growing consumer platforms in the world,” the deck reads.
Trishla Ostwal
Trishla is an Adweek staff reporter covering AI and tech.
...
Read the original on www.adweek.com »
10HN is also available as an iOS App
If you visit 10HN only rarely, check out the best articles from the past week.
If you like 10HN, please leave feedback and share.
Visit pancik.com for more.