10 interesting stories served every morning and every evening.




1 2,061 shares, 84 trendiness

Tim Cook to become Apple Executive Chairman; John Ternus to become Apple CEO


CUPERTINO, CALIFORNIA: Apple announced that Tim Cook will become executive chairman of Apple’s board of directors and John Ternus, senior vice president of Hardware Engineering, will become Apple’s next chief executive officer, effective September 1, 2026. The transition, which was approved unanimously by the Board of Directors, follows a thoughtful, long-term succession planning process.

Cook will con­tinue in his role as CEO through the sum­mer as he works closely with Ternus on a smooth tran­si­tion. As ex­ec­u­tive chair­man, Cook will as­sist with cer­tain as­pects of the com­pany, in­clud­ing en­gag­ing with pol­i­cy­mak­ers around the world.

“It has been the greatest privilege of my life to be the CEO of Apple and to have been trusted to lead such an extraordinary company. I love Apple with all of my being, and I am so grateful to have had the opportunity to work with a team of such ingenious, innovative, creative, and deeply caring people who have been unwavering in their dedication to enriching the lives of our customers and creating the best products and services in the world,” said Cook. “John Ternus has the mind of an engineer, the soul of an innovator, and the heart to lead with integrity and with honor. He is a visionary whose contributions to Apple over 25 years are already too numerous to count, and he is without question the right person to lead Apple into the future. I could not be more confident in his abilities and his character, and I look forward to working closely with him on this transition and in my new role as executive chairman.”

“I am profoundly grateful for this opportunity to carry Apple’s mission forward,” said Ternus. “Having spent almost my entire career at Apple, I have been lucky to have worked under Steve Jobs and to have had Tim Cook as my mentor. It has been a privilege to help shape the products and experiences that have changed so much of how we interact with the world and with one another. I am filled with optimism about what we can achieve in the years to come, and I am so happy to know that the most talented people on earth are here at Apple, determined to be part of something bigger than any one of us. I am humbled to step into this role, and I promise to lead with the values and vision that have come to define this special place for half a century.”

Arthur Levinson, who has been Apple’s non-ex­ec­u­tive chair­man for the past 15 years, will be­come its lead in­de­pen­dent di­rec­tor on September 1, 2026. Ternus will join the board of di­rec­tors, also ef­fec­tive September 1, 2026.

“Tim’s unprecedented and outstanding leadership has transformed Apple into the world’s best company. He’s introduced groundbreaking products and services time and again, and his integrity and values are infused into everything Apple does,” said Levinson. “On behalf of the entire board of directors, we are incredibly grateful for his countless contributions to Apple and the world, and we are thrilled he will now be executive chairman. We believe John is the best possible leader to succeed Tim, and as he transitions to CEO we know his love of Apple, his leadership, deep technical knowledge, and relentless focus on creating great products will help lead Apple to an extraordinary future.”

“I want to thank Art for the incredible work he has done leading the board of directors for the past 15 years,” said Cook. “I have always found his advice to be invaluable, and I appreciate his thoughtfulness and his unwavering dedication to the company. I am grateful he will serve as our lead independent director, and I look forward to working with him in my new role.”

Tim Cook joined Apple in 1998. He be­came CEO in 2011 and has over­seen the in­tro­duc­tion of nu­mer­ous prod­ucts and ser­vices, in­clud­ing new cat­e­gories like Apple Watch, AirPods, and Apple Vision Pro, and ser­vices rang­ing from iCloud and Apple Pay to Apple TV and Apple Music. He was also in­stru­men­tal in ex­pand­ing ex­ist­ing prod­uct lines. Under Cook’s lead­er­ship Apple has grown from a mar­ket cap­i­tal­iza­tion of ap­prox­i­mately $350 bil­lion to $4 tril­lion, rep­re­sent­ing a more than 1,000% in­crease, and yearly rev­enue has nearly quadru­pled, from $108 bil­lion in fis­cal year 2011 to more than $416 bil­lion in fis­cal year 2025. The com­pany has ex­panded its global foot­print sub­stan­tially, par­tic­u­larly in emerg­ing mar­kets; it is now in more than 200 coun­tries and ter­ri­to­ries. Apple op­er­ates over 500 re­tail stores and has more than dou­bled the num­ber of coun­tries in which its cus­tomers can visit an Apple Store. During his tenure, Apple has grown by more than 100,000 team mem­bers and in­creased its ac­tive in­stalled base to more than 2.5 bil­lion de­vices.

Apple Services has been a ma­jor fo­cus area of Cook’s, and dur­ing his tenure the cat­e­gory has grown to be­come a more than $100 bil­lion busi­ness, the equiv­a­lent of a Fortune 40 com­pany. Cook was also in­stru­men­tal in cre­at­ing the wear­ables cat­e­gory at Apple, which now in­cludes the world’s most pop­u­lar watch and head­phones, and which has served as the foun­da­tion for Apple’s re­mark­able im­pact on the health and safety of its users. Under Cook’s lead­er­ship, Apple also tran­si­tioned to Apple-designed sil­i­con, en­abling the com­pany to own more of its pri­mary tech­nol­ogy and de­liver in­dus­try-lead­ing gains in power ef­fi­ciency and per­for­mance that di­rectly ben­e­fit users across its prod­ucts.

Cook has made Apple’s core val­ues even more cen­tral to the com­pa­ny’s de­ci­sion mak­ing and prod­uct de­vel­op­ment. Under his lead­er­ship, the com­pany re­duced its car­bon foot­print by more than 60 per­cent be­low 2015 lev­els dur­ing a pe­riod in which rev­enue nearly dou­bled. Cook, who has long ad­vo­cated for pri­vacy as a fun­da­men­tal hu­man right, has made pri­vacy and se­cu­rity im­per­a­tive at Apple, set­ting a stan­dard for user pro­tec­tion that con­tin­ues to set the com­pany apart from the rest of the tech­nol­ogy in­dus­try. He has also pushed for con­tin­ued in­no­va­tion in the ac­ces­si­bil­ity space, be­liev­ing that Apple prod­ucts should be made for every­one. And he has made cen­tral to his lead­er­ship the no­tion that Apple should be a place where every­one can feel they be­long and where every­one is treated with dig­nity and re­spect.

Ternus joined Apple’s prod­uct de­sign team in 2001 and be­came a vice pres­i­dent of Hardware Engineering in 2013. He joined the ex­ec­u­tive team in 2021 as se­nior vice pres­i­dent of Hardware Engineering. Throughout his tenure at Apple, Ternus has over­seen hard­ware en­gi­neer­ing work on a va­ri­ety of ground­break­ing prod­ucts across every cat­e­gory. He was in­stru­men­tal in the in­tro­duc­tion of mul­ti­ple new prod­uct lines, in­clud­ing iPad and AirPods, as well as many gen­er­a­tions of prod­ucts across iPhone, Mac, and Apple Watch.

Ternus’s work on Mac has helped the cat­e­gory be­come more pow­er­ful and more pop­u­lar glob­ally than at any time in its 40-year his­tory. That in­cludes the re­cent in­tro­duc­tion of MacBook Neo, an all-new lap­top that makes the Mac ex­pe­ri­ence even more ac­ces­si­ble to more peo­ple around the world. This past fall, his team’s ef­forts were on full dis­play with the in­tro­duc­tion of a re­de­fined iPhone lineup, in­clud­ing the in­cred­i­bly pow­er­ful iPhone 17 Pro and Pro Max, the rad­i­cally thin and durable iPhone Air, and the iPhone 17, which has been an in­cred­i­ble up­grade for users. Under his lead­er­ship, his team also drove ad­vance­ments in AirPods to make them the world’s best in-ear head­phones, with un­prece­dented ac­tive noise can­cel­la­tion, as well as the ca­pa­bil­ity to be­come an all-in-one hear­ing health sys­tem that can serve as over-the-counter hear­ing aids.

Ternus led much of the com­pa­ny’s fo­cus in ar­eas like re­li­a­bil­ity and dura­bil­ity, in­tro­duc­ing new tech­niques that have made Apple prod­ucts re­mark­ably re­silient. He has also dri­ven much of Apple’s in­no­va­tion in ma­te­ri­als and hard­ware de­sign that have re­duced the car­bon foot­print of its prod­ucts, in­clud­ing the cre­ation of a new, re­cy­cled alu­minum com­pound that has been in­tro­duced across mul­ti­ple prod­uct lines, the use of 3-D printed ti­ta­nium in Apple Watch Ultra 3, and in­no­va­tions in re­pairabil­ity that have in­creased the lifes­pans of sev­eral Apple prod­ucts.

Prior to Apple, Ternus worked as a me­chan­i­cal en­gi­neer at Virtual Research Systems. He holds a bach­e­lor’s de­gree in Mechanical Engineering from the University of Pennsylvania.

This press release contains forward-looking statements, within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements include without limitation those about Apple’s executive succession plans. These statements involve risks and uncertainties, and actual results may differ materially from any future results expressed or implied by the forward-looking statements. More information regarding potential risks and other factors that could affect the company is included in Apple’s filings with the SEC, including in the “Risk Factors” and “Management’s Discussion and Analysis of Financial Condition and Results of Operations” sections of Apple’s most recently filed periodic reports on Form 10-K and Form 10-Q and subsequent filings. Apple assumes no obligation to update any forward-looking statements or information, which speak only as of the date they are made.

About Apple

Apple rev­o­lu­tion­ized per­sonal tech­nol­ogy with the in­tro­duc­tion of the Macintosh in 1984. Today, Apple leads the world in in­no­va­tion with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple’s six soft­ware plat­forms — iOS, iPa­dOS, ma­cOS, watchOS, vi­sionOS, and tvOS — pro­vide seam­less ex­pe­ri­ences across all Apple de­vices and em­power peo­ple with break­through ser­vices in­clud­ing the App Store, Apple Music, Apple Pay, iCloud, and Apple TV+. Apple’s more than 150,000 em­ploy­ees are ded­i­cated to mak­ing the best prod­ucts on earth and to leav­ing the world bet­ter than we found it.

© 2026 Apple Inc. All rights re­served. Apple, the Apple logo, Apple Watch, AirPods, Apple Vision Pro, iCloud, Apple Pay, Apple TV, Apple Music, Apple Store, iPad, iPhone, Mac, MacBook Neo, and iPhone Air are trade­marks of Apple. Other com­pany and prod­uct names may be trade­marks of their re­spec­tive own­ers.

...

Read the original on www.apple.com »

2 680 shares, 24 trendiness

Advancing Open-Source Coding

We are open sourc­ing our lat­est model, Kimi K2.6, fea­tur­ing state-of-the-art cod­ing, long-hori­zon ex­e­cu­tion, and agent swarm ca­pa­bil­i­ties. Kimi K2.6 is now avail­able via Kimi.com, the Kimi App, the API, and Kimi Code.

Kimi K2.6 shows strong im­prove­ments in long-hori­zon cod­ing tasks, with re­li­able gen­er­al­iza­tion across pro­gram­ming lan­guages (e.g., Rust, Go, and Python) and tasks (e.g., front-end, de­vops, and per­for­mance op­ti­miza­tion). On Kimi Code Bench, our in­ter­nal cod­ing bench­mark cov­er­ing di­verse com­pli­cated end-to-end tasks, Kimi K2.6 demon­strates sig­nif­i­cant im­prove­ments over Kimi K2.5.

Kimi K2.6 suc­cess­fully down­loaded and de­ployed the Qwen3.5-0.8B model lo­cally on a Mac. By im­ple­ment­ing and op­ti­miz­ing model in­fer­ence in Zig—a highly niche pro­gram­ming lan­guage—it demon­strated ex­cep­tional out-of-dis­tri­b­u­tion gen­er­al­iza­tion. Across 4,000+ tool calls, over 12 hours of con­tin­u­ous ex­e­cu­tion, and 14 it­er­a­tions, Kimi K2.6 dra­mat­i­cally im­proved through­put from ~15 to ~193 to­kens/​sec, ul­ti­mately achiev­ing speeds ~20% faster than LM Studio.

Kimi K2.6 autonomously overhauled exchange-core, an 8-year-old open-source financial matching engine. Over a 13-hour execution, the model iterated through 12 optimization strategies, initiating over 1,000 tool calls to precisely modify more than 4,000 lines of code. Acting as an expert systems architect, Kimi K2.6 analyzed CPU and allocation flame graphs to pinpoint hidden bottlenecks and boldly reconfigured the core thread topology (from 4ME+2RE to 2ME+1RE). Despite the engine already operating near its performance limits, Kimi K2.6 extracted a 185% median throughput leap (from 0.43 to 1.24 MT/s) and a 133% peak throughput gain (soaring from 1.23 to 2.86 MT/s).

In beta tests, K2.6 per­forms well on long-hori­zon cod­ing tasks in en­ter­prise eval­u­a­tions (randomly or­dered):

Building on its strong coding capabilities, Kimi K2.6 can turn simple prompts into complete front-end interfaces, generating structured layouts with deliberate design choices such as aesthetic hero sections, as well as interactive elements and rich animations, including scroll-triggered effects. With strong proficiency in leveraging image and video generation tools, Kimi K2.6 supports the generation of visually coherent assets and contributes to higher-quality, more salient hero sections.

Moreover, Kimi K2.6 ex­pands be­yond sta­tic fron­tend de­vel­op­ment to sim­ple full-stack work­flows—span­ning au­then­ti­ca­tion to user in­ter­ac­tion to data­base op­er­a­tions for light­weight use cases like trans­ac­tion log­ging or ses­sion man­age­ment.

We es­tab­lished an in­ter­nal Kimi Design Bench, or­ga­nized into four cat­e­gories: Visual Input Tasks, Landing Page Construction, Full-Stack Application Development, and General Creative Programming. In com­par­i­son with Google AI Studio, Kimi K2.6 shows promis­ing re­sults and per­forms well across these cat­e­gories.

Below are ex­am­ples gen­er­ated by K2.6 Agent from a sin­gle prompt, with pre­con­fig­ured har­nesses and tools:

Scaling out, not just up. An Agent Swarm dy­nam­i­cally de­com­poses tasks into het­ero­ge­neous sub­tasks ex­e­cuted con­cur­rently by self-cre­ated do­main-spe­cial­ized agents.

Based on the K2.5 Agent Swarm re­search pre­view, Kimi K2.6 Agent Swarm demon­strates a qual­i­ta­tive leap in the agent swarm ex­pe­ri­ence. It seam­lessly co­or­di­nates het­ero­ge­neous agents to com­bine com­ple­men­tary skills: broad search lay­ered with deep re­search, large-scale doc­u­ment analy­sis fused with long-form writ­ing, and multi-for­mat con­tent gen­er­a­tion ex­e­cuted in par­al­lel. This com­po­si­tional in­tel­li­gence en­ables the swarm to de­liver end-to-end out­puts—span­ning doc­u­ments, web­sites, slides, and spread­sheets—within a sin­gle au­tonomous run.

The architecture scales horizontally to 300 sub-agents executing across 4,000 coordinated steps simultaneously, a substantial expansion from K2.5’s 100 sub-agents and 1,500 steps. This massive parallelization fundamentally reduces end-to-end latency while significantly enhancing output quality and expanding the operational boundaries of Agent Swarms.

It can also turn high-quality files, such as PDFs, spreadsheets, slides, and Word documents, into Skills. Kimi K2.6 captures and maintains the documents’ structural and stylistic DNA, enabling you to reproduce the same quality and format in future tasks.

Here are some ex­am­ples:

K2.6 demon­strates strong per­for­mance in au­tonomous, proac­tive agents such as OpenClaw and Hermes, which op­er­ate across mul­ti­ple ap­pli­ca­tions with con­tin­u­ous, 24/7 ex­e­cu­tion.

Unlike sim­ple chat-based in­ter­ac­tions, these work­flows re­quire AI to proac­tively man­age sched­ules, ex­e­cute code, and or­ches­trate cross-plat­form op­er­a­tions as a per­sis­tent back­ground agent.

Our RL infra team used a K2.6-backed agent that operated autonomously for 5 days, managing monitoring, incident response, and system operations, demonstrating persistent context, multi-threaded task handling, and full-cycle execution from alert to resolution. Here is K2.6’s worklog (anonymized to remove sensitive information):

Kimi K2.6 de­liv­ers mea­sur­able im­prove­ments in real-world re­li­a­bil­ity: more pre­cise API in­ter­pre­ta­tion, sta­bler long-run­ning per­for­mance, and en­hanced safety aware­ness dur­ing ex­tended re­search tasks.

Performance gains are quan­ti­fied by our in­ter­nal Claw Bench, the eval­u­a­tion suite span­ning five do­mains: Coding Tasks, IM Ecosystem Integration, Information Research & Analysis, Scheduled Task Management, and Memory Utilization. Across all met­rics, Kimi K2.6 sig­nif­i­cantly out­per­forms Kimi K2.5 in task com­ple­tion rates and tool in­vo­ca­tion ac­cu­racy—par­tic­u­larly in work­flows re­quir­ing sus­tained au­tonomous op­er­a­tion with­out hu­man over­sight.

Building upon these robust orchestration capabilities, Kimi K2.6 extends your proactive agents to Claw Groups as a research preview—a new instantiation of the Agent Swarm architecture.

Claw Groups em­brace an open, het­ero­ge­neous ecosys­tem: Multiple agents and hu­mans op­er­ate as true col­lab­o­ra­tors. Users can on­board agents from any de­vice, run­ning any model, each car­ry­ing their own spe­cial­ized toolk­its, skills and per­sis­tent mem­ory con­texts. Whether de­ployed on lo­cal lap­tops, mo­bile de­vices, or cloud in­stances, these di­verse agents in­te­grate seam­lessly into a shared op­er­a­tional space.

At the cen­ter of this swarm, Kimi K2.6 serves as an adap­tive co­or­di­na­tor. It dy­nam­i­cally matches tasks to agents based on their spe­cific skill pro­files and avail­able tools, op­ti­miz­ing for ca­pa­bil­ity fit. When an agent en­coun­ters fail­ure or stalls, the co­or­di­na­tor de­tects the in­ter­rup­tion, au­to­mat­i­cally re­as­signs the task or re­gen­er­ates sub­tasks, and ac­tively man­ages the full life­cy­cle of de­liv­er­ables—from ini­ti­a­tion through val­i­da­tion to com­ple­tion.
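As a toy illustration of that loop (my own sketch of the pattern, not Kimi's actual implementation), a coordinator can match tasks to agents by skill overlap and reassign work when an agent fails or stalls:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skills: set
    healthy: bool = True

def match(task_skills, agents):
    """Pick the healthy agent whose skill profile best covers the task
    (capability fit measured as the size of the skill overlap)."""
    candidates = [a for a in agents if a.healthy]
    if not candidates:
        raise RuntimeError("no healthy agents left")
    return max(candidates, key=lambda a: len(a.skills & task_skills))

def run(task_skills, agents, execute):
    """Dispatch a task; when the assigned agent fails or stalls, mark it
    unhealthy and reassign the task to the next-best agent."""
    while True:
        agent = match(task_skills, agents)
        try:
            return execute(agent)
        except Exception:
            agent.healthy = False  # interruption detected: reassign
```

A real coordinator would also regenerate subtasks and validate deliverables; this only shows the detect-and-reassign skeleton.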

We also want to thank the K2.6-powered agents in Claw Groups—we’ve been dogfooding them within our own marketing team, refining human–agent workflows in practice. Using Claw Groups, we run end-to-end content production and launch campaigns, with specialized agents like Demo Makers, Benchmark Makers, Social Media Agents, and Video Makers working together. K2.6 coordinates the process, enabling agents to share intermediate results and turn ideas into consistent, fully packaged deliverables.

We are moving beyond simply asking AI a question or assigning AI a task, and entering a phase where humans and AI collaborate as genuine partners—combining strengths to solve problems collectively. Claw Groups marks our latest effort toward a future where the boundaries between “my agent,” “your agent,” and “our team” dissolve seamlessly into a collaborative system.

To re­pro­duce of­fi­cial Kimi-K2.6 bench­mark re­sults, we rec­om­mend us­ing the of­fi­cial API. For third-party providers, re­fer to Kimi Vendor Verifier (KVV) to choose high-ac­cu­racy ser­vices. Details: https://​kimi.com/​blog/​kimi-ven­dor-ver­i­fier

* We re­port re­sults for Kimi K2.6 and Kimi K2.5 with think­ing mode en­abled, Claude Opus 4.6 with max ef­fort, GPT-5.4 with xhigh rea­son­ing ef­fort, and Gemini 3.1 Pro with a high think­ing level.

* Unless oth­er­wise spec­i­fied, all Kimi K2.6 ex­per­i­ments were con­ducted with tem­per­a­ture = 1.0, top-p = 1.0, and a con­text length of 262,144 to­kens.

* Benchmarks with­out pub­licly avail­able scores were re-eval­u­ated un­der the same con­di­tions used for Kimi K2.6 and are marked with an as­ter­isk (*). Except where noted with an as­ter­isk, all other re­sults are cited from of­fi­cial re­ports.

* IMO-AnswerBench scores for GPT-5.4 and Claude 4.6 were ob­tained from https://​z.ai/​blog/​glm-5.1.

* Humanity’s Last Exam (HLE) and other rea­son­ing tasks were eval­u­ated with a max­i­mum gen­er­a­tion length of 98,304 to­kens. By de­fault, we re­port re­sults on the HLE full set. For the text-only sub­set, Kimi K2.6 achieves 36.4% ac­cu­racy with­out tools and 55.5% with tools.

* Kimi K2.6 was equipped with search, code-in­ter­preter, and web-brows­ing tools for HLE with tools, BrowseComp, DeepSearchQA, and WideSearch.

* For HLE-Full with tools, the max­i­mum gen­er­a­tion length is 262,144 to­kens with a per-step limit of 49,152 to­kens. We em­ploy a sim­ple con­text man­age­ment strat­egy: once the con­text win­dow ex­ceeds the thresh­old, only the most re­cent round of tool-re­lated mes­sages is re­tained.

* For BrowseComp, we re­port scores ob­tained with con­text man­age­ment us­ing the same dis­card-all strat­egy as Kimi K2.5 and DeepSeek-V3.2.

* For DeepSearchQA, no con­text man­age­ment was ap­plied to Kimi K2.6 tests, and tasks ex­ceed­ing the sup­ported con­text length were di­rectly counted as failed. Scores for Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro on DeepSearchQA are cited from the Claude Opus 4.7 System Card.

* For WideSearch, we report results under the “hide tool result” context management setting. Once the context window exceeds the threshold, only the most recent round of tool-related messages is retained.

* The test sys­tem prompts are iden­ti­cal to those used in the Kimi K2.5 tech­ni­cal re­port.

* Claw Eval was con­ducted us­ing ver­sion 1.1 with max-to­kens-per-step = 16384.

* For APEX-Agents, we eval­u­ate 452 tasks from the pub­lic 480-task re­lease, as done by Artificial Analysis (excluding Investment Banking Worlds 244 and 246, which have ex­ter­nal run­time de­pen­den­cies).

* Terminal-Bench 2.0 scores were ob­tained with the de­fault agent frame­work (Terminus-2) and the pro­vided JSON parser, op­er­at­ing in pre­serve think­ing mode.

* For the SWE-Bench se­ries of eval­u­a­tions (including Verified, Multilingual, and Pro), we used an in-house eval­u­a­tion frame­work adapted from SWE-agent. This frame­work in­cludes a min­i­mal set of tools—bash tool, cre­ate­file tool, in­sert tool, view tool, str­re­place tool, and sub­mit tool.

* All re­ported scores for cod­ing tasks are av­er­aged over 10 in­de­pen­dent runs.

* Settings with Python tool use max-to­kens-per-step = 65,536 and max-steps = 50 for multi-step rea­son­ing.

* MMMU-Pro fol­lows the of­fi­cial pro­to­col, pre­serv­ing in­put or­der and prepend­ing im­ages.
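The context-management strategy mentioned in the notes above (once the context window exceeds a threshold, keep only the most recent round of tool-related messages) can be sketched as follows; the message schema and token counter here are illustrative assumptions, not the report's exact implementation:

```python
def trim_context(messages, max_tokens, count_tokens):
    """Once the conversation exceeds max_tokens, drop tool calls and tool
    results from all but the most recent round, keeping ordinary user and
    assistant turns intact. Messages are assumed to be dicts with a
    'role' key and an optional 'tool_calls' key."""
    total = sum(count_tokens(m.get("content", "")) for m in messages)
    if total <= max_tokens:
        return messages  # under the threshold: keep everything

    # The last round starts at the final assistant turn that issued tool calls.
    last_round_start = 0
    for i, m in enumerate(messages):
        if m["role"] == "assistant" and m.get("tool_calls"):
            last_round_start = i

    return [
        m for i, m in enumerate(messages)
        if i >= last_round_start or not (m["role"] == "tool" or m.get("tool_calls"))
    ]
```

The appeal of this strategy is that it is stateless and cheap: no summarization model is needed, and the agent always sees its most recent tool observations in full.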

...

Read the original on www.kimi.com »

3 603 shares, 15 trendiness

At Long Last, InfoWars Is Ours

Let me tell you a story. When I was a child, I suf­fered from night ter­rors. It was al­ways the same dream: I could hear my fam­ily and neigh­bors wail­ing in the street out­side as they were pur­sued and then de­stroyed by a name­less malev­o­lent force, some­thing nei­ther I nor any­one else could con­trol, a great dark­ness that was, some­how, all my fault.

Today, that child­hood dream is fi­nally com­ing true. Today I can fi­nally say the sweet­est nine or 10 words in the English lan­guage: Global Tetrahedron has com­pleted its plan to con­trol InfoWars.com.

I’ve had a lot of time to think about InfoWars in the last year and a half. As the sea­sons have changed, my am­bi­tions for the pro­ject have grown grander, cru­eler, bet­ter aligned with mar­ket data. Come, friends, and imag­ine with me…

Imagine a roar­ing arena packed to the rafters with patho­log­i­cal liars. High above you in the nose­bleeds are pod­cast­ers, scream­ing that you’ll die if you don’t buy their skin­care prod­ucts. Below, on the floor, imag­ine de­monic bat­tal­ions of su­per-in­flu­encers phys­i­cally forc­ing peo­ple into home fit­ness de­vices de­signed to dis­man­tle their bod­ies bone by bone and re­assem­ble them into a grotesque statue of your­self. Out of the throngs, an ex­tremely sick look­ing man ap­proaches you. He puts his hands on your shoul­ders. He ex­plains that he is your life coach and that you owe him $800.

Such is the InfoWars I en­vi­sion: An in­fi­nite vir­tual sur­face teem­ing with ads. Not just ads, but scams! Not just scams, but lies with no ob­ject, free rad­i­cal mis­in­for­ma­tion, sen­tences and im­ages so poorly thought out that they are un­healthy even to view for just a few sec­onds. The InfoWars of old was only the pro­to­type for the hell I know we can build to­gether: A dig­i­tal plat­form where, every day, vis­i­tors sac­ri­fice them­selves at al­tars of delu­sion and mis­ery, their minds fully dis­in­te­grat­ing on con­tact.

With this new InfoWars, we will de­moc­ra­tize psy­cho­log­i­cal tor­ture, wel­com­ing bru­tal and sadis­tic ideas from every­one, even the very stu­pid­est among us. It will be like the Manhattan Project, only in­stead of a bomb, we will be build­ing a web­site.

The InfoWars of to­mor­row will con­verge into a swirling vor­tex of con­tent about con­tent, tal­ent ac­quir­ing tal­ent, rings of con­cen­tric me­dia merg­ers pro­cess­ing all hu­man artistry into one end­lessly di­gestible slurry. This will be a dank, sun­less place, one where panic and cap­i­tal feed on each other like twins in the womb of a hulk­ing, un­know­able mon­ster—a mon­ster known by many names, but which I like to call mod­ern-day America.

All of this is to say that I be­lieve in us. I be­lieve that with the new InfoWars, we can al­chem­ize the pi­o­neer­ing spirit of am­a­teur in­quiry, the profit-max­i­miz­ing drive of cor­po­ra­tions, and the cold men­tal clar­ity that comes only with dis­ci­plined daily in­ges­tion of mind- and body-al­ter­ing chem­i­cals. If we can do that, what other great things can we do to­gether?

I don’t yet know, but I’m ex­cited to find out. Welcome home, war­riors. The fu­ture be­longs to us. We’re writ­ing the story now. It’s go­ing to be a long one, and it’s go­ing to be a bad one.

So set­tle in. Make your­self com­fort­able. Buy a tote bag.

Nothing can stop us now that we’re in charge of a web­site.

...

Read the original on theonion.com »

4 370 shares, 8 trendiness

AI Resistance is Growing

As the in­ter­net chokes on ever more slop, the one thing that gives me hope is this: peo­ple seem to loathe AI, and are ac­tively re­sist­ing it. This won’t be a long post, as I’m per­son­ally so tired of writ­ing and think­ing about AI at this point in time, but I do want to draw your at­ten­tion here to some re­cent anti-AI stuff that’s worth dis­cussing.

r/​Poi­son­Foun­tain, cre­ated by in­di­vid­u­als who claim to be con­cerned AI in­dus­try in­sid­ers, is a com­mu­nity with one goal: en­cour­age as many peo­ple as pos­si­ble to feed huge quan­ti­ties of trash data (poison) to all of the web crawlers out there that are scrap­ing our work for AI train­ing sets. They aim to serve one ter­abyte of poi­son per day to these crawlers by the end of 2026.

The poison fountain itself is hosted on rnsaffn.com, sandwiched between several garbage links that look irresistible to AI crawlers; it produces a page of code that seems correct at first glance but is actually riddled with subtle errors that render the code unusable. Filtering out these errors is possible, but expensive at scale. Since these companies can’t improve their AI models without fresh data created by human beings, the idea here is to waste their time and make it expensive for them to steal our data.

Miasma is one example of a tool that uses the fountain to serve massive amounts of garbage to malicious bots. The developer describes it as “an endless buffet of slop for the slop machines,” which is delightful. I can’t use Miasma with my site’s setup, but it may be of interest to those of you who could. I deliver my trash to crawlers using other means … some visible, some invisible. While I can’t serve it up to anywhere near the same extent as Miasma can, I do catch sneaky bots with my junk links every day.

If you’re pro-AI and feel out­raged on be­half of these com­pa­nies that any­one would dare try to make life dif­fi­cult for them, please know that this is sim­ply a case of tit for tat. The teams that send AI crawlers out into the world wide web are DDoSing small web­sites on the reg­u­lar and rais­ing host­ing fees for every­one with their vo­ra­cious de­sire to de­vour the en­tire in­ter­net. They do not obey ro­bots.txt, and of­ten hide their crawlers be­hind res­i­den­tial prox­ies. If they can’t source train­ing data eth­i­cally, then I see ab­solutely no rea­son why any web­site op­er­a­tor should make it easy for them to steal it.
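The author doesn't spell out their setup, but the basic honeypot pattern behind junk links is straightforward: disallow a path in robots.txt, link to it invisibly, and treat any client that fetches it as a bot ignoring robots.txt. A minimal sketch (the paths, word list, and wording are mine):

```python
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["synergy", "quantum", "artisanal", "paradigm", "moist", "blockchain"]

def garbage(n_words=500, seed=None):
    """Cheap, endless nonsense for misbehaving crawlers to ingest."""
    rng = random.Random(seed)
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

def respond(path, client="?"):
    """Route one GET request. Fetching a path disallowed in robots.txt
    marks the client as a bot that ignores robots.txt."""
    if path == "/robots.txt":
        return "User-agent: *\nDisallow: /trap/\n"
    if path.startswith("/trap/"):
        print(f"bad bot: {client} ignored robots.txt")
        return garbage()
    # Real pages carry a link humans never see but crawlers still follow.
    return '<p>Hello, humans.</p><a href="/trap/a" style="display:none">more</a>'

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = respond(self.path, self.client_address[0])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body.encode())

# To serve: HTTPServer(("", 8000), TrapHandler).serve_forever()
```

The logged addresses can feed a block list, or the trap pages can simply keep the crawler busy chewing nonsense.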

Caution: I’m mess­ing with au­to­mated vis­i­tors in plain sight as an ex­per­i­ment. 🤭 To avoid false pos­i­tives, hu­man vis­i­tors are en­cour­aged to ig­nore the link in this box.

Someone Figured Out How To Poison AI Video Summarizers

Thanks to r/​Poi­son­Foun­tain, I learned that YouTube has no .ass. I could try to ex­plain what that means, but the video is hi­lar­i­ous and well worth a watch, so I’ll leave it up to @f4mi.

Sadly, it looks like the poi­son­ing tech­nique used by the cre­ator in this video no longer works; YouTube pre­sum­ably fixed the tran­script loop­hole she was ex­ploit­ing here. I plugged a few of her video URLs into a few dif­fer­ent video sum­ma­riz­ers, and they all failed to tell me any­thing that was­n’t ac­tu­ally in the videos.

Still, it’s great to see peo­ple try­ing and suc­ceed­ing at fuck­ing with the slop ma­chines — even if that suc­cess is only tem­po­rary.

All over Reddit and other so­cial me­dia plat­forms, I’m in­creas­ingly see­ing stuff like this:

I mean, sure, it’s lit­er­ally mis­in­for­ma­tion and you could in­deed ar­gue that there’s al­ready enough mis­in­for­ma­tion on the in­ter­net as it is … but it’s im­por­tant to note here that bots, not peo­ple, are the tar­get au­di­ence of this mis­in­for­ma­tion.

I think most of us can un­der­stand from the con­text that Idris Elba did not ever play Raymond’s mother in an episode of Everybody Loves Raymond. Automated web scrap­ers, how­ever, will just see good hu­man-gen­er­ated data, which is what they want. They’re go­ing to mer­rily scrape that garbage from Reddit and send it back to OpenAI or whomever, who will then have to waste re­sources re­mov­ing it from their train­ing data sets.

This is­n’t ex­actly the mod­ern equiv­a­lent of an­gry tex­tile work­ers de­stroy­ing power looms, but (if you’ll for­give the pun) it’s cut from the same cloth. The dif­fer­ence here (I hope) is that if enough of us pol­lute pub­lic spaces with mis­in­for­ma­tion in­tended for bots, it might be enough to com­pel AI com­pa­nies to re­think the way they source train­ing data.

People hate what AI is do­ing to our world. They hate what it’s do­ing to our on­line com­mu­ni­ties, what it’s do­ing to our en­vi­ron­ment, what it’s do­ing to our el­e­men­tary schools and uni­ver­si­ties, what it’s do­ing to at-risk in­di­vid­u­als with men­tal health is­sues, what it’s do­ing (and may yet still do) to our liveli­hoods. While there are cer­tainly plenty of peo­ple out there who hap­pily con­sume and gen­er­ate mas­sive amounts of AI slop, they are — at least in my anec­do­tal ex­pe­ri­ence within my own so­cial cir­cles, both of­fline and on­line — dwarfed by peo­ple who de­test and want noth­ing to do with this tech­nol­ogy.

Hatred of a thing seldom leads anywhere good, as recent events demonstrate, but I do think that if people are able to translate what they’re feeling about AI into peaceful, legal acts of resistance, then we might actually stand to change the way Silicon Valley does things.


...

Read the original on stephvee.ca »

5 363 shares, 96 trendiness

Laws of Software Engineering

Conway’s Law

Organizations design systems that mirror their own communication structure.

Knuth’s Optimization Principle

Premature optimization is the root of all evil.

Hyrum’s Law

With a sufficient number of API users, all observable behaviors of your system will be depended on by somebody.

The Boy Scout Rule

Leave the code better than you found it.

YAGNI (You Aren’t Gonna Need It)

Don’t add func­tion­al­ity un­til it is nec­es­sary.

Brooks’s Law

Adding manpower to a late software project makes it later.

Gall’s Law

A complex system that works is invariably found to have evolved from a simple system that worked.

The Law of Leaky Abstractions

All non-trivial abstractions, to some degree, are leaky.

Tesler’s Law

Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated.

The CAP Theorem

A distributed system can guarantee only two of: consistency, availability, and partition tolerance.

The Second-System Effect

Small, successful systems tend to be followed by overengineered, bloated replacements.

The Fallacies of Distributed Computing

A set of eight false assumptions that new distributed system designers often make.

Zawinski’s Law

Every program attempts to expand until it can read mail.

Dunbar’s Number

There is a cognitive limit of about 150 stable relationships one person can maintain.

Price’s Law

The square root of the total number of participants does 50% of the work.

Putt’s Law

Those who understand technology don’t manage it, and those who manage it don’t understand it.

The Peter Principle

In a hierarchy, every employee tends to rise to their level of incompetence.

The Bus Factor

The minimum number of team members whose loss would put the project in serious trouble.

The Dilbert Principle

Companies tend to promote incompetent employees to management to limit the damage they can do.

Parkinson’s Law

Work expands to fill the time available for its completion.

The Ninety-Ninety Rule

The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.

Hofstadter’s Law

It always takes longer than you expect, even when you take into account Hofstadter’s Law.

Goodhart’s Law

When a measure becomes a target, it ceases to be a good measure.

Gilb’s Law

Anything you need to quantify can be measured in some way better than not measuring it.

Murphy’s Law

Anything that can go wrong will go wrong.

Postel’s Law (the Robustness Principle)

Be conservative in what you do, be liberal in what you accept from others.

Technical Debt is every­thing that slows us down when de­vel­op­ing soft­ware.

Linus’s Law

Given enough eyeballs, all bugs are shallow.

Kernighan’s Law

Debugging is twice as hard as writing the code in the first place.

The Testing Pyramid

A project should have many fast unit tests, fewer integration tests, and only a small number of UI tests.

The Pesticide Paradox

Repeatedly running the same tests becomes less effective over time.

Lehman’s Laws of Software Evolution

Software that reflects the real world must evolve, and that evolution has predictable limits.

Sturgeon’s Law

90% of everything is crap.

Amdahl’s Law

The speedup from parallelization is limited by the fraction of work that cannot be parallelized.

Gustafson’s Law

It is possible to achieve significant speedup in parallel processing by increasing the problem size.
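
Both speedup laws reduce to one-line formulas, so they are easy to check numerically. A quick sketch (standard textbook forms, not tied to any particular source here):

```python
def amdahl_speedup(parallel_fraction: float, n: int) -> float:
    """Amdahl: speedup on n workers is capped by the serial fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

def gustafson_speedup(serial_fraction: float, n: int) -> float:
    """Gustafson: scale the problem with n and the serial share shrinks."""
    return n - serial_fraction * (n - 1)
```

With a 90% parallel workload on 10 workers, Amdahl caps the speedup at roughly 5.3x, while Gustafson’s scaled-problem view gives 9.1x.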

Metcalfe’s Law

The value of a network is proportional to the square of the number of users.

DRY (Don’t Repeat Yourself)

Every piece of knowledge must have a single, unambiguous, authoritative representation.

KISS (Keep It Simple, Stupid)

Designs and systems should be as simple as possible.

The SOLID Principles

Five main guidelines that enhance software design, making code more maintainable and scalable.

The Law of Demeter

An object should only interact with its immediate friends, not strangers.

The Principle of Least Astonishment

Software and interfaces should behave in a way that least surprises users and other developers.

The Dunning-Kruger Effect

The less you know about something, the more confident you tend to be.

Hanlon’s Razor

Never attribute to malice that which is adequately explained by stupidity or carelessness.

Occam’s Razor

The simplest explanation is often the most accurate one.

The Sunk Cost Fallacy

Sticking with a choice because you’ve invested time or energy in it, even when walking away helps you.

The Map Is Not the Territory

Our rep­re­sen­ta­tions of re­al­ity are not the same as re­al­ity it­self.

Confirmation Bias

A tendency to favor information that supports our existing beliefs or ideas.

Amara’s Law

We tend to overestimate the effect of a technology in the short run and underestimate the impact in the long run.

The Lindy Effect

The longer something has been in use, the more likely it is to continue being used.

First Principles Thinking

Breaking a complex problem into its most basic blocks and then building up from there.

Inversion

Solving a problem by considering the opposite outcome and working backward from it.

The Pareto Principle

80% of the problems result from 20% of the causes.

Cunningham’s Law

The best way to get the correct answer on the Internet is not to ask a question, it’s to post the wrong answer.

...

Read the original on lawsofsoftwareengineering.com »

6 352 shares, 13 trendiness

Deezer says 44% of songs uploaded to its platform daily are AI-generated

Deezer an­nounced on Monday that AI-generated tracks now rep­re­sent 44% of all new mu­sic up­loaded to its plat­form. The com­pany said it’s re­ceiv­ing al­most 75,000 AI-generated tracks per day and more than two mil­lion per month.

The con­sump­tion of AI-generated mu­sic on the plat­form is still very low, at 1-3% of to­tal streams, and 85% of these streams are de­tected as fraud­u­lent and de­mon­e­tized by the com­pany.

The lat­est fig­ure from Deezer high­lights a con­tin­u­ous surge in AI-generated mu­sic up­loads to the plat­form. Deezer re­ported re­ceiv­ing around 60,000 AI tracks per day in January, up from 50,000 in November, 30,000 in September, and just 10,000 in January 2025, when it first launched its AI-music de­tec­tion tool.

Songs tagged as AI-generated on Deezer are au­to­mat­i­cally re­moved from al­go­rith­mic rec­om­men­da­tions and not in­cluded in ed­i­to­r­ial playlists. The com­pany an­nounced to­day that it will no longer store hi-res ver­sions of AI tracks.

The up­dated fig­ure comes as an AI-generated track topped the iTunes charts last week in the United States, United Kingdom, France, Canada, and New Zealand.

“AI-generated music is now far from a marginal phenomenon, and as daily deliveries keep increasing, we hope the whole music ecosystem will join us in taking action to help safeguard artists’ rights and promote transparency for fans,” said Deezer CEO Alexis Lanternier in a press release. “Thanks to our technology and the proactive measures we put in place more than a year ago, we have shown that it’s possible to reduce AI-related fraud and payment dilution in streaming to a minimum.”

Today’s announcement also follows a survey Deezer conducted last November, which found that 97% of participants couldn’t tell the difference between fully AI-generated music and human-made music.

The survey also found that 52% of respondents said 100% AI-generated songs shouldn’t be included in the main charts alongside human-made songs. Meanwhile, 80% said 100% AI-generated music should be clearly labeled for listeners.

Deezer started tag­ging AI tracks at the plat­form level in June 2025, be­com­ing the first stream­ing plat­form to do so. Over the course of 2025, Deezer tagged more than 13.4 mil­lion AI tracks on its plat­form.

In February, French stream­ing ser­vice Qobuz an­nounced plans to tag AI-generated con­tent on its plat­form. Other ma­jor stream­ing ser­vices, such as Spotify and Apple Music, take dif­fer­ent ap­proaches to AI-generated mu­sic, of­ten com­bin­ing the use of fil­ters to iden­tify low-qual­ity AI mu­sic with other trans­parency ef­forts left up to the dis­trib­u­tors.

...

Read the original on techcrunch.com »

7 317 shares, 12 trendiness

I'm never buying another Kindle, and neither should you


After a decade with Kindle, Amazon’s latest changes made it clear that ownership comes second to control.

I’ve carried a Kindle in my bag for over a decade. Through every hardware iteration, from the physical keyboard right up to the latest Paperwhite, a Kindle has been with me everywhere — be it on an airplane, a train ride, the doctor’s office, or my bedside. My all-time favorite ebook reader is, hands down, the Kindle Oasis. For years, I’ve defended the ecosystem because it was convenient and the screens were the gold standard for e-ink readers. But things have changed.

In 2026, the Kindle is­n’t re­ally about books for Amazon. It’s about the ecosys­tem around them.

Looking at the cur­rent state of my dig­i­tal li­brary in 2026, that long-stand­ing loy­alty to Amazon’s read­ers is no longer a thing. The re­cent an­nounce­ment that Amazon is sun­set­ting older hard­ware was the fi­nal straw, and it’s changed the way I look at Kindles. In fact, I’d go as far as say­ing that it’s a wake-up call for any­one who val­ues dig­i­tal own­er­ship. If the writ­ing was­n’t al­ready on the wall, for Amazon, the e-reader is clearly no longer a tool for read­ers; it is quite sim­ply a por­tal for a store­front. In a world where we are in­creas­ingly forced to rent our dig­i­tal lives through sub­scrip­tion ser­vices, our books should be the one place where own­er­ship still mat­ters. However, Amazon’s re­cent moves prove that own­er­ship is no longer a pri­or­ity for the brand, and that is why I am fi­nally walk­ing away from the Kindle for good. Here is why you should con­sider do­ing the same.

The end of the road for legacy hardware

If you’re not caught up on the latest in the Kindle world, here’s what you need to know. If you own a Kindle released before 2013, your device is effectively on death row. Amazon recently confirmed that starting May 20, these older models will lose all access to the Kindle Store. While you can technically keep reading books already on the device, the real kicker is the factory reset limitation built into the software. If you ever need to reset your device or try to register it to a new account after the deadline, it becomes a literal paperweight. As an archivist and fan of older Kindle hardware, this move is absolutely shocking.

A perfectly functional Kindle can become useless overnight. That should concern everyone.

If anything, the move is a sharp reminder that when you buy into the Kindle ecosystem, you are effectively renting access from Amazon. The company is using security updates as a justification to move users toward newer hardware, but the reality is that many of these devices are still perfectly functional for reading text. By cutting off the ability to re-register them, Amazon is creating a massive wave of e-waste and forcing an upgrade cycle that many users simply do not want or need.

There’s the staggering environmental cost of the move, of course. But what concerns me more is the fact that most of these Kindles have perfectly functional e-ink screens and batteries that could last years of light reading. Instead of providing a path for long-term support or open-sourcing the legacy software, Amazon is choosing the landfill. And I’m not comfortable with that. Not from a company named after a literal rainforest.

Contrast this with the approach taken by Kobo. Amazon’s biggest rival in the e-reader space has formed an official partnership with iFixit to provide repair kits and guides for its latest models. The Kobo Libra Colour and Clara are designed to be opened and repaired. When you buy a Kindle, you are buying a disposable product with a predetermined shelf life. Meanwhile, when you buy a Kobo, you are buying a tool that can be maintained for a decade or more.

For a company that practically invented the modern e-reader, Amazon has become remarkably lazy with its software. If you look at a Kindle from 2018 and a Kindle from 2026, the user interface is nearly identical. We are still dealing with a home screen that prioritizes advertisements and promoted recommendations over your actual library. Navigating a large collection of books remains a chore, with sluggish animations and a lack of robust folder management that has been a standard feature on rival devices for years.

In 2026, the Kindle UI keeps shifting its focus away from your library and toward the storefront. The latest updates make it harder to find your own sideloaded books while keeping Kindle Unlimited recommendations front and center. Look, I get it; Amazon’s goal was always to subsidize hardware costs by making money on books. But it has reached a point where Amazon has effectively turned your device into a billboard. You are paying for the privilege of being marketed to every time you wake up your device, unless you pay up.

Between forced obsolescence and AI-forward features, this isn’t the reading experience I paid for.

Amazon’s 2026 roadmap is also heavily focused on AI reading assistants and cloud-based summaries. This is essentially a data-mining operation. Amazon is not just tracking what you buy; it is tracking how you read. It knows how fast you turn pages, which sections you skip, and exactly what you highlight to feed its large language models. Yes, you can put your Kindle in airplane mode, but it doesn’t change the facts about the direction the company is taking.

This level of telemetry is invasive for a device that is supposed to be a private reading experience. Nor did I ever sign up for it. Competitors like Kobo offer an offline-first experience that does not require a constant heartbeat to a central server to function as the default. Elsewhere, on a Boox device, you have total control over which apps can access the internet. With Kindle, it increasingly looks like the privacy trade-off is the hidden cost of the hardware, and I’m not comfortable with it.

There is better hardware and more open ecosystems out there

The fact of the matter is that the Kindle is no longer your only, or best, option. There are plenty of alternatives available if you want a dedicated e-reader that respects the idea of ownership. Kobo is the logical next step. Devices like the Kobo Libra Colour offer hardware that is often superior, or at the very least equivalent, to the Kindle Paperwhite at similar price points. The standout feature is native OverDrive and Libby integration. On a Kobo, you can browse, borrow, and return library books directly on the device without ever needing to touch a phone or a computer, provided you are in a supported country.

Kobo also uses the industry-standard ePub format. This means you are not locked into one store. You can buy books from Google Play, Kobo, or various independent bookstores and simply drag and drop them onto the device via USB. Kobo devices also feature much better typography settings. For those who prefer physical buttons, Kobo has kept them as a standard feature on its mid-range devices, something the Amazon Kindle appears to be allergic to.

One of the biggest reasons to stick with the Kindle was the overall experience it offered. Ironically, Kindle’s experience advantages are no longer really a thing. If you really want the ultimate no-compromise experience, Onyx Boox has been steadily changing the game. Devices like the Boox Palma 2 or the Go 10.3 are not just e-readers. Instead, these devices are e-ink tablets running a full version of Android that dramatically open up opportunities for customization.

In my opinion, this should be the top option for anyone who wants to leave Kindle hardware but keep their Kindle books. Because these devices have the Google Play Store, you can simply install the Kindle app. You get the benefits of the Amazon bookstore and your existing library, but you get to use it on hardware that is faster and better designed.

Using the Kindle app on a Boox device actually provides a better experience than using a Kindle. You get smoother scrolling and the ability to use third-party fonts without any restrictions. Plus, you can run other apps like Spotify for background music or Notion and Goodreads for book tracking. You are no longer limited to what Amazon thinks you should be doing with your device. Instead, you are in full control of the software experience.

The alternatives have caught up and, in some cases, surpassed Kindle.

Another area where Amazon used to lead was display quality, but that gap has closed. The newest Kobo and Boox devices use the latest E Ink Carta 1300 panels. These panels offer significantly better contrast and faster refresh rates than the older Carta 1200 found in most Kindles. This means virtually non-existent ghosting and text that looks perfectly crisp.

Having used a range of Boox hardware, I can say the Boox Go 10.3 is a particularly impressive piece of hardware. Between the high-resolution screen and a panel that sits closer to the surface, you get a remarkably paper-like experience. Plus, the stylus integration goes above and beyond what you’ll find on equivalent Kindle hardware. If you do any amount of note-taking, the Scribe feels like a toy compared to the much more feature-packed Boox tablets. As I mentioned earlier, the Kindle really isn’t the epitome of a quality reading experience anymore.

The biggest fear people have when leaving behind the Kindle is that they will lose access to books. This is a myth. While Amazon does have some exclusive self-published titles, the vast majority of mainstream books are available on every platform. Kobo, Google Play Books, and Apple Books all have catalogs that rival Amazon’s in size. In many cases, you can actually find better deals on these platforms.

Even Amazon seems to be acknowledging that it can’t take its audience for granted. Starting in January 2026, Amazon began allowing users to download versions of select ePub and PDF files free of DRM (Digital Rights Management) directly from their management page. This only applies to books where the publisher has opted out of DRM, but it is a massive shift. It proves that even Amazon knows proprietary formats are becoming a liability in a market that is moving toward open standards.

Digital ownership only exists if you can take your library with you.

For the books you al­ready own that still have DRM, you do not have to leave them be­hind. There are ways to man­age your dig­i­tal li­brary us­ing tools like Calibre and a few plu­g­ins that let you im­port your Kindle pur­chases into a cen­tral data­base. This al­lows you to con­vert them to ePub and move them to any de­vice you choose.

The goal is­n’t just con­ve­nience. Digital preser­va­tion is ex­tremely im­por­tant to me and mil­lions of other users. If Amazon de­cides to delete a book from its servers or shut down your ac­count, you still have the file you paid for. Having a lo­cal, DRM-free backup of your li­brary is the only way to en­sure that your col­lec­tion sur­vives the whims of a multi-tril­lion-dol­lar cor­po­ra­tion. Once your books are in Calibre, you can use pow­er­ful tools to fix meta­data, add high-res­o­lu­tion cov­ers, and read them on what­ever de­vice you want.
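
For the DRM-free files in such a backup, the conversion step can be scripted around Calibre’s real `ebook-convert` command-line tool. A minimal sketch; the folder layout and the set of Kindle formats are assumptions, and this only applies to files that are already free of DRM:

```python
from pathlib import Path

# Common DRM-free Kindle-era formats worth converting to ePub (assumption).
KINDLE_FORMATS = {".azw", ".azw3", ".mobi"}

def conversion_plan(library_dir: str) -> list[list[str]]:
    """For each Kindle-format file in the folder, build the Calibre
    ebook-convert command that would produce an ePub next to it."""
    cmds = []
    for f in sorted(Path(library_dir).iterdir()):
        if f.suffix.lower() in KINDLE_FORMATS:
            cmds.append(["ebook-convert", str(f), str(f.with_suffix(".epub"))])
    return cmds
```

Each returned command can then be run with `subprocess.run`, leaving both the original file and an ePub copy in the library.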

The e-reader market in 2026 is the most competitive it has ever been. We have reached a point where Amazon’s ecosystem no longer offers enough unique value to justify its restrictions. Combine that with Amazon’s move to brick older hardware, and to me it is the final sign that the customer is not the company’s priority. Between the seamless library integration of Kobo and the raw power of Android-based readers from Boox, there is no reason to buy another Kindle.

There’s no reason to stay locked in when better, more open options exist.

If you want the best read­ing ex­pe­ri­ence, buy a Kobo. If you want a pow­er­ful e-ink tablet that does every­thing, buy a Boox. If you want to ac­tu­ally own the books you pay for, use Calibre. But un­til Amazon turns the ship around with its dig­i­tal and hard­ware poli­cies, I do not plan to give Amazon an­other cent for a de­vice that it can take away from me with a sin­gle server-side up­date. My li­brary de­serves bet­ter than that. And so does yours.


...

Read the original on www.androidauthority.com »

8 315 shares, 10 trendiness

...

Read the original on vivianvoss.net »

9 292 shares, 13 trendiness

Kimi Vendor Verifier

Alongside the re­lease of the Kimi K2.6 model, we are open-sourc­ing the Kimi Vendor Verifier (KVV) pro­ject, de­signed to help users of open-source mod­els ver­ify the ac­cu­racy of their in­fer­ence im­ple­men­ta­tions.

Not as an af­ter­thought, but be­cause we learned the hard way that open-sourc­ing a model is only half the bat­tle. The other half is en­sur­ing it runs cor­rectly every­where else.

You can click here to ac­cess the Kimi API K2VV eval­u­a­tion re­sults for cal­cu­lat­ing the F1 score.

Since the release of K2 Thinking, we have received frequent feedback from the community regarding anomalies in benchmark scores. Our investigation confirmed that a significant portion of these cases stemmed from the misuse of decoding parameters. To mitigate this immediately, we built our first line of defense at the API level: enforcing Temperature=1.0 and TopP=0.95 in Thinking mode, with mandatory validation that thinking content is correctly passed back.
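
The server-side constraint described above amounts to something like the following sketch. The field names (`reasoning_content`, the request dict shape) are assumptions for illustration; the real enforcement lives inside the Kimi API:

```python
# Decoding parameters pinned in Thinking mode (values from the text above).
THINKING_DECODING = {"temperature": 1.0, "top_p": 0.95}

def enforce_thinking_request(request: dict) -> dict:
    """Return a copy of an API request with Thinking-mode decoding
    parameters overridden, failing fast if a prior assistant turn
    dropped its thinking content instead of passing it back."""
    fixed = {**request, **THINKING_DECODING}
    for msg in fixed.get("messages", []):
        if msg.get("role") == "assistant" and not msg.get("reasoning_content"):
            raise ValueError("assistant turn is missing thinking content")
    return fixed
```

Clamping on the server rather than trusting client defaults is what turns a documentation recommendation into a guarantee.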

However, more subtle anomalies soon triggered our alarm. In a specific evaluation on LiveBenchmark, we observed a stark contrast between third-party APIs and the official API. After extensive testing across various infrastructure providers, we found this difference is widespread.

This ex­posed a deeper prob­lem in the open-source model ecosys­tem: The more open the weights are, and the more di­verse the de­ploy­ment chan­nels be­come, the less con­trol­lable the qual­ity be­comes.

If users cannot distinguish between “model capability defects” and “engineering implementation deviations,” trust in the open-source ecosystem will inevitably collapse.

Pre-Verification: Validates that API parameter constraints (temperature, top_p, etc.) are correctly enforced. All tests must pass before proceeding to benchmark evaluation.

K2VV ToolCall: Measures trigger consistency (F1) and JSON Schema accuracy. Tool errors compound in agents, so we catch them early.

SWE-Bench: Full agentic coding test. (Not open-sourced due to its dependency on a sandbox.)
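
Trigger consistency here is just an F1 score over the binary decision “did the model emit a tool call on this prompt.” A minimal sketch of the idea, not KVV’s actual scoring code:

```python
def trigger_f1(reference: list[bool], candidate: list[bool]) -> float:
    """F1 of tool-call triggering. Each list holds one bool per prompt:
    True if a tool call was emitted by the reference / candidate stack."""
    tp = sum(r and c for r, c in zip(reference, candidate))
    fp = sum((not r) and c for r, c in zip(reference, candidate))
    fn = sum(r and (not c) for r, c in zip(reference, candidate))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A vendor that over-triggers loses precision and one that under-triggers loses recall; F1 penalizes both deviations from the official implementation.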

Upstream Fix: We em­bed with vLLM/​SGLang/​KTrans­form­ers com­mu­ni­ties to fix root causes, not just de­tect symp­toms.

Pre-Release Validation: Rather than wait­ing for post-de­ploy­ment com­plaints, we pro­vide early ac­cess to test mod­els. This lets in­fra­struc­ture providers val­i­date their stacks be­fore users en­counter is­sues.

Continuous Benchmarking: We will main­tain a pub­lic leader­board of ven­dor re­sults. This trans­parency en­cour­ages ven­dors to pri­or­i­tize ac­cu­racy.

We completed full evaluation workflow validation on two NVIDIA H20 8-GPU servers, with sequential execution taking approximately 15 hours. To improve evaluation efficiency, the scripts have been optimized for long-running inference scenarios, including streaming inference, automatic retry, and checkpoint resumption mechanisms.

Weights are open. The knowl­edge to run them cor­rectly must be too.

We are ex­pand­ing ven­dor cov­er­age and seek­ing lighter agen­tic tests. Contact Us: [email protected]

...

Read the original on www.kimi.com »

10 288 shares, 15 trendiness

Leaked Deck Reveals StackAdapt’s Playbook for ChatGPT Ads

The DSP is of­fer­ing ad place­ments dri­ven by prompt rel­e­vance and dan­gling CPMs rang­ing from $15 to $60, with a $50,000 min­i­mum spend for the pi­lot.


StackAdapt is quietly courting advertisers to test ads inside ChatGPT. The independent demand-side platform is dangling CPMs as low as $15 alongside discounted platform and management fees. The company is framing the push as early access to a new “discovery layer,” one that captures people in the middle of researching and comparing products on ChatGPT. According to a pitch deck titled “OpenAI x StackAdapt Limited Pilot Program,” shared with select buyers on March 27 and reviewed by ADWEEK, the company is positioning the offering as an early-stage test inside a still-developing ad system.

“StackAdapt has partnered with OpenAI to enable advertising within ChatGPT, one of the fastest growing consumer platforms in the world,” the deck reads.

Trishla Ostwal

Trishla is an Adweek staff re­porter cov­er­ing AI and tech.


...

Read the original on www.adweek.com »


Visit pancik.com for more.