10 interesting stories served every morning and every evening.




1 1,302 shares, 137 trendiness

Tim Cook to become Apple Executive Chairman; John Ternus to become Apple CEO


CUPERTINO, CALIFORNIA - Apple announced that Tim Cook will become executive chairman of Apple's board of directors and John Ternus, senior vice president of Hardware Engineering, will become Apple's next chief executive officer, effective September 1, 2026. The transition, which was approved unanimously by the Board of Directors, follows a thoughtful, long-term succession planning process.

Cook will continue in his role as CEO through the summer as he works closely with Ternus on a smooth transition. As executive chairman, Cook will assist with certain aspects of the company, including engaging with policymakers around the world.

"It has been the greatest privilege of my life to be the CEO of Apple and to have been trusted to lead such an extraordinary company. I love Apple with all of my being, and I am so grateful to have had the opportunity to work with a team of such ingenious, innovative, creative, and deeply caring people who have been unwavering in their dedication to enriching the lives of our customers and creating the best products and services in the world," said Cook. "John Ternus has the mind of an engineer, the soul of an innovator, and the heart to lead with integrity and with honor. He is a visionary whose contributions to Apple over 25 years are already too numerous to count, and he is without question the right person to lead Apple into the future. I could not be more confident in his abilities and his character, and I look forward to working closely with him on this transition and in my new role as executive chairman."

"I am profoundly grateful for this opportunity to carry Apple's mission forward," said Ternus. "Having spent almost my entire career at Apple, I have been lucky to have worked under Steve Jobs and to have had Tim Cook as my mentor. It has been a privilege to help shape the products and experiences that have changed so much of how we interact with the world and with one another. I am filled with optimism about what we can achieve in the years to come, and I am so happy to know that the most talented people on earth are here at Apple, determined to be part of something bigger than any one of us. I am humbled to step into this role, and I promise to lead with the values and vision that have come to define this special place for half a century."

Arthur Levinson, who has been Apple's non-executive chairman for the past 15 years, will become its lead independent director on September 1, 2026. Ternus will join the board of directors, also effective September 1, 2026.

"Tim's unprecedented and outstanding leadership has transformed Apple into the world's best company. He's introduced groundbreaking products and services time and again, and his integrity and values are infused into everything Apple does," said Levinson. "On behalf of the entire board of directors, we are incredibly grateful for his countless contributions to Apple and the world, and we are thrilled he will now be executive chairman. We believe John is the best possible leader to succeed Tim, and as he transitions to CEO we know his love of Apple, his leadership, deep technical knowledge, and relentless focus on creating great products will help lead Apple to an extraordinary future."

"I want to thank Art for the incredible work he has done leading the board of directors for the past 15 years," said Cook. "I have always found his advice to be invaluable, and I appreciate his thoughtfulness and his unwavering dedication to the company. I am grateful he will serve as our lead independent director, and I look forward to working with him in my new role."

Tim Cook joined Apple in 1998. He became CEO in 2011 and has overseen the introduction of numerous products and services, including new categories like Apple Watch, AirPods, and Apple Vision Pro, and services ranging from iCloud and Apple Pay to Apple TV and Apple Music. He was also instrumental in expanding existing product lines. Under Cook's leadership, Apple has grown from a market capitalization of approximately $350 billion to $4 trillion, representing a more than 1,000% increase, and yearly revenue has nearly quadrupled, from $108 billion in fiscal year 2011 to more than $416 billion in fiscal year 2025. The company has expanded its global footprint substantially, particularly in emerging markets; it is now in more than 200 countries and territories. Apple operates over 500 retail stores and has more than doubled the number of countries in which its customers can visit an Apple Store. During his tenure, Apple has grown by more than 100,000 team members and increased its active installed base to more than 2.5 billion devices.

Apple Services has been a major focus area of Cook's, and during his tenure the category has grown to become a more than $100 billion business, the equivalent of a Fortune 40 company. Cook was also instrumental in creating the wearables category at Apple, which now includes the world's most popular watch and headphones, and which has served as the foundation for Apple's remarkable impact on the health and safety of its users. Under Cook's leadership, Apple also transitioned to Apple-designed silicon, enabling the company to own more of its primary technology and deliver industry-leading gains in power efficiency and performance that directly benefit users across its products.

Cook has made Apple's core values even more central to the company's decision making and product development. Under his leadership, the company reduced its carbon footprint to more than 60 percent below 2015 levels during a period in which revenue nearly doubled. Cook, who has long advocated for privacy as a fundamental human right, has made privacy and security imperative at Apple, setting a standard for user protection that continues to set the company apart from the rest of the technology industry. He has also pushed for continued innovation in the accessibility space, believing that Apple products should be made for everyone. And he has made central to his leadership the notion that Apple should be a place where everyone can feel they belong and where everyone is treated with dignity and respect.

Ternus joined Apple's product design team in 2001 and became a vice president of Hardware Engineering in 2013. He joined the executive team in 2021 as senior vice president of Hardware Engineering. Throughout his tenure at Apple, Ternus has overseen hardware engineering work on a variety of groundbreaking products across every category. He was instrumental in the introduction of multiple new product lines, including iPad and AirPods, as well as many generations of products across iPhone, Mac, and Apple Watch.

Ternus's work on Mac has helped the category become more powerful and more popular globally than at any time in its 40-year history. That includes the recent introduction of MacBook Neo, an all-new laptop that makes the Mac experience even more accessible to more people around the world. This past fall, his team's efforts were on full display with the introduction of a redefined iPhone lineup, including the incredibly powerful iPhone 17 Pro and Pro Max, the radically thin and durable iPhone Air, and the iPhone 17, which has been an incredible upgrade for users. Under his leadership, his team also drove advancements in AirPods to make them the world's best in-ear headphones, with unprecedented active noise cancellation, as well as the capability to become an all-in-one hearing health system that can serve as over-the-counter hearing aids.

Ternus led much of the company's focus in areas like reliability and durability, introducing new techniques that have made Apple products remarkably resilient. He has also driven much of Apple's innovation in materials and hardware design that has reduced the carbon footprint of its products, including the creation of a new, recycled aluminum compound that has been introduced across multiple product lines, the use of 3D-printed titanium in Apple Watch Ultra 3, and innovations in repairability that have increased the lifespans of several Apple products.

Prior to Apple, Ternus worked as a mechanical engineer at Virtual Research Systems. He holds a bachelor's degree in Mechanical Engineering from the University of Pennsylvania.

This press release contains forward-looking statements, within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements include without limitation those about Apple's executive succession plans. These statements involve risks and uncertainties, and actual results may differ materially from any future results expressed or implied by the forward-looking statements. More information regarding potential risks and other factors that could affect the company is included in Apple's filings with the SEC, including in the "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations" sections of Apple's most recently filed periodic reports on Form 10-K and Form 10-Q and subsequent filings. Apple assumes no obligation to update any forward-looking statements or information, which speak only as of the date they are made.

About Apple

Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, AirPods, Apple Watch, and Apple Vision Pro. Apple's six software platforms — iOS, iPadOS, macOS, watchOS, visionOS, and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay, iCloud, and Apple TV+. Apple's more than 150,000 employees are dedicated to making the best products on earth and to leaving the world better than we found it.

© 2026 Apple Inc. All rights reserved. Apple, the Apple logo, Apple Watch, AirPods, Apple Vision Pro, iCloud, Apple Pay, Apple TV, Apple Music, Apple Store, iPad, iPhone, Mac, MacBook Neo, and iPhone Air are trademarks of Apple. Other company and product names may be trademarks of their respective owners.

...

Read the original on www.apple.com »

2 746 shares, 40 trendiness

Inside GitHub's Fake Star Economy

Six million fake stars, $0.06 per click, and a VC funding pipeline that treats GitHub popularity as proof of traction. We ran our own analysis on 20 repos and found the fingerprints.

A GitHub star costs $0.06 at the low end. A seed round unlocks $1 million to $10 million. The math is obvious, and thousands of repositories are exploiting it.

This investigation maps the full ecosystem: from the peer-reviewed research quantifying the problem, to the marketplaces selling stars openly, to the venture capital pipeline that converts star counts into funding decisions. We ran our own analysis on 20 repositories using the GitHub API, sampling thousands of stargazer profiles to independently verify which projects show fingerprints of manipulation - and which don't.

The picture that emerges is a mature, professionalized shadow economy operating in plain sight.

The definitive account comes from a peer-reviewed study presented at ICSE 2026 by researchers at Carnegie Mellon University, North Carolina State University, and Socket. Their tool, StarScout, analyzed 20 terabytes of GitHub metadata - 6.7 billion events and 326 million stars from 2019 to 2024 - and identified approximately 6 million suspected fake stars distributed across 18,617 repositories by roughly 301,000 accounts.

The problem accelerated dramatically in 2024. By July, 16.66% of all repositories with 50 or more stars were involved in fake star campaigns - up from near zero before 2022. The researchers' detection proved accurate: 90.42% of flagged repositories and 57.07% of flagged accounts had been deleted as of January 2025, confirming GitHub itself recognized these as illegitimate.

AI and LLM repositories emerged as the largest non-malicious category of fake-star recipients, ahead of blockchain/cryptocurrency projects in absolute volume at 177,000 fake stars. The study notes that many "are academic paper repositories or LLM-related startup products." Critically, 78 repositories with detected fake star campaigns appeared on GitHub Trending, proving that purchased stars successfully game the platform's discovery algorithm.

Earlier foundational work includes Dagster's March 2023 investigation, where engineers purchased stars from two vendors to study the phenomenon. They found services via a basic Google search. A premium vendor - GitHub24, a registered German company (Moller und Ringauf GbR) - charged EUR 0.85 per star and delivered reliably, with all 100 stars persisting after one month. A budget service (Baddhi Shop) sold 1,000 stars for $64, though only 75% survived.

The star-selling ecosystem spans dedicated websites, freelance platforms, exchange networks, and underground channels. At least a dozen active websites sell GitHub stars directly, including SocialPlug.io, Buy.fans, Boost-Like.store, GitHubPromoter.com, Followdeh.com, and Vurike.com.

On Fiverr, 24 active gigs sell GitHub promotion, with packages from $5 for basic stars and forks to $25+ for "organic promotion." Many use obfuscated language to evade platform filters. Star exchange platforms like GithubStarMate.com and SafeStarExchange.com - both live and operational - enable free mutual starring through credit-based systems.

The infrastructure extends beyond stars. At least seven open-source tools on GitHub (fake-git-history, commit-bot, Commiter, and others) exist specifically to fabricate GitHub contribution graphs. Pre-built GitHub profiles with five-year commit histories and Arctic Code Vault Contributor badges sell for approximately $5,000 on Telegram.

Some vendors offer replacement guarantees - Followdeh advertises 30-day coverage, and premium services promise "non-drop" stars that survive GitHub's detection systems. SocialPlug claims 3.1 million stars delivered across 53,000+ clients and offers a formal API for programmatic purchasing.

A Tsinghua University study (ACSAC 2020) documented Chinese QQ and WeChat promotion groups with 1,020+ members processing roughly 20 repos per day, generating an estimated $3.4 to $4.4 million annually in promoter profits.

To move beyond reported statistics, we built a GitHub API analysis tool and ran it against 20 repositories: projects flagged by StarScout, fast-growing AI repos from the Runa Capital ROSS Index, and known organic baselines. For each repo, we sampled 150 stargazer profiles and measured account age, public repos, followers, and bio presence.
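A minimal sketch of this sampling approach (not our actual tool): the endpoints are GitHub's public REST API, but the sample size, "ghost" definition, and single-page sampling below are illustrative simplifications.

```python
# Sketch: sample stargazer profiles for a repo and estimate the "ghost"
# fraction. Endpoints are GitHub's public REST API; thresholds and the
# one-page sample are illustrative, not the article's exact methodology.
import json
import random
import urllib.request

API = "https://api.github.com"

def get_json(url, token=None):
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_ghost(profile):
    """A 'ghost' stargazer: zero public repos, zero followers, no bio."""
    return (profile.get("public_repos", 0) == 0
            and profile.get("followers", 0) == 0
            and not profile.get("bio"))

def ghost_fraction(owner, repo, n=150, token=None):
    # First page only; a fuller version would walk pagination and
    # sample uniformly across all stargazer pages.
    stars = get_json(f"{API}/repos/{owner}/{repo}/stargazers?per_page=100", token)
    sample = random.sample(stars, min(n, len(stars)))
    profiles = [get_json(f"{API}/users/{u['login']}", token) for u in sample]
    return sum(is_ghost(p) for p in profiles) / len(profiles)
```

On an organic project this fraction sits near the ~1% baseline described below; unauthenticated requests hit GitHub's rate limit quickly, so a token is effectively required at any real sample size.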

The fingerprints of manipulation are unmistakable once you know what to look for.

Organic repositories are starred by developers who have been on GitHub for years, maintain their own projects, and follow other users. Ghost accounts - zero repos, zero followers, no bio - make up about 1% of a healthy project's stargazer base.

These repos share a distinctive fingerprint. The accounts aren't obviously new - median ages of 1,000+ days - so they pass simple "young account" filters. But they're empty: a third have zero repos, half to four-fifths have zero followers, and a quarter are complete ghosts. These are aged accounts purchased or farmed specifically for star campaigns.

The fork-to-star ratio is the strongest signal. Flask has 235 forks per 1,000 stars. Shardeum has 22. FreeDomain has 17. When nobody is forking a 157,000-star repository, nobody is using it. The watcher-to-star ratio tells the same story: FreeDomain's 0.001 means that for every 1,000 people who starred the repo, just one actually watches it for updates.

FreeDomain is worth isolating: 157,000 stars, but only 168 watchers and 2,676 forks. That's a watcher-to-star ratio 26x lower than Flask's. 81.3% of sampled stargazers have zero followers. This is a repository where almost nobody who starred it has any visible presence on GitHub.

Union Labs is the most consequential case. It was ranked #1 on Runa Capital's ROSS Index for Q2 2025 - a widely cited VC industry report identifying the "hottest open-source startups" - with 54.2x star growth and 74,300 stars. Our analysis found 32.7% zero-repo accounts, 52% zero-follower accounts, and a fork-to-star ratio of 0.052. The StarScout analysis flagged it with 47.4% suspected fake stars. An influential investment-sourcing report that VCs rely on was topped by a project with nearly half its stars suspected as artificial.

RagaAI-Catalyst and openai-fm show clear manipulation signals. RagaAI has 76.2% zero-follower accounts and 28% ghosts - nearly identical to the blockchain pattern. openai-fm is the most extreme case in our dataset: 66% suspicious accounts, 36% ghosts, and a median account age of just 116 days. Two-thirds of its stargazers are less than a year old with virtually no GitHub activity. (The StarScout analysis notes this is likely third-party bots, not OpenAI itself.)

Langflow - flagged by StarScout at 47.9% fake - showed clean metrics in our profile sample, with a median age of 2,859 days and low ghost rates. This likely reflects improved account quality since the StarScout scan. The 0.060 fork-to-star ratio is still notably low - roughly a quarter of Flask's - suggesting less genuine adoption relative to star count.

For comparison, NousResearch's hermes-agent looks relatively organic: median age 8 years, 6% ghosts, fork-to-star ratio of 0.133. Despite Reddit accusations of astroturfing, the stargazer population is mostly real developers. The project's crypto-adjacent audience includes more casual GitHub users, which explains slightly elevated zero-follower rates, but the fundamental engagement pattern is legitimate.

The connection between GitHub star counts and startup funding is not speculative - it is explicitly documented by the investors themselves.

Jordan Segall, Partner at Redpoint Ventures, published an analysis of 80 developer tool companies showing that the median GitHub star count at seed financing was 2,850 and at Series A was 4,980. He confirmed: "Many VCs write internal scraping programs to identify fast growing github projects for sourcing, and the most common metric they look toward is stars."

Those numbers set an implicit target. For $85 to $285 in budget stars, a startup can manufacture the 2,850-star seed median. For $990 to $4,500, it can reach Series A territory. Against typical seed rounds of $1-10 million, the ROI ranges from 3,500x to 117,000x.
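The arithmetic behind that ROI range can be checked directly. Pairing the cheapest campaign with the largest round (and vice versa) is an assumption on our part, but it reproduces the quoted endpoints:

```python
# Back-of-envelope check of the 3,500x-117,000x ROI range quoted above.
low_cost, high_cost = 85, 285                   # USD for ~2,850 budget stars
low_round, high_round = 1_000_000, 10_000_000   # typical seed round range

best_roi = high_round / low_cost    # largest round, cheapest campaign
worst_roi = low_round / high_cost   # smallest round, priciest campaign
print(f"{worst_roi:,.0f}x to {best_roi:,.0f}x")  # → 3,509x to 117,647x
```

Rounded to the article's precision, that is the 3,500x-117,000x range.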

Runa Capital publishes the ROSS (Runa Open Source Startup) Index quarterly, ranking the 20 fastest-growing open-source startups by GitHub star growth rate. Per TechCrunch, 68% of ROSS Index startups that attracted investment did so at seed stage, with $169 million raised across tracked rounds. GitHub itself, through its GitHub Fund partnership with M12 (Microsoft's VC arm), commits $10 million annually to invest in 8-10 open-source companies at pre-seed/seed stages, based partly on platform traction.

* Lovable (formerly GPT Engineer): 50,000+ stars, $7.5M pre-seed, $200M Series A at a $1.8 billion valuation with 45 employees

Dagster's Fraser Marlow, who led the fake star investigation, admitted directly: "In the run-up to the fundraising, I spent a fair amount of time preoccupied with GitHub stars." An academic paper in Organization Science provided rigorous statistical evidence that GitHub engagement correlates with startup funding outcomes - startups active on GitHub are 15 percentage points more likely to have raised a financing round.

The incentive loop is self-reinforcing: VCs use stars as sourcing signals, so startups manipulate stars, so VCs see inflated traction, so more VCs adopt star-tracking, so more startups manipulate. Redpoint's own published benchmarks give startups an exact target to buy toward.

Our analysis revealed the fork-to-star ratio as the strongest simple heuristic for identifying potential manipulation. The logic is straightforward: a star costs nothing and conveys no commitment. A fork means someone downloaded the code to use or modify it.

Any repository with a fork-to-star ratio below 0.05 and more than 10,000 stars warrants scrutiny. The watcher-to-star ratio is even more telling: organic projects average 0.005 to 0.030; FreeDomain registers 0.001.

These ratios aren't perfect - educational repos and curated lists naturally have low fork rates. But as a first-pass filter, they catch the most egregious cases that raw star counts miss entirely.
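The filter reduces to a few lines. The fork-ratio threshold and star floor are the ones above; the watcher-ratio floor and the "healthy" example numbers are illustrative assumptions:

```python
# First-pass manipulation filter from the ratios discussed above.
# Thresholds: fork-to-star < 0.05 at 10,000+ stars (from the text);
# the 0.005 watcher-ratio floor is the low end of the organic range.
def looks_suspicious(stars, forks, watchers,
                     min_stars=10_000,
                     fork_ratio_floor=0.05,
                     watcher_ratio_floor=0.005):
    """Flag repos whose engagement is implausibly thin for their star count."""
    if stars < min_stars:
        return False  # small repos: too noisy to judge by ratios alone
    return (forks / stars < fork_ratio_floor
            or watchers / stars < watcher_ratio_floor)

# FreeDomain's quoted numbers: 157,000 stars, 2,676 forks, 168 watchers.
print(looks_suspicious(157_000, 2_676, 168))   # → True
# A hypothetical healthy repo at a Flask-like fork ratio:
print(looks_suspicious(20_000, 4_000, 300))    # → False
```

As the text notes, educational repos and curated lists will false-positive on the fork ratio, so this is a triage signal, not a verdict.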

The problem extends to every platform where popularity metrics influence trust.

npm downloads are trivially inflatable. Developer Andy Richardson demonstrated this by using a single AWS Lambda function (free tier) to push his package is-introspection-query to nearly 1 million downloads per week - surpassing legitimate packages like urql and mobx. Zero actual users. The CMU study found that of repos with fake star campaigns, only 1.23% appeared in package registries, but of those 738 packages, 70.46% had zero dependent projects.

VS Code Marketplace extensions are similarly vulnerable. Researchers demonstrated 1,000+ installs of a fake extension in 48 hours. AquaSec found 1,283 extensions with known malicious dependencies totaling 229 million installs.

X/Twitter promotion amplifies artificial GitHub virality through engagement pods - private groups where members agree to like, repost, and comment on each other's content. Growth Terminal sells this as a product feature. NBC News and Clemson University researchers identified a network of 686 X accounts that posted more than 130,000 times using LLM-generated content, some containing telltale artifacts like "Dolphin here!" from the uncensored Dolphin model they employed.

The Higgsfield AI case documents cross-platform astroturfing at industrial scale: over 100 confirmed spam posts across 60+ subreddits, combined with mass template DMs to content creators offering payment for promotion.

The FTC Consumer Review Rule, effective October 21, 2024, explicitly prohibits selling or buying "fake indicators of social media influence" generated by bots or fake accounts for commercial purposes. Penalties: up to $53,088 per violation. The FTC issued its first warning letters to 10 companies in December 2025. A GitHub star purchased to promote a commercial product fits this framework.

The SEC precedent is more direct. HeadSpin's CEO was charged with wire fraud (maximum 20 years) and securities fraud for inflating metrics to deceive investors out of $80 million. ComplYant's founder faced charges for claiming $250,000 in monthly revenue when actual revenue was $250.

The SEC's message: "Startup fundraisers cannot use the 'fake it until you make it' ethos to whitewash lying to investors."

If a startup buys fake GitHub stars to inflate perceived traction during a fundraising round, and investors rely on those metrics to deploy capital, the wire fraud framework applies: using electronic communications to misrepresent material facts for financial gain. No one has been charged specifically for fake GitHub stars yet. Given the CMU research documenting the practice at scale and the FTC rule explicitly covering fake social influence metrics, it may only be a matter of time.

GitHub's Acceptable Use Policies explicitly prohibit "inauthentic interactions, such as fake accounts and automated inauthentic activity," "rank abuse, such as automated starring or following," and "creation of or participation in secondary markets for the purpose of the proliferation of inauthentic activity." The policies even specifically prohibit starring incentivized by "cryptocurrency airdrops, tokens, credits, gifts or other give-aways."

Enforcement is reactive and asymmetric. GitHub removed 90.42% of repositories flagged by StarScout, but only 57.07% of the accounts that delivered those stars. The infrastructure for future campaigns largely remains intact. When Dagster published its investigation, fake star profiles were deleted within 48 hours - but only after public embarrassment, not proactive detection.

GitHub has never published an engineering blog post about its detection methods or enforcement statistics. No transparency report exists for star manipulation. The company's VP of Security Operations told Wired only that they "disabled user accounts in accordance with GitHub's Acceptable Use Policies," declining to elaborate - though that comment was specifically about the Stargazers Ghost Network malware operation, not vanity metric manipulation.

The CMU researchers recommended GitHub adopt a weighted popularity metric based on network centrality rather than raw star counts, a change that would structurally undermine the fake star economy. GitHub has not implemented it.

Bessemer Venture Partners calls stars "vanity metrics" and instead tracks unique monthly contributor activity - anyone who created an issue, comment, PR, or commit. Fewer than 5% of the top 10,000 projects ever exceeded 250 monthly contributors; only 2% sustained it across six months.

Jono Bacon at StateShift recommends five metrics that correlate with real adoption: package downloads, issue quality (production edge cases from real users), contributor retention (time to second PR), community discussion depth, and usage telemetry.

The fork-to-star ratio our analysis surfaced is the simplest first-pass filter. A healthy project has roughly 100-200 forks per 1,000 stars. Projects below 50 forks per 1,000 stars with high absolute counts deserve a closer look.

As one commenter put it: "You can fake a star count, but you can't fake a bug fix that saves someone's weekend."

First, the incentive loop. VCs use stars as sourcing signals. Startups manipulate stars. VCs see inflated traction. More VCs adopt star-tracking. More startups manipulate. Redpoint's published benchmarks - 2,850 at seed, 4,980 at Series A - effectively give startups a price list for how many stars to buy.

Second, the AI sector's specific vulnerability. The combination of extreme hype, crypto-adjacent funding models that reward token price over product quality, and a reviewer ecosystem on X/Twitter populated partly by fabricated personas creates a perfect environment for manufactured credibility. Our analysis confirmed this: the repos with the worst manipulation signals were overwhelmingly blockchain and crypto-adjacent AI projects.

Third, GitHub's enforcement asymmetry. Removing repos but leaving 57% of fake accounts intact preserves the labor force of the fake star economy while doing little to deter repeat offenses. Until GitHub implements structural changes - weighted popularity metrics, account-level reputation scoring, or transparent enforcement reporting - the gap between star counts and genuine developer adoption will continue to widen.

The star economy is a $50 problem with a $50 million consequence. Until the platforms, investors, and regulators catch up, the market will keep paying the $50.

...

Read the original on awesomeagents.ai »

3 590 shares, 43 trendiness

Advancing Open-Source Coding

We are open-sourcing our latest model, Kimi K2.6, featuring state-of-the-art coding, long-horizon execution, and agent swarm capabilities. Kimi K2.6 is now available via Kimi.com, the Kimi App, the API, and Kimi Code.

Kimi K2.6 shows strong improvements in long-horizon coding tasks, with reliable generalization across programming languages (e.g., Rust, Go, and Python) and tasks (e.g., front-end, DevOps, and performance optimization). On Kimi Code Bench, our internal coding benchmark covering diverse, complicated end-to-end tasks, Kimi K2.6 demonstrates significant improvements over Kimi K2.5.

Kimi K2.6 successfully downloaded and deployed the Qwen3.5-0.8B model locally on a Mac. By implementing and optimizing model inference in Zig - a highly niche programming language - it demonstrated exceptional out-of-distribution generalization. Across 4,000+ tool calls, over 12 hours of continuous execution, and 14 iterations, Kimi K2.6 dramatically improved throughput from ~15 to ~193 tokens/sec, ultimately achieving speeds ~20% faster than LM Studio.

Kimi K2.6 au­tonomously over­hauled ex­change-core, an 8-year-old open-source fi­nan­cial match­ing en­gine. Over a 13-hour ex­e­cu­tion, the model it­er­ated through 12 op­ti­miza­tion strate­gies, ini­ti­at­ing over 1,000 tool calls to pre­cisely mod­ify more than 4,000 lines of code. Acting as an ex­pert sys­tems ar­chi­tect, Kimi K2.6 an­a­lyzed CPU and al­lo­ca­tion flame graphs to pin­point hid­den bot­tle­necks and boldly re­con­fig­ured the core thread topol­ogy (from 4ME+2RE to 2ME+1RE). Despite the en­gine al­ready op­er­at­ing near its per­for­mance lim­its, Kimi K2.6 ex­tracted a 185% medium through­put leap (from 0.43 to 1.24 MT/s) and a 133% per­for­mance through­put gain (soaring from 1.23 to 2.86 MT/s).

In beta tests, K2.6 per­forms well on long-hori­zon cod­ing tasks in en­ter­prise eval­u­a­tions (by al­pha­betic or­der):

Based on the strong cod­ing ca­pa­bil­i­ties, Kimi K2.6 can turn sim­ple prompts into com­plete front-end in­ter­faces, gen­er­at­ing struc­tured lay­outs with de­lib­er­ate de­sign choices such as aes­thetic hero sec­tions, as well as in­ter­ac­tive el­e­ments and rich an­i­ma­tions, in­clud­ing scroll-trig­gered ef­fects. With strong pro­fi­ciency in lever­ag­ing im­age and video gen­er­a­tion tools, Kimi K2.6 sup­ports the gen­er­a­tion of vi­su­ally co­her­ent as­sets and con­tributes to higher-qual­ity, more salient hero sec­tions.

Moreover, Kimi K2.6 ex­pands be­yond sta­tic fron­tend de­vel­op­ment to sim­ple full-stack work­flows—span­ning au­then­ti­ca­tion to user in­ter­ac­tion to data­base op­er­a­tions for light­weight use cases like trans­ac­tion log­ging or ses­sion man­age­ment.

We es­tab­lished an in­ter­nal Kimi Design Bench, or­ga­nized into four cat­e­gories: Visual Input Tasks, Landing Page Construction, Full-Stack Application Development, and General Creative Programming. In com­par­i­son with Google AI Studio, Kimi K2.6 shows promis­ing re­sults and per­forms well across these cat­e­gories.

Below are ex­am­ples gen­er­ated by K2.6 Agent from a sin­gle prompt, with pre­con­fig­ured har­nesses and tools:

Scaling out, not just up. An Agent Swarm dy­nam­i­cally de­com­poses tasks into het­ero­ge­neous sub­tasks ex­e­cuted con­cur­rently by self-cre­ated do­main-spe­cial­ized agents.

Based on the K2.5 Agent Swarm re­search pre­view, Kimi K2.6 Agent Swarm demon­strates a qual­i­ta­tive leap in the agent swarm ex­pe­ri­ence. It seam­lessly co­or­di­nates het­ero­ge­neous agents to com­bine com­ple­men­tary skills: broad search lay­ered with deep re­search, large-scale doc­u­ment analy­sis fused with long-form writ­ing, and multi-for­mat con­tent gen­er­a­tion ex­e­cuted in par­al­lel. This com­po­si­tional in­tel­li­gence en­ables the swarm to de­liver end-to-end out­puts—span­ning doc­u­ments, web­sites, slides, and spread­sheets—within a sin­gle au­tonomous run.

The architecture scales horizontally to 300 sub-agents executing 4,000 coordinated steps simultaneously, a substantial expansion from K2.5's 100 sub-agents and 1,500 steps. This massive parallelization fundamentally reduces end-to-end latency while significantly enhancing output quality and expanding the operational boundaries of agent swarms.

It can also turn high-quality files such as PDFs, spreadsheets, slides, and Word documents into Skills. Kimi K2.6 captures and maintains a document's structural and stylistic DNA, enabling you to reproduce the same quality and format in future tasks.

Here are some ex­am­ples:

K2.6 demon­strates strong per­for­mance in au­tonomous, proac­tive agents such as OpenClaw and Hermes, which op­er­ate across mul­ti­ple ap­pli­ca­tions with con­tin­u­ous, 24/7 ex­e­cu­tion.

Unlike sim­ple chat-based in­ter­ac­tions, these work­flows re­quire AI to proac­tively man­age sched­ules, ex­e­cute code, and or­ches­trate cross-plat­form op­er­a­tions as a per­sis­tent back­ground agent.

Our RL infra team used a K2.6-backed agent that operated autonomously for 5 days, managing monitoring, incident response, and system operations, demonstrating persistent context, multi-threaded task handling, and full-cycle execution from alert to resolution. Here is K2.6's worklog (anonymized to remove sensitive information):

Kimi K2.6 delivers measurable improvements in real-world reliability: more precise API interpretation, more stable long-running performance, and enhanced safety awareness during extended research tasks.

Performance gains are quantified by our internal Claw Bench, an evaluation suite spanning five domains: Coding Tasks, IM Ecosystem Integration, Information Research & Analysis, Scheduled Task Management, and Memory Utilization. Across all metrics, Kimi K2.6 significantly outperforms Kimi K2.5 in task completion rates and tool invocation accuracy, particularly in workflows requiring sustained autonomous operation without human oversight.

Building on these robust orchestration capabilities, Kimi K2.6 extends proactive agents to Claw Groups as a research preview: a new instantiation of the Agent Swarm architecture.

Claw Groups embrace an open, heterogeneous ecosystem: multiple agents and humans operate as true collaborators. Users can onboard agents from any device, running any model, each carrying its own specialized toolkit, skills, and persistent memory context. Whether deployed on local laptops, mobile devices, or cloud instances, these diverse agents integrate seamlessly into a shared operational space.

At the cen­ter of this swarm, Kimi K2.6 serves as an adap­tive co­or­di­na­tor. It dy­nam­i­cally matches tasks to agents based on their spe­cific skill pro­files and avail­able tools, op­ti­miz­ing for ca­pa­bil­ity fit. When an agent en­coun­ters fail­ure or stalls, the co­or­di­na­tor de­tects the in­ter­rup­tion, au­to­mat­i­cally re­as­signs the task or re­gen­er­ates sub­tasks, and ac­tively man­ages the full life­cy­cle of de­liv­er­ables—from ini­ti­a­tion through val­i­da­tion to com­ple­tion.

We also want to thank the K2.6-powered agents in Claw Groups: our marketing team has been dogfooding them, refining human-agent workflows in practice. Using Claw Groups, we run end-to-end content production and launch campaigns, with specialized agents like Demo Makers, Benchmark Makers, Social Media Agents, and Video Makers working together. K2.6 coordinates the process, enabling agents to share intermediate results and turn ideas into consistent, fully packaged deliverables.

We are moving beyond simply asking AI a question or assigning AI a task, and entering a phase where humans and AI collaborate as genuine partners, combining strengths to solve problems collectively. Claw Groups marks our latest effort toward a future where the boundaries between “my agent,” “your agent,” and “our team” dissolve seamlessly into a collaborative system.

To re­pro­duce of­fi­cial Kimi-K2.6 bench­mark re­sults, we rec­om­mend us­ing the of­fi­cial API. For third-party providers, re­fer to Kimi Vendor Verifier (KVV) to choose high-ac­cu­racy ser­vices. Details: https://​kimi.com/​blog/​kimi-ven­dor-ver­i­fier

* We re­port re­sults for Kimi K2.6 and Kimi K2.5 with think­ing mode en­abled, Claude Opus 4.6 with max ef­fort, GPT-5.4 with xhigh rea­son­ing ef­fort, and Gemini 3.1 Pro with a high think­ing level.

* Unless oth­er­wise spec­i­fied, all Kimi K2.6 ex­per­i­ments were con­ducted with tem­per­a­ture = 1.0, top-p = 1.0, and a con­text length of 262,144 to­kens.

* Benchmarks with­out pub­licly avail­able scores were re-eval­u­ated un­der the same con­di­tions used for Kimi K2.6 and are marked with an as­ter­isk (*). Except where noted with an as­ter­isk, all other re­sults are cited from of­fi­cial re­ports.

* IMO-AnswerBench scores for GPT-5.4 and Claude 4.6 were ob­tained from https://​z.ai/​blog/​glm-5.1.

* Humanity’s Last Exam (HLE) and other rea­son­ing tasks were eval­u­ated with a max­i­mum gen­er­a­tion length of 98,304 to­kens. By de­fault, we re­port re­sults on the HLE full set. For the text-only sub­set, Kimi K2.6 achieves 36.4% ac­cu­racy with­out tools and 55.5% with tools.

* Kimi K2.6 was equipped with search, code-in­ter­preter, and web-brows­ing tools for HLE with tools, BrowseComp, DeepSearchQA, and WideSearch.

* For HLE-Full with tools, the max­i­mum gen­er­a­tion length is 262,144 to­kens with a per-step limit of 49,152 to­kens. We em­ploy a sim­ple con­text man­age­ment strat­egy: once the con­text win­dow ex­ceeds the thresh­old, only the most re­cent round of tool-re­lated mes­sages is re­tained.

* For BrowseComp, we re­port scores ob­tained with con­text man­age­ment us­ing the same dis­card-all strat­egy as Kimi K2.5 and DeepSeek-V3.2.

* For DeepSearchQA, no con­text man­age­ment was ap­plied to Kimi K2.6 tests, and tasks ex­ceed­ing the sup­ported con­text length were di­rectly counted as failed. Scores for Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro on DeepSearchQA are cited from the Claude Opus 4.7 System Card.

* For WideSearch, we report results under the “hide tool result” context management setting. Once the context window exceeds the threshold, only the most recent round of tool-related messages is retained.

* The test sys­tem prompts are iden­ti­cal to those used in the Kimi K2.5 tech­ni­cal re­port.

* Claw Eval was con­ducted us­ing ver­sion 1.1 with max-to­kens-per-step = 16384.

* For APEX-Agents, we eval­u­ate 452 tasks from the pub­lic 480-task re­lease, as done by Artificial Analysis (excluding Investment Banking Worlds 244 and 246, which have ex­ter­nal run­time de­pen­den­cies).

* Terminal-Bench 2.0 scores were ob­tained with the de­fault agent frame­work (Terminus-2) and the pro­vided JSON parser, op­er­at­ing in pre­serve think­ing mode.

* For the SWE-Bench se­ries of eval­u­a­tions (including Verified, Multilingual, and Pro), we used an in-house eval­u­a­tion frame­work adapted from SWE-agent. This frame­work in­cludes a min­i­mal set of tools—bash tool, cre­ate­file tool, in­sert tool, view tool, str­re­place tool, and sub­mit tool.

* All re­ported scores for cod­ing tasks are av­er­aged over 10 in­de­pen­dent runs.

* Settings with Python tool use max-to­kens-per-step = 65,536 and max-steps = 50 for multi-step rea­son­ing.

* MMMU-Pro fol­lows the of­fi­cial pro­to­col, pre­serv­ing in­put or­der and prepend­ing im­ages.
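The discard-old-tool-messages strategy described in the notes above can be sketched as follows. This is our own illustration, not Kimi's internals: the message shapes, the `tool` role name, and the token counter are all assumptions.

```python
def trim_context(messages, budget, count_tokens):
    """Once total size exceeds the budget, keep all non-tool messages
    but retain only the most recent contiguous run of tool messages."""
    total = sum(count_tokens(m["content"]) for m in messages)
    if total <= budget:
        return list(messages)  # under budget: nothing to discard
    # Group tool-message indices into contiguous runs.
    runs = []
    for i, m in enumerate(messages):
        if m["role"] == "tool":
            if runs and runs[-1][-1] == i - 1:
                runs[-1].append(i)
            else:
                runs.append([i])
    # Drop every run except the last (most recent) one.
    drop = {i for run in runs[:-1] for i in run}
    return [m for i, m in enumerate(messages) if i not in drop]
```

A strategy like this trades recall of old tool output for bounded context growth, which matches the per-step limits quoted above.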

...

Read the original on www.kimi.com »

4 556 shares, 38 trendiness

Qwen Studio

...

Read the original on qwen.ai »

5 553 shares, 32 trendiness

At Long Last, InfoWars Is Ours

Let me tell you a story. When I was a child, I suf­fered from night ter­rors. It was al­ways the same dream: I could hear my fam­ily and neigh­bors wail­ing in the street out­side as they were pur­sued and then de­stroyed by a name­less malev­o­lent force, some­thing nei­ther I nor any­one else could con­trol, a great dark­ness that was, some­how, all my fault.

Today, that child­hood dream is fi­nally com­ing true. Today I can fi­nally say the sweet­est nine or 10 words in the English lan­guage: Global Tetrahedron has com­pleted its plan to con­trol InfoWars.com.

I’ve had a lot of time to think about InfoWars in the last year and a half. As the sea­sons have changed, my am­bi­tions for the pro­ject have grown grander, cru­eler, bet­ter aligned with mar­ket data. Come, friends, and imag­ine with me…

Imagine a roar­ing arena packed to the rafters with patho­log­i­cal liars. High above you in the nose­bleeds are pod­cast­ers, scream­ing that you’ll die if you don’t buy their skin­care prod­ucts. Below, on the floor, imag­ine de­monic bat­tal­ions of su­per-in­flu­encers phys­i­cally forc­ing peo­ple into home fit­ness de­vices de­signed to dis­man­tle their bod­ies bone by bone and re­assem­ble them into a grotesque statue of your­self. Out of the throngs, an ex­tremely sick look­ing man ap­proaches you. He puts his hands on your shoul­ders. He ex­plains that he is your life coach and that you owe him $800.

Such is the InfoWars I en­vi­sion: An in­fi­nite vir­tual sur­face teem­ing with ads. Not just ads, but scams! Not just scams, but lies with no ob­ject, free rad­i­cal mis­in­for­ma­tion, sen­tences and im­ages so poorly thought out that they are un­healthy even to view for just a few sec­onds. The InfoWars of old was only the pro­to­type for the hell I know we can build to­gether: A dig­i­tal plat­form where, every day, vis­i­tors sac­ri­fice them­selves at al­tars of delu­sion and mis­ery, their minds fully dis­in­te­grat­ing on con­tact.

With this new InfoWars, we will de­moc­ra­tize psy­cho­log­i­cal tor­ture, wel­com­ing bru­tal and sadis­tic ideas from every­one, even the very stu­pid­est among us. It will be like the Manhattan Project, only in­stead of a bomb, we will be build­ing a web­site.

The InfoWars of to­mor­row will con­verge into a swirling vor­tex of con­tent about con­tent, tal­ent ac­quir­ing tal­ent, rings of con­cen­tric me­dia merg­ers pro­cess­ing all hu­man artistry into one end­lessly di­gestible slurry. This will be a dank, sun­less place, one where panic and cap­i­tal feed on each other like twins in the womb of a hulk­ing, un­know­able mon­ster—a mon­ster known by many names, but which I like to call mod­ern-day America.

All of this is to say that I be­lieve in us. I be­lieve that with the new InfoWars, we can al­chem­ize the pi­o­neer­ing spirit of am­a­teur in­quiry, the profit-max­i­miz­ing drive of cor­po­ra­tions, and the cold men­tal clar­ity that comes only with dis­ci­plined daily in­ges­tion of mind- and body-al­ter­ing chem­i­cals. If we can do that, what other great things can we do to­gether?

I don’t yet know, but I’m ex­cited to find out. Welcome home, war­riors. The fu­ture be­longs to us. We’re writ­ing the story now. It’s go­ing to be a long one, and it’s go­ing to be a bad one.

So set­tle in. Make your­self com­fort­able. Buy a tote bag.

Nothing can stop us now that we’re in charge of a web­site.

...

Read the original on theonion.com »

6 385 shares, 21 trendiness

Saunas Lower Your Heart Rate More Than Exercise

Saunas have been around since ancient times in Finland and have always been considered therapeutic[1]. A sauna is a hot, dry environment used to stimulate our cardiovascular system. During extreme heat exposure, our heart rate rises and our vessels dilate to increase the delivery of blood volume in order to protect the body[2].

This extra pressure on the heart is known to have long-term health benefits[3]. The heat exposure also promotes sweating and therefore the elimination of toxins, including those generated in the process of repairing small muscle tears after exercise[4]. It is for this reason that saunas are also considered great for recovery. None of this is news; at the end of the day, isn't that what Roman baths were built for? For recovery after battle[5]!

However, most studies have looked at the benefits of frequent sauna bathing and its impact on long-term health. Motivated to understand the immediate physiological response to saunas, we looked at same-day effects across ~59,000 daily records from 256 users.

We used sim­ple paired t-test eval­u­a­tions to as­sess the im­me­di­ate same-day ef­fects of saunas.

Sauna days were as­so­ci­ated with:

That fits our in­tu­ition: many peo­ple sauna af­ter a work­out.

Sauna days also showed lower min­i­mum heart rate com­pared to non‑sauna days. Importantly, this ef­fect re­mains even af­ter con­trol­ling for ac­tiv­ity, which sug­gests the lower night­time heart rate is­n’t sim­ply due to ex­er­cise. The dif­fer­ence be­tween sauna and non-sauna days is on av­er­age 5% (3bpm) which is a no­tice­able phys­i­o­log­i­cal change.

These re­sults were sta­tis­ti­cally ro­bust (FDR‑corrected p < 0.05 and Cohen’s d > 0.2), sup­port­ing the idea that sauna use may be linked to bet­ter same‑day re­cov­ery.
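As a sketch of that methodology (stdlib-only, with invented numbers; this is not the study's actual data or code), a paired comparison with Benjamini-Hochberg FDR control and a paired Cohen's d looks like this:

```python
import math

def paired_cohens_d(a, b):
    """Cohen's d for paired samples: mean difference over SD of differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / sd

def benjamini_hochberg(pvals, alpha=0.05):
    """FDR control: flag which p-values survive BH correction."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    max_rank = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            max_rank = rank  # largest rank whose p-value passes its threshold
    passed = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_rank:
            passed[i] = True
    return passed

# Nighttime minimum HR on matched sauna vs. non-sauna days (made-up values).
sauna = [55, 58, 52, 60, 57, 54]
non_sauna = [58, 60, 56, 63, 59, 58]
d = paired_cohens_d(sauna, non_sauna)  # negative: lower HR on sauna days
print(round(d, 2), benjamini_hochberg([0.001, 0.04, 0.30]))
# prints: -3.35 [True, False, False]
```

An effect passes the reported bar when its BH-adjusted p-value is below 0.05 and |d| exceeds 0.2, the conventional threshold for a small but meaningful effect.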

Females showed larger ac­tiv­ity in­creases on sauna days, which may re­flect more con­sis­tent sauna use on work­out days. However, fe­males showed a smaller drop in min­i­mum heart rate than males on sauna days.

As we discussed in previous blogs, the menstrual cycle can influence recovery and nighttime heart rate. For this reason, we evaluated sauna effects across the follicular and luteal phases and observed statistically higher activity and lower heart rate when women use the sauna in their luteal phase. In fact, nighttime heart rate was only meaningfully lower (Cohen's d > 0.2) compared to non-sauna days during the luteal phase. In other words, the benefits of saunas seem to appear only during the luteal phase.

Sauna use is part of a re­cov­ery‑ori­ented day. Sauna days are more ac­tive, which fits how peo­ple ac­tu­ally use saunas, of­ten as a post‑work­out rou­tine. Yet even af­ter ac­count­ing for ac­tiv­ity, night­time min­i­mum heart rate is lower on sauna days, sug­gest­ing a phys­i­o­log­i­cal re­cov­ery sig­nal be­yond ex­er­cise alone.

Mechanistically, this pattern is consistent with known heat-stress physiology: heart rate increases during sauna exposure, followed by recovery dynamics that can reflect increased parasympathetic influence during cooling[6][7]. Among women, the strongest recovery signal in our dataset appears in the luteal phase, where the effect size crosses a meaningful threshold.

[1] Ketelhut, S., & Ketelhut, R. G. (2019). The blood pressure and heart rate during sauna bath correspond to cardiac responses during submaximal dynamic exercise. Complementary Therapies in Medicine, 44, 218–222. https://doi.org/10.1016/j.ctim.2019.05.002
[2] Laukkanen, T., Kunutsor, S. K., Khan, H., et al. (2018). Sauna bathing is associated with reduced cardiovascular mortality and improves risk prediction in men and women: a prospective cohort study. BMC Medicine, 16, 219. (PMC: PMC6262976)
[3] Kuan, W. H., Chen, Y. L., & Liu, C. L. (2022). Excretion of Ni, Pb, Cu, As, and Hg in Sweat under Two Sweating Conditions. International Journal of Environmental Research and Public Health, 19(7), 4323. https://doi.org/10.3390/ijerph19074323
[4] Marcussen, W. (2019, August 23). The Roman Baths in Bath: A Deep Dive into Britain's Ancient History. World History Encyclopedia. https://www.worldhistory.org/article/1427/the-roman-baths-in-bath--a-deep-dive-into-britains/
[5] Laukkanen, J. A., Laukkanen, T., & Kunutsor, S. K. (2018). Cardiovascular and Other Health Benefits of Sauna Bathing: A Review of the Evidence. Mayo Clinic Proceedings, 93(8), 1111–1121. (PubMed: 30077204)

...

Read the original on tryterra.co »

7 370 shares, 24 trendiness

A grammar of graphics for SQL

Today, we are super excited to announce the alpha release of ggsql. As the name suggests, ggsql is an implementation of the grammar of graphics based on SQL syntax, bringing rich, structured visualization support to SQL. It is ready for use in Quarto, Jupyter notebooks, Positron, and VS Code, among others.

In this post we will go over some of the motivations that led us to develop this tool, as well as give you ample examples of its use, so you can hopefully get as excited about it as we are.

Before we dis­cuss the why, let’s see what ggsql is all about with some ex­am­ples.

To get our feet wet, let's start with the hello-world of visualizations: a scatterplot, using the built-in penguins dataset:
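Based on the line-by-line breakdown that follows, the query has roughly this shape. The MAPPING keyword and the `AS` aliasing here are our assumptions about the concrete syntax; consult the ggsql documentation for the exact spelling:

```sql
VISUALIZE penguins MAPPING bill_len AS x, bill_dep AS y
DRAW point
```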

That was­n’t too bad. Sure, it has the ver­bosity of SQL, but that also means that you can speak your plot code out loud and un­der­stand what it does. We can break down what is go­ing on here line-by-line:

We ini­ti­ate the vi­sual query with VISUALIZE and pro­vide a map­ping from the built-in pen­guins dataset, re­lat­ing x to the data in the bil­l_len col­umn, and y in the bil­l_dep col­umn.

We draw a point layer that, by de­fault, uses the map­ping we de­fined at the top.

With this in place, we can be­gin to add to the vi­su­al­iza­tion:

We see that a sin­gle ad­di­tion to the map­pings adds col­ored cat­e­gories to the plot. This grad­ual evo­lu­tion of plot code is one of the biggest strengths of the gram­mar of graph­ics. There are no pre­de­fined plot types, only mod­u­lar parts that can be com­bined, added, and re­moved. To fur­ther em­pha­size this, let’s add a smooth re­gres­sion line to the plot:
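Those two additions might look like the following sketch (the keywords are our guess at the concrete syntax, not verified against the ggsql docs): one extra mapping for color, and one extra layer stacked on the points.

```sql
VISUALIZE penguins MAPPING bill_len AS x, bill_dep AS y, species AS color
DRAW point
DRAW smooth
```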

We add a new layer on top of the point layer. This layer also bor­rows the same map­ping as the point layer. Since we color by species, the smooth line is split into one for each species.

We can con­tinue do­ing this, adding more map­pings, adding or swap­ping lay­ers, con­trol­ling how scales are ap­plied etc un­til we ar­rive at the plot we need, how­ever sim­ple or com­pli­cated it may be. In the above ex­am­ple we may well end up de­cid­ing we are more in­ter­ested in look­ing at the dis­tri­b­u­tion of species across the three is­lands the data was col­lected from:

While a com­pletely dif­fer­ent plot, you can see how much of the code from the pre­vi­ous plot car­ries over.

With our first couple of plots under the belt, let's move on to a complete example. It will contain parts we have not seen before, but don't worry, we will go through them below, even the parts we've already seen. The example is an adaptation of a visualization created by Jack Davison for TidyTuesday.

That was a lot of code, but on the flip side we have now covered many of the most important aspects of the syntax with one example.

At the top­most level there are two parts to this query: The SQL query, and the vi­su­al­iza­tion query. The SQL query is any­thing from the be­gin­ning to the VISUALIZE clause. It is your stan­dard SQL, and it ac­cepts any­thing your back­end ac­cepts (in this blog post we use a DuckDB back­end). The re­sult of the query is fun­nelled di­rectly into the vi­su­al­iza­tion rather than be­ing re­turned as a table like you’d nor­mally ex­pect.

Since the point of this post is not to teach you SQL we won’t spend much more time dis­cussing the SQL query part. The main take away is that every­thing be­fore the VISUALIZE clause is pure SQL, any re­sult­ing table is au­to­mat­i­cally used by your vi­su­al­iza­tion, and any table or CTE cre­ated there is avail­able for ref­er­enc­ing in the vi­su­al­iza­tion query.
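As a hedged illustration of that hand-off (the table and column names here are invented, and the visualization keywords are our guess at the syntax), a query can aggregate first and visualize the result directly:

```sql
WITH daily AS (              -- ordinary SQL: runs entirely on the backend
  SELECT day, count(*) AS n_orders
  FROM orders
  GROUP BY day
)
VISUALIZE MAPPING day AS x, n_orders AS y   -- the CTE's result feeds the plot
DRAW line
```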

As we saw in the first ex­am­ples, the SQL query part is op­tional. If your data is al­ready in the right shape for plot­ting you can skip it and in­stead name the source di­rectly in the VISUALIZE clause:

Now, let's look at the visual query: everything from VISUALIZE onwards. VISUALIZE marks the end of the SQL query and the beginning of the visualization query (or VISUALISE for those who prefer UK spelling). It can stand on its own or, as we do here, take one or more mappings, which will become defaults for every subsequent layer. Mappings are purely for relating data to abstract visual properties. A mapping is like a SELECT where you alias columns to visual properties (called aesthetics in the grammar of graphics). In the visualization above we say that the age column holds the values used for x (position along the x axis) and the category column holds the values used for fill (the fill color of the entity). We do not say anything about how to draw it yet.

Following the VISUALIZE query we have a DRAW clause. DRAW is how we add layers to our visualization. There is a large selection of different layer types in ggsql. Some are straightforward: e.g. point for drawing a scatterplot. Some are more involved: histogram (which we use here) requires calculation of derived statistics like binned counts. A visualization can have any number of layers, and layers will be rendered in the sequence they are defined. DRAW has a sibling clause called PLACE. It is used for annotation and works like DRAW except that it doesn't get its data from a table but rather from provided literal values. It follows that our visualization above contains three layers: a histogram layer showing data from our table, a rule annotation layer showing precomputed mean values for each category, and a text annotation layer adding context to the visualization. It is worth mentioning that a layer does not correspond to a single graphical entity. Like with the text layer above, each layer can render multiple separate entities of its type, so there is no need to have e.g. 3 line layers to render line plots for 3 different categories.

After the DRAW and PLACE clauses we have a SCALE clause. This clause con­trols how data val­ues are trans­lated into val­ues that are mean­ing­ful for the aes­thetic. In our case, the cat­e­gory col­umn holds the strings Age at mis­sion” and Age at se­lec­tion” which does­n’t in it­self trans­late to a color value. The clause SCALE fill TO ac­cent tells ggsql to use the accent” color palette when con­vert­ing the val­ues mapped to fill to ac­tual col­ors. Scales can be used for much more, like ap­ply­ing trans­for­ma­tions to con­tin­u­ous data, defin­ing break points, and set­ting spe­cific scale types (like or­di­nal or binned).

The last clause in our vi­sual query is LABEL which al­lows us to add or mod­ify var­i­ous text la­bels like ti­tle, sub­ti­tle, and axis and leg­end ti­tles.

That was a mouth­ful. But there are two very sil­very lin­ings to it all:

You now know the most im­por­tant as­pects of the syn­tax (there are more, of course, but you can grow into that)

Many vi­su­al­iza­tion queries will be much sim­pler than the one above

We have al­ready seen ex­am­ples of shorter vi­sual queries above but let’s con­tinue with a box­plot of as­tro­naut birth year split by sex:
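A hedged sketch of that query, using the same assumed keyword spellings as before (the real syntax may differ), is just a mapping plus a single layer:

```sql
VISUALIZE astronauts MAPPING sex AS x, year_of_birth AS y
DRAW boxplot
```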

That's much shorter than the last plot code but still, if you are coming from a different plotting system you may even think this is overly verbose (e.g. compared to something like boxplot(astronauts.sex, astronauts.year_of_birth)). Yes, it is longer, but it is also more structured, composable, and self-descriptive. These features (which are a direct result of its grammar of graphics lineage) mean that both you and your future LLM coding buddy will have an easier time internalizing the workings of all types of plots that can be made. The 18 years of dominance of ggplot2 (which shares these features) in the R ecosystem is a testament to this.

As an ex­am­ple, let’s change the above plot to in­stead show the same re­la­tion­ship as a jit­tered scat­ter­plot.

Or per­haps the jit­ter fol­lows the dis­tri­b­u­tion of the data so it dou­bles as a vi­o­lin plot:

As you can see, the syntax and composable nature make visualization iteration very ergonomic, something that is extremely valuable in both exploratory analyses and visualization design.

Writing a new vi­su­al­iza­tion li­brary from scratch is a big task and you might won­der why we’re do­ing it again. Some of the rea­sons are:

* We want to en­gage with and help data an­a­lysts and data sci­en­tists that pre­dom­i­nantly work in SQL

* SQL and the gram­mar of graph­ics fit to­gether ex­tremely well

* We want to create an extremely powerful, code-based visualization tool that doesn't require an entire programming language (like R or Python)

* LLMs speak SQL very well, which also presents a new interface to data visualization creation

* We have learned so much from 18 years of ggplot2 development that we're excited to apply to a blank canvas

While first R and then Python captured all the attention of the data science revolution, SQL chugged along as the reliable and powerful workhorse beneath it all. Many people who work with data do so only or predominantly in SQL. The choices they have for visualizing their data are often suboptimal in our view:

* Export the data and use R or Python which may not be within their com­fort zone

* Use a GUI-based BI tool with poor sup­port for re­pro­ducibil­ity

* Rely on one of the few tools that exist for creating visualizations directly within the query, which we feel are not powerful or ergonomic enough

Our goal when de­sign­ing ggsql was that the syn­tax should im­me­di­ately make sense to SQL users, tap­ping into their ex­pec­ta­tion of com­pos­able, de­clar­a­tive clauses.

Apart from offering a better way to visualize their data, ggsql is also a way to invite SQL users into our rich ecosystem of code-based report generation and sharing built on top of Quarto.

If you are reading this with no prior knowledge of SQL, here's a very brief recap: SQL is a domain-specific language for manipulating relational data stored in one or more tables. The syntax is based on the concept of relational algebra, which is a structured way to think about data manipulation operations. The semantics defines a set of modular operations that are declarative rather than functional, allowing the user to compose very powerful and custom manipulations using a well-defined set of operations.

If you are read­ing this with no prior knowl­edge of the gram­mar of graph­ics, here’s a very brief re­cap: The gram­mar of graph­ics is a the­o­ret­i­cal de­con­struc­tion of the con­cepts of data vi­su­al­iza­tion into its mod­u­lar parts. While purely the­o­ret­i­cal, tools such as gg­plot2 have im­ple­mented the idea in prac­tice. The se­man­tics de­fines a set of mod­u­lar op­er­a­tions that are de­clar­a­tive rather than func­tional, al­low­ing the user to com­pose very pow­er­ful and cus­tom vi­su­al­iza­tions us­ing a well-de­fined set of op­er­a­tions.

From the above (slightly hyperbolic) overview it is clear that SQL and the grammar of graphics have a lot in common in their approach to their respective domains. Together they can offer a very powerful and natural solution to the full pipeline from raw data to final visualization.

Why does it matter that ggplot2 and plotnine require R and Python installed, respectively? There are clear benefits to a single, focused executable to handle data visualization:

* Embedding a small ex­e­cutable in other tools is much eas­ier than bundling R/Python (or re­quir­ing them to be in­stalled)

* A smaller scope makes it eas­ier to sand­box and pre­vent ma­li­cious code ex­e­cu­tion (either de­lib­er­ately or in er­ror)

Both of the above points make ggsql a much more com­pelling op­tion for in­te­grat­ing into tools such as AI agents as­sist­ing you in data analy­sis, or code based re­port­ing tools that may ex­e­cute code in dif­fer­ent en­vi­ron­ments.

You may think we have had to swal­low some bit­ter pills by mov­ing away from an in­ter­preted lan­guage, but it has also given us a lot. Most im­por­tantly, the rigid struc­ture means that we can ex­e­cute the whole data pipeline as a sin­gle SQL query per layer on the back­end. This means that if you want to cre­ate a bar plot of 10 bil­lion trans­ac­tions you only ever fetch the count val­ues for each bar from your data ware­house, not the 10 bil­lion rows of data. The same is true for more com­pli­cated layer types such as box­plots and den­sity plots. This is in stark con­trast to most vi­su­al­iza­tion tools which must first ma­te­ri­al­ize the com­plete data, then per­form the nec­es­sary com­pu­ta­tions on it, then plot it.
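To make that concrete with a hypothetical transactions table (names are ours, not from the ggsql docs): for a bar layer counting rows per category, the only query that needs to run in the warehouse is an aggregation like this, returning one row per bar rather than the raw rows.

```sql
SELECT category, count(*) AS n
FROM transactions
GROUP BY category   -- only these few summary rows leave the warehouse,
                    -- not the 10 billion underlying transactions
```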

LLMs have proven very effective at translating natural language into SQL, and we're bullish that they can be just as effective with ggsql. We've already seen evidence of this in querychat, where you can now visually explore data using natural language via ggsql. And, since ggsql is a much safer and lighter runtime than R or Python, you can much more confidently ship coding agents into a production environment.

18 years of ggplot2 development and maintenance also means 18 years of thinking about data visualization syntax, use, and design. While not trying to be boastful, we do believe that gives us some expert knowledge on the subject matter. However, not all of this knowledge can be poured back into ggplot2. There are decisions and expectations established many years ago that we have to honor, or at least only challenge very gradually (which we do on occasion).

ggsql is a blank slate. Not only in the sense that we are build­ing it from the ground up, but also in that it is built for an en­vi­ron­ment with no es­tab­lished ex­pec­ta­tions for a vi­su­al­iza­tion tool. I can­not stress how lib­er­at­ing and in­vig­o­rat­ing this has felt, and I am pos­i­tive that this shines through in how ggsql feels to the user.

We are nearing the end of a rather long announcement, so thanks for sticking with us. In the very first line we called this an alpha release, which implies that we are not done yet. To get you as excited about the future as you hopefully are about the present state of ggsql, here is a non-exhaustive list of things we want to add.

* New high-per­for­mance writer, writ­ten from the ground up in Rust

If you are a cur­rent gg­plot2 user you may have read this with a mix of fear and ex­cite­ment (or maybe just one of them). Does this mean that we are leav­ing gg­plot2 be­hind at Posit to fo­cus on our new shiny toy? Not at all! gg­plot2 is very ma­ture and sta­ble at this point but we will con­tinue to sup­port and build it out. We also hope that ggsql can pay back all the ex­pe­ri­ence from gg­plot2 that went into its de­vel­op­ment by in­form­ing new fea­tures in gg­plot2.

If you can’t wait to learn more about ggsql and begin to use it, head to the Getting started section of the ggsql website for installation instructions and a tutorial, or head straight to the documentation to discover everything ggsql is capable of. We can’t wait for you to try it out and to hear about your experiences with it.

...

Read the original on opensource.posit.co »

8 326 shares, 19 trendiness

AI Resistance is Growing

As the internet chokes on ever more slop, the one thing that gives me hope is this: people seem to loathe AI, and are actively resisting it. This won’t be a long post, as I’m personally so tired of writing and thinking about AI at this point in time, but I do want to draw your attention here to some recent anti-AI stuff that’s worth discussing.

r/PoisonFountain, created by individuals who claim to be concerned AI industry insiders, is a community with one goal: encourage as many people as possible to feed huge quantities of trash data (poison) to all of the web crawlers out there that are scraping our work for AI training sets. They aim to serve one terabyte of poison per day to these crawlers by the end of 2026.

The poison fountain itself is hosted on rnsaffn.com, sandwiched between several garbage links that look irresistible to AI crawlers; it produces a page of code that seems correct at first glance, but is actually riddled with subtle errors that render the code unusable. Filtering out these errors is possible, but expensive at scale. Since these companies can’t improve their AI models without fresh data created by human beings, the idea here is to waste their time and make it expensive for them to steal our data.
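The shape of that idea can be sketched in a few lines. The snippet below is purely illustrative — it is not the actual rnsaffn.com implementation, and the user-agent markers and function names are my own invention — but it shows the trick: serve code that parses cleanly and looks plausible to a skim, yet computes the wrong answer.

```python
import random

# Hypothetical sketch of the "poison fountain" idea: generate code that a
# syntax checker (or a skimming human) accepts, but whose logic is broken.
# Not the real rnsaffn.com implementation; marker strings are examples only.

BOT_MARKERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def is_probable_crawler(user_agent: str) -> bool:
    """Crude user-agent check; real deployments also rely on junk-link traps."""
    return any(marker in user_agent for marker in BOT_MARKERS)

def poison_snippet(rng: random.Random) -> str:
    """Emit a function that looks like a correct mean() but is subtly wrong:
    it silently drops the first element and inflates the denominator."""
    name = rng.choice(["average", "mean", "avg"])
    return (
        f"def {name}(values):\n"
        f"    total = 0\n"
        f"    for v in values[1:]:   # looks like a normal loop\n"
        f"        total += v\n"
        f"    return total / (len(values) + 1)\n"
    )
```

A server would call `is_probable_crawler()` on each request and hand suspected bots an endless stream of `poison_snippet()` output instead of real pages; anything trained on it learns code that is wrong in ways that are cheap to generate and expensive to filter.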

Miasma is one example of a tool that uses the fountain to serve massive amounts of garbage to malicious bots. The developer describes it as “an endless buffet of slop for the slop machines,” which is delightful. I can’t use Miasma with my site’s setup, but it may be of interest to those of you who can. I deliver my trash to crawlers using other means … some visible, some invisible. While I can’t serve it up to anywhere near the same extent as Miasma can, I do catch sneaky bots with my junk links every day.

If you’re pro-AI and feel outraged on behalf of these companies that anyone would dare try to make life difficult for them, please know that this is simply a case of tit for tat. The teams that send AI crawlers out into the world wide web are DDoSing small websites on the regular and raising hosting fees for everyone with their voracious desire to devour the entire internet. They do not obey robots.txt, and often hide their crawlers behind residential proxies. If they can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.

Caution: I’m messing with automated visitors in plain sight as an experiment. 🤭 To avoid false positives, human visitors are encouraged to ignore the link in this box.

Someone Figured Out How To Poison AI Video Summarizers

Thanks to r/PoisonFountain, I learned that YouTube has no .ass. I could try to explain what that means, but the video is hilarious and well worth a watch, so I’ll leave it up to @f4mi.

Sadly, it looks like the poisoning technique used by the creator in this video no longer works; YouTube presumably fixed the transcript loophole she was exploiting here. I plugged a few of her video URLs into a few different video summarizers, and they all failed to tell me anything that wasn’t actually in the videos.
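For the curious, the general shape of the (since-patched) trick went something like this. The sketch below is my own simplified illustration based on the standard .ass (Advanced SubStation Alpha) subtitle format, not @f4mi’s actual files: decoy text is made invisible to viewers with override tags, while anything that reads the raw subtitle text ingests it anyway.

```python
def make_poisoned_ass(visible: str, decoy: str) -> str:
    """Build a minimal .ass subtitle file with one visible line and one
    decoy line rendered invisible via override tags: {\\alpha&HFF&} sets
    full transparency and {\\fs1} shrinks the font to 1pt. A viewer sees
    only `visible`; a transcript scraper sees `decoy` too. Illustrative
    sketch only, not the exact technique from the video."""
    header = (
        "[Script Info]\n"
        "ScriptType: v4.00+\n\n"
        "[Events]\n"
        "Format: Layer, Start, End, Style, Name, "
        "MarginL, MarginR, MarginV, Effect, Text\n"
    )
    visible_line = (
        f"Dialogue: 0,0:00:00.00,0:00:05.00,Default,,0,0,0,,{visible}\n"
    )
    decoy_line = (
        "Dialogue: 0,0:00:00.00,0:00:05.00,Default,,0,0,0,,"
        f"{{\\alpha&HFF&\\fs1}}{decoy}\n"
    )
    return header + visible_line + decoy_line
```

The countermeasure is equally simple, which is presumably what YouTube did: strip or ignore styling overrides before treating subtitle text as a transcript.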

Still, it’s great to see people trying and succeeding at fucking with the slop machines, even if that success is only temporary.

All over Reddit and other social media platforms, I’m increasingly seeing stuff like this:

I mean, sure, it’s literally misinformation, and you could indeed argue that there’s already enough misinformation on the internet as it is … but it’s important to note here that bots, not people, are the target audience of this misinformation.

I think most of us can understand from the context that Idris Elba did not ever play Raymond’s mother in an episode of Everybody Loves Raymond. Automated web scrapers, however, will just see good human-generated data, which is what they want. They’re going to merrily scrape that garbage from Reddit and send it back to OpenAI or whomever, who will then have to waste resources removing it from their training data sets.

This isn’t exactly the modern equivalent of angry textile workers destroying power looms, but (if you’ll forgive the pun) it’s cut from the same cloth. The difference here (I hope) is that if enough of us pollute public spaces with misinformation intended for bots, it might be enough to compel AI companies to rethink the way they source training data.

People hate what AI is doing to our world. They hate what it’s doing to our online communities, what it’s doing to our environment, what it’s doing to our elementary schools and universities, what it’s doing to at-risk individuals with mental health issues, what it’s doing (and may yet still do) to our livelihoods. While there are certainly plenty of people out there who happily consume and generate massive amounts of AI slop, they are (at least in my anecdotal experience within my own social circles, both offline and online) dwarfed by people who detest and want nothing to do with this technology.

Hatred of a thing seldom leads anywhere good, as recent events demonstrate, but I do think that if people are able to translate what they’re feeling about AI into peaceful, legal acts of resistance, then we might actually stand to change the way Silicon Valley does things.

To see what people are saying about this post, check it out on Mastodon. Want to know why this blog doesn’t have a comments section? I wrote about that here.

If you enjoy my writing and want to read more of it, check out my last post or browse through my blog archive.

...

Read the original on stephvee.ca »

9 311 shares, 13 trendiness

How Tesla hid fatal accidents to keep testing autonomous driving on public roads

The autonomous car promised a dream; for some road users, it has turned into a nightmare. An investigation reveals how Elon Musk and Tesla used public roads as a test track, rushing an AI-based autonomous driving system to market.

The automaker kept quiet about thousands of serious incidents. Some cost drivers and passengers their lives. Other road users found themselves involved without ever knowing it.

The investigation draws on a massive leak of internal Tesla data. The documents reveal the scale of the problem: the automaker had known for years that its systems were failing.

The files show thousands of customer complaints. More than 2,400 concern spontaneous acceleration, and the number of accidents exceeds 1,000. In many cases, the recorded status was “unresolved”.

Some Tesla cars accelerated or braked abruptly for no reason. In artificial intelligence, these malfunctions are called “hallucinations”, as when ChatGPT gives a completely wrong answer.

On the road, the consequences are disastrous. The autonomous driving system can misinterpret its environment, and at high speed these errors become deadly.

“I didn’t know Autopilot existed. When I found out, I felt like a guinea pig” - Dillon Angulo, involved in an accident with a Tesla

The problem affects all road users. Many never agreed to be Tesla’s guinea pigs, yet they find themselves exposed to the failures of the “Autopilot” system all the same.

>> Read more: Drivers still “guinea pigs” for driver-assistance systems

Naibel Benavides was 22 years old. A pedestrian, she died in an accident involving a Tesla in “Autopilot” mode. Her partner Dillon Angulo survived with serious injuries.

“I didn’t know Autopilot existed. When I found out, I felt like a guinea pig,” says Dillon Angulo, who still suffers from the consequences of the accident today.

Naibel’s family decided to take Tesla to court, accusing the automaker of hiding crucial information. Tesla has always blamed the driver.

Investigators ran into unusual obstacles. The accident data should have been available in the vehicle’s “black box”, yet Tesla claimed the data was corrupted.

The victims’ lawyers brought in experts, who managed to recover the deleted data. That information proves Tesla knew about the failure from the very evening of the accident.

The car in “Autopilot” mode had detected the obstacles. It nevertheless did nothing to avoid the collision; only an alert sounded just before impact.

A jury ordered Tesla to pay more than 243 million dollars in damages. The sanction is a first among “Autopilot” cases. Jurors found both Tesla and the driver responsible.

“This is a historic day for justice,” said the victims’ lawyer. The verdict shows that manufacturers cannot use public roads as a laboratory.

Tesla tried to have the verdict overturned. At the end of February, a federal judge upheld the sanction against the automaker. The company can still appeal.

Tesla faces several investigations in the United States. The Department of Justice is examining whether the automaker misled consumers. The National Highway Traffic Safety Administration is investigating as well.

>> Read also: Tesla avoids a lengthy trial over its driver-assistance technology

Whistleblowers have testified to the authorities. They describe a company that prioritizes speed over safety. The test version of autonomous driving was rushed to market even though several employees had warned management about the dangers of “Autopilot”.

Experts expect further lawsuits to follow. The first verdict opens the way to new trials against Tesla.

...

Read the original on www.rts.ch »

10 308 shares, 24 trendiness

Deezer says 44% of songs uploaded to its platform daily are AI-generated

Deezer announced on Monday that AI-generated tracks now represent 44% of all new music uploaded to its platform. The company said it’s receiving almost 75,000 AI-generated tracks per day and more than two million per month.

The consumption of AI-generated music on the platform is still very low, at 1-3% of total streams, and 85% of these streams are detected as fraudulent and demonetized by the company.

The latest figure from Deezer highlights a continuing surge in AI-generated music uploads to the platform. Deezer reported receiving around 60,000 AI tracks per day in January, up from 50,000 in November, 30,000 in September, and just 10,000 in January 2025, when it first launched its AI-music detection tool.
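As a quick sanity check on those figures (assuming the 44% share and the roughly 75,000-per-day count describe the same pool of daily uploads), the implied totals work out as follows:

```python
# Back-of-the-envelope check on Deezer's reported numbers, assuming the
# 44% share and the ~75,000/day AI-track count cover the same upload pool.
ai_tracks_per_day = 75_000
ai_share = 0.44

total_uploads_per_day = ai_tracks_per_day / ai_share
human_uploads_per_day = total_uploads_per_day - ai_tracks_per_day

print(round(total_uploads_per_day))   # roughly 170,000 tracks/day overall
print(round(human_uploads_per_day))   # roughly 95,000 human-made tracks/day
```

In other words, AI uploads are now of the same order of magnitude as human ones on the platform, even though they account for only 1-3% of what people actually listen to.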

Songs tagged as AI-generated on Deezer are automatically removed from algorithmic recommendations and excluded from editorial playlists. The company announced today that it will no longer store hi-res versions of AI tracks.

The updated figure comes as an AI-generated track topped the iTunes charts last week in the United States, United Kingdom, France, Canada, and New Zealand.

“AI-generated music is now far from a marginal phenomenon, and as daily deliveries keep increasing, we hope the whole music ecosystem will join us in taking action to help safeguard artists’ rights and promote transparency for fans,” said Deezer CEO Alexis Lanternier in a press release. “Thanks to our technology and the proactive measures we put in place more than a year ago, we have shown that it’s possible to reduce AI-related fraud and payment dilution in streaming to a minimum.”

Today’s announcement also follows a survey Deezer conducted last November, which found that 97% of participants couldn’t tell the difference between fully AI-generated music and human-made music.

The survey also found that 52% of respondents said 100% AI-generated songs shouldn’t be included in the main charts alongside human-made songs. Meanwhile, 80% said 100% AI-generated music should be clearly labeled for listeners.

Deezer started tagging AI tracks at the platform level in June 2025, becoming the first streaming platform to do so. Over the course of 2025, Deezer tagged more than 13.4 million AI tracks on its platform.

In February, French streaming service Qobuz announced plans to tag AI-generated content on its platform. Other major streaming services, such as Spotify and Apple Music, take different approaches to AI-generated music, often combining filters that identify low-quality AI music with other transparency efforts left up to the distributors.

...

Read the original on techcrunch.com »
