10 interesting stories served every morning and every evening.




1 606 shares, 27 trendiness

Heathrow scraps 100ml liquid container limit

Passengers at Britain’s biggest airport, Heathrow, can leave liquids in containers of up to two litres in their bags while going through security, after it finally completed the rollout of new high-tech CT scanners. Electronics such as laptops can also be left in luggage, while clear plastic bags for liquids no longer have to be used.

Heathrow now says it is the biggest airport in the world to have the new equipment fully rolled out across all its terminals. But while it has become the largest airport to roll out the new scanners, it is far from the UK’s first: Gatwick, Edinburgh and Birmingham airports have upgraded to them in recent years and increased their limits to two litres.

At most UK airports, passengers can keep liquid containers of up to 100ml in their luggage, without having to remove them and use clear plastic bags. Bristol and Belfast airports have also raised their liquid limits to two litres. However, other airports that have the new scanners installed are waiting for the green light from the Department for Transport (DfT) to raise the limit from 100ml.

A recent report by consumer group Which? found that the sensitivity of the new scanners means that at some airports, more bag searches end up being carried out by hand after bags pass through them. Heathrow said the scanners, which provide better images of cabin bags, could “service thousands of passengers an hour with significantly greater efficiency, while maintaining high safety and security standards”.

The rule change only applies to flights leaving Heathrow, and passengers must check luggage restrictions at the airports they are returning from before boarding flights to the UK.

The rollout of the new scanners across the UK has suffered a series of setbacks over the past few years. Boris Johnson promised in 2019 that the rules about taking liquids through security in containers of no more than 100ml, inside plastic bags, would be scrapped by the end of 2022. The pandemic eventually put paid to that. In December 2022, the Conservative government promised state-of-the-art scanning equipment would be installed in security lanes by June 2024 in “the biggest shake-up of airport security rules in decades”.

Then-Transport Secretary Mark Harper said the dominance of the “tiny toiletry” was nearly over. But, as it turned out, the June 2024 deadline was not achievable for the biggest airports, although a number of smaller ones, with fewer lanes to upgrade, did install the scanners before that date.

Then, on the evening of Friday 13 June 2024, the government said those smaller airports that had already introduced the new scanners and dropped their 100ml liquids rules must reinstate them. This triggered anger among airport operators. The EU also announced a reversion to the 100ml rule in July that year. There has since been a period of inconsistency: last summer, the Transport Secretary was telling passengers to assume the 100ml rule still applied.

Heathrow chief executive Thomas Woldbye said the £1bn package of upgrades would mean passengers could “spend less time preparing for security and more time enjoying their journey”. Of the world’s 10 busiest airports, Heathrow is the only one to have scrapped the 100ml rule for liquid containers on international flights.

A DfT spokesperson said: “Heathrow is the latest UK airport to complete its rollout of next-generation security equipment for passengers, helping ensure security checks remain robust and can be completed smoothly.

“Airports are responsible for the installation and operation of security equipment. Passengers should continue to check security requirements with airports before they travel and come prepared with liquids in containers no larger than 100ml in hand baggage unless advised otherwise.”

The Advantage Travel Partnership, a network of travel agents, said airports setting their own timelines for lifting the 100ml cap had led to “confusion and frustration” and passengers had been “tripped up”. Chief executive Julia Lo Bue-Said said: “We would urge UK airports to work collectively with the government to ensure there is clear messaging around the rules to avoid confusion and delays where possible.”

...

Read the original on www.bbc.com »

2 570 shares, 64 trendiness

FBI is investigating Minnesota Signal groups tracking ICE, Patel says

FBI Director Kash Patel said Monday that he had opened an investigation into the Signal group text chats that Minnesota residents are using to share information about federal immigration agents’ movements, launching a new front in the Trump administration’s conflict there with potential free speech implications.

Patel said in an interview with conservative podcaster Benny Johnson that he wanted to know whether any Minnesota residents had put federal agents “in harm’s way” with activities such as sharing agents’ license plate numbers and locations.

“You cannot create a scenario that illegally entraps and puts law enforcement in harm’s way,” he said in the interview, which was posted to YouTube.

The investigation quickly drew skepticism from free speech advocates who said the First Amendment protects members of the public who share legally obtained information, such as the names of federal agents or where they are conducting enforcement operations.

“There are legitimate reasons to share such information, including enabling members of the public to observe and document law enforcement activity and to hold officials accountable for misconduct,” Aaron Terr, director of public advocacy at the Foundation for Individual Rights and Expression, said in an email.

“Given this administration’s poor track record of distinguishing protected speech from criminal conduct, any investigation like this deserves very close scrutiny,” he said.

For months, digital tools have been at the center of how people have pushed back against immigration enforcement efforts in Minnesota and across the country. The administration’s opponents have used group text chats to track Immigration and Customs Enforcement operations, share photos of suspected ICE vehicles and warn neighbors. In June, administration officials criticized ICEBlock, an app designed to share information about ICE sightings. Apple removed the app from its app store in October, prompting a lawsuit from the app’s developer alleging the administration unlawfully pressured Apple to remove it.

In the past few days, the group text chats — especially those on the encrypted messaging app Signal — have drawn attention from right-wing media. On Saturday, Cam Higby, a conservative journalist based near Seattle, said in a thread on X that he had “infiltrated” Signal groups from around Minneapolis that he alleged were obstructing law enforcement. His thread, which drew 20 million views, focused on how the groups share information such as the license plate numbers of suspected federal vehicles. NBC News has not verified Higby’s claims.

Patel said he got the idea for the investigation from Higby.

“As soon as Higby put that post out, I opened an investigation on it,” he said. “We immediately opened up that investigation, because that sort of Signal chat — being coordinated with individuals not just locally in Minnesota, but maybe even around the country — if that leads to a break in the federal statute or a violation of some law, then we are going to arrest people.”

The Signal Foundation, the nonprofit organization that operates the Signal app, did not immediately respond to a request for comment.

Signal, which is considered one of the most secure chat apps, is a go-to resource for people concerned about privacy. It is perhaps best known as the app Defense Secretary Pete Hegseth used to share sensitive military information last year in a group chat that accidentally included a journalist.

In the Twin Cities, Signal group chats have been a standard part of toolkits — along with walkie-talkies and whistles — used by activists, parents and neighborhood-watch members who have organized as volunteers to warn families about immigration enforcement activities by relaying real-time information, especially near schools. Patrol volunteers have said that, with more than 3,000 federal immigration agents in Minnesota, they are motivated by a desire to protect parents, children and school staff members who are not U.S. citizens.

Patel did not say which laws he thought Minnesota residents may have violated. An FBI spokesperson said the bureau had no further information to provide.

The announcement seemed likely to have implications for the First Amendment’s guarantee of free speech. Alex Abdo, litigation director at the Knight First Amendment Institute at Columbia University, said the First Amendment protects the right to record law enforcement officers as they carry out their official responsibilities.

“The ability of everyday citizens to hold government agents to account, by observing them and advocating for change, is what has distinguished the American experiment with democracy from authoritarian regimes around the world,” Abdo said in an email.

“Unless the FBI has evidence of a crime, and not just evidence of activity the Constitution protects, it should stand down,” he said.

Patel acknowledged in the interview with Johnson that an investigation into group text chats would raise free speech concerns and said the FBI would “balance” the rights guaranteed by the First and Second amendments with what he said were potential violations of federal law.

“Now, we will balance the First and Second amendment constantly, but we have to let the community know that we will not tolerate acts of violence and an escalation and a violation of the federal code,” he said. The Second Amendment could be at issue because Alex Pretti, the nurse shot and killed by a federal agent Saturday in Minneapolis, was permitted to carry a gun in public and had one with him.

Terr, of the Foundation for Individual Rights and Expression, said the government does not get to “balance” the First Amendment against its other interests.

“The Constitution takes precedence over any conflicting state or federal law, and over any official’s desire to suppress speech they dislike,” he said in his email.

He added: “There is a First Amendment exception for speech intended and likely to provoke imminent unlawful action, but that doesn’t apply to just any speech the government claims puts officials in harm’s way. By contrast, if individuals are threatening federal agents or conspiring to physically harm them, that is illegal. But conspiracy requires an agreement to commit a specific crime and a substantial step toward carrying it out.”

Patel also said the FBI had made “substantial progress” in an investigation into groups and people responsible for funding resistance to immigration enforcement. He alleged that the protests and neighborhood monitoring are not happening “organically” but did not immediately provide evidence.

...

Read the original on www.nbcnews.com »

3 477 shares, 35 trendiness

Jade (@JadedBlueEyes@tech.lgbt)

To use the Mastodon web application, please enable JavaScript. Alternatively, try one of the native apps for Mastodon for your platform.

...

Read the original on tech.lgbt »

4 464 shares, 21 trendiness

Kimi K2.5: Visual Agentic Intelligence

Today, we are introducing Kimi K2.5, the most powerful open-source model to date. Kimi K2.5 builds on Kimi K2 with continued pretraining over approximately 15T mixed visual and text tokens. Built as a native multimodal model, K2.5 delivers state-of-the-art coding and vision capabilities and a self-directed agent swarm paradigm.

For complex tasks, Kimi K2.5 can self-direct an agent swarm with up to 100 sub-agents, executing parallel workflows across up to 1,500 tool calls. Compared with a single-agent setup, this reduces execution time by up to 4.5x. The agent swarm is automatically created and orchestrated by Kimi K2.5 without any predefined subagents or workflow.

Kimi K2.5 is available via Kimi.com, the Kimi App, the API, and Kimi Code. Kimi.com and the Kimi App now support four modes: K2.5 Instant, K2.5 Thinking, K2.5 Agent, and K2.5 Agent Swarm (Beta). Agent Swarm is currently in beta on Kimi.com, with free credits available for high-tier paid users.

Across three agentic benchmarks — HLE, BrowseComp, and SWE-Verified — Kimi K2.5 delivers strong performance at a fraction of the cost.

Kimi K2.5 is the strongest open-source model to date for coding, with particularly strong capabilities in front-end development. K2.5 can turn simple conversations into complete front-end interfaces, implementing interactive layouts and rich animations such as scroll-triggered effects. Below are examples generated by K2.5 from a single prompt with an image-gen tool.

Beyond text prompts, K2.5 excels at coding with vision. By reasoning over images and video, K2.5 improves image/video-to-code generation and visual debugging, lowering the barrier for users to express intent visually. Here is an example of K2.5 reconstructing a website from video.

This capability stems from massive-scale vision-text joint pre-training.
At scale, the trade-off between vision and text capabilities disappears — they improve in unison. Below is an example of K2.5 reasoning over a puzzle and marking the shortest path using code.

K2.5 excels at real-world software engineering tasks. We evaluate it using Kimi Code Bench, our internal coding benchmark covering diverse end-to-end tasks — from building to debugging, refactoring, testing, and scripting — across multiple programming languages. On this benchmark, K2.5 shows consistent and meaningful improvements over K2 across task types.

To try out K2.5’s agentic coding capabilities, K2.5 Agent offers a set of preconfigured tools for immediate, hands-on experience. For software engineering use cases, we recommend pairing Kimi K2.5 with our new coding product, Kimi Code. Kimi Code works in your terminal and can be integrated with various IDEs including VSCode, Cursor, Zed, etc. Kimi Code is open-sourced and supports images and videos as inputs. It also automatically discovers and migrates existing skills and MCPs into your working environment in Kimi Code.

Here’s an example of using Kimi Code to translate the aesthetic of Matisse’s La Danse into the Kimi App. This demo highlights a breakthrough in autonomous visual debugging. Using visual inputs and documentation lookup, K2.5 visually inspects its own output and iterates on it autonomously, creating an art-inspired webpage end to end.

Scaling Out, Not Just Up.
We release K2.5 Agent Swarm as a research preview, marking a shift from single-agent scaling to self-directed, coordinated swarm-like execution. Trained with Parallel-Agent Reinforcement Learning (PARL), K2.5 learns to self-direct an agent swarm of up to 100 sub-agents, executing parallel workflows across up to 1,500 coordinated steps, without predefined roles or hand-crafted workflows.

PARL uses a trainable orchestrator agent to decompose tasks into parallelizable subtasks, each executed by dynamically instantiated, frozen subagents. Running these subtasks concurrently significantly reduces end-to-end latency compared to sequential agent execution.

Training a reliable parallel orchestrator is challenging due to delayed, sparse, and non-stationary feedback from independently running subagents. A common failure mode is serial collapse, where the orchestrator defaults to single-agent execution despite having parallel capacity. To address this, PARL employs staged reward shaping that encourages parallelism early in training and gradually shifts focus toward task success. We define the reward as the task reward plus an annealed auxiliary parallelism term, r = r_task + λ_t · r_aux, where the coefficient λ_t anneals toward zero over training. Early on, the auxiliary reward incentivizes subagent instantiation and concurrent execution, promoting exploration of the parallel scheduling space. As training progresses, optimization shifts toward end-to-end task quality r_task, preventing degenerate solutions where parallelism is enabled in name only.

To further force parallel strategies to emerge, we introduce a computational bottleneck that makes sequential execution impractical.
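The staged reward shaping described above can be sketched as follows; the linear anneal schedule and the function names are illustrative assumptions, not the published formulation.

```python
def shaped_reward(task_reward: float, parallelism_bonus: float,
                  step: int, total_steps: int) -> float:
    """Staged reward shaping: an auxiliary bonus for spawning and running
    subagents concurrently is annealed toward zero, so optimization shifts
    from exploring parallel schedules to end-to-end task success.
    The linear schedule here is an assumption for illustration."""
    anneal = max(0.0, 1.0 - step / total_steps)  # decays from 1 to 0
    return task_reward + anneal * parallelism_bonus

# Early in training the parallelism bonus dominates; by the end,
# only the task reward counts.
print(shaped_reward(0.0, 1.0, step=0, total_steps=1000))     # 1.0
print(shaped_reward(1.0, 1.0, step=1000, total_steps=1000))  # 1.0
```

Any schedule that decays the auxiliary coefficient would serve the same purpose: preventing the orchestrator from collecting the parallelism bonus "in name only" once it can earn real task reward.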
Instead of counting total steps, we evaluate performance using Critical Steps, a latency-oriented metric inspired by the critical path in parallel computation: S_critical = S_orch + Σ_stages max_i S_i, where S_orch captures orchestration overhead and max_i S_i reflects the slowest subagent at each stage. Under this metric, spawning more subtasks only helps if it shortens the critical path.

An agent swarm has an orchestrator that dynamically creates specialized subagents (e.g., AI Researcher, Physics Researcher, Fact Checker) and decomposes complex tasks into parallelizable subtasks for efficient distributed execution. In our parallel-agent reinforcement learning environment, the reward increases smoothly as training progresses. At the same time, the level of parallelism during training also gradually increases.

K2.5 Agent Swarm improves performance on complex tasks through parallel, specialized execution. In our internal evaluations, it leads to an 80% reduction in end-to-end runtime while enabling more complex, long-horizon workloads, as shown below. Agent Swarm reduces the minimum critical steps required to achieve target performance by 3×–4.5× compared to single-agent execution in wide-search scenarios, with savings scaling as targets rise, translating to up to 4.5× wall-clock time reduction via parallelization.

Here are representative trajectories demonstrating K2.5 Agent Swarm in action:

K2.5 Agent can handle high-density, large-scale office work end to end. It reasons over large, high-density inputs, coordinates multi-step tool use, and delivers expert-level outputs: documents, spreadsheets, PDFs, and slide decks, directly through conversation. With a focus on real-world professional tasks, we design two internal expert productivity benchmarks.
The AI Office Benchmark evaluates end-to-end Office output quality, while the General Agent Benchmark measures multi-step, production-grade workflows against human expert performance. Across both benchmarks, K2.5 shows 59.3% and 24.3% improvements over K2 Thinking, reflecting stronger end-to-end performance on real-world tasks.
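The Critical Steps metric from the swarm section can be computed as below; representing a plan as a list of stages, each a list of per-subagent step counts, is an assumption for illustration.

```python
def critical_steps(orchestrator_steps: int, stages: list[list[int]]) -> int:
    """Latency-oriented metric inspired by the critical path in parallel
    computation: orchestration overhead plus, for each stage, the step
    count of the slowest subagent running in that stage."""
    return orchestrator_steps + sum(max(stage) for stage in stages)

# Two parallel stages: [3, 7] costs 7 steps, [2, 2, 5] costs 5 steps.
print(critical_steps(4, [[3, 7], [2, 2, 5]]))  # 16
```

Under this metric, spawning an extra subagent is free unless it becomes the slowest member of its stage, which is exactly why it rewards genuine parallelism rather than step-count padding.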

K2.5 Agent supports advanced tasks such as adding annotations in Word, constructing financial models with Pivot Tables, and writing LaTeX equations in PDFs, while scaling to long-form outputs like 10,000-word papers or 100-page documents. Tasks that once took hours or days now complete in minutes. Here are some examples:

Grounded in advances in coding with vision, agent swarms, and office productivity, Kimi K2.5 represents a meaningful step toward AGI for the open-source community, demonstrating strong capability on real-world tasks under real-world constraints. Looking ahead, we will push further into the frontier of agentic intelligence, redefining the boundaries of AI in knowledge work.

To reproduce official Kimi K2.5 benchmark results, we recommend using the official API. For third-party providers, refer to Kimi Vendor Verifier (KVV) to choose high-accuracy services. Details: https://kimi.com/blog/kimi-vendor-verifier.html

We report results for Kimi K2.5 and DeepSeek-V3.2 with thinking mode enabled, Claude Opus 4.5 with extended thinking mode, GPT-5.2 with xhigh reasoning effort, and Gemini 3 Pro with a high thinking level. For vision benchmarks, we additionally report results for Qwen3-VL-235B-A22B-Thinking.

Unless otherwise specified, all Kimi K2.5 experiments were conducted with temperature = 1.0, top-p = 0.95, and a context length of 256k tokens. Benchmarks without publicly available scores were re-evaluated under the same conditions used for Kimi K2.5 and are marked with an asterisk (*). We could not evaluate GPT-5.2 xhigh on all benchmarks due to service stability issues.
For benchmarks that were not tested, we mark them as “-”.

HLE, AIME 2025, HMMT 2025 (Feb), GPQA-Diamond and IMO-AnswerBench were evaluated with a maximum completion budget of 96k tokens. Results for AIME and HMMT are averaged over 32 runs (avg@32); GPQA-Diamond over 8 runs (avg@8).

For HLE, we report scores on the full set (text & image). Kimi K2.5 scores 31.5 (text) and 21.3 (image) without tools, and 51.8 (text) and 39.8 (image) with tools. The DeepSeek-V3.2 score corresponds to its text-only subset (marked with †). Hugging Face access was blocked to prevent potential data leakage. HLE with tools uses simple context management: once the context exceeds a threshold, only the latest round of tool messages is retained.

Kimi K2.5 was equipped with search, code-interpreter, and web-browsing tools for HLE with tools and all agentic search benchmarks. Except for BrowseComp (where K2.5 and DeepSeek-V3.2 used the discard-all strategy), no context management was applied, and tasks exceeding the supported context length were directly counted as failed.

The test system prompts emphasize deep and proactive tool use, instructing models to reason carefully, leverage tools, and verify uncertain information.
Full prompts will be provided in the technical report.

Results for Seal-0 and WideSearch are averaged over four runs (avg@4). ZeroBench (w/ tools) uses max-tokens-per-step = 24k and max-steps = 30 for multi-step reasoning. MMMU-Pro follows the official protocol, preserving input order and prepending images. GPT-5.2-xhigh had a ~10% failure rate (no output despite 3 retries), treated as incorrect; reported scores likely underestimate true performance. OmniDocBench Score is computed as (1 − normalized Levenshtein distance) × 100, where a higher score denotes superior accuracy.

Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser. In our implementation, we evaluated Terminal-Bench 2.0 under non-thinking mode. This choice was made because our current context management strategy for thinking mode is incompatible with Terminus-2.

For the SWE-Bench series of evaluations (including Verified, Multilingual, and Pro), we used an internally developed evaluation framework. This framework includes a minimal set of tools — bash tool, createfile tool, insert tool, view tool, strreplace tool, and submit tool — along with tailored system prompts designed for the tasks. The highest scores were achieved under non-thinking mode.

The score of Claude Opus 4.5 on CyberGym is reported under the non-thinking setting. All reported scores of coding tasks are averaged over 5 independent runs.

...

Read the original on www.kimi.com »

5 389 shares, 1 trendiness

moltbot/clawdbot: Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

Clawdbot is a personal AI assistant you run on your own devices. It answers you on the channels you already use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, WebChat), plus extension channels like BlueBubbles, Matrix, Zalo, and Zalo Personal. It can speak and listen on macOS/iOS/Android, and can render a live Canvas you control. The Gateway is just the control plane — the product is the assistant.

If you want a personal, single-user assistant that feels local, fast, and always-on, this is it.

Preferred setup: run the onboarding wizard (clawdbot onboard). It walks through gateway, workspace, channels, and skills. The CLI wizard is the recommended path and works on macOS, Linux, and Windows (via WSL2; strongly recommended). Works with npm, pnpm, or bun. New install? Start here: Getting started

Model note: while any model is supported, I strongly recommend Anthropic Pro/Max (100/200) + Opus 4.5 for long-context strength and better prompt-injection resistance. See Onboarding.

npm install -g clawdbot@latest

# or: pnpm add -g clawdbot@latest

clawdbot onboard --install-daemon

The wizard installs the Gateway daemon (launchd/systemd user service) so it stays running.

clawdbot onboard --install-daemon

clawdbot gateway --port 18789 --verbose

# Send a message

clawdbot message send --to +1234567890 --message "Hello from Clawdbot"

# Talk to the assistant (optionally deliver back to any connected channel: WhatsApp/Telegram/Slack/Discord/Google Chat/Signal/iMessage/BlueBubbles/Microsoft Teams/Matrix/Zalo/Zalo Personal/WebChat)

clawdbot agent --message "Ship checklist" --thinking high

Prefer pnpm for builds from source. Bun is optional for running TypeScript directly.

git clone https://github.com/clawdbot/clawdbot.git

cd clawdbot

pnpm install

pnpm ui:build # auto-installs UI deps on first run

pnpm build

pnpm clawdbot onboard --install-daemon

# Dev loop (auto-reload on TS changes)

pnpm gateway:watch

Note: pnpm clawdbot … runs TypeScript directly (via tsx). pnpm build produces dist/ for running via Node / the packaged clawdbot binary.

* DM pairing (dmPolicy="pairing" / channels.discord.dm.policy="pairing" / channels.slack.dm.policy="pairing"): unknown senders receive a short pairing code and the bot does not process their message.

* Approve with: clawdbot pairing approve (then the sender is added to a local allowlist store).

* Public inbound DMs require an explicit opt-in: set dmPolicy="open" and include "*" in the channel allowlist (allowFrom / channels.discord.dm.allowFrom / channels.slack.dm.allowFrom).
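Taken together, the Discord variant of the pairing policy above can be sketched as a config fragment (key names follow the bullets; the user ID is hypothetical and the exact file layout in your install may differ):

```json5
channels: {
  discord: {
    dm: {
      policy: "pairing",       // unknown senders receive a pairing code
      allowFrom: ["1234abcd"]  // hypothetical approved sender; add "*" only with policy: "open"
    }
  }
}
```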

Clawdbot can auto-configure Tailscale Serve (tailnet-only) or Funnel (public) while the Gateway stays bound to loopback. Configure gateway.tailscale.mode:

* serve: tailnet-only HTTPS via tailscale serve (uses Tailscale identity headers by default).

* gateway.bind must stay loopback when Serve/Funnel is enabled (Clawdbot enforces this).

* Serve can be forced to require a password by setting gateway.auth.mode: "password" or gateway.auth.allowTailscale: false.

* Funnel refuses to start unless gateway.auth.mode: "password" is set.
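Combining those rules, a tailnet-only setup can be sketched as follows (field names come from the bullets above; the exact nesting is an assumption):

```json5
gateway: {
  bind: "127.0.0.1",            // must stay loopback while Serve/Funnel is enabled
  tailscale: { mode: "serve" }, // tailnet-only HTTPS via tailscale serve
  auth: { mode: "password" }    // required before Funnel will start; optional for Serve
}
```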

It’s perfectly fine to run the Gateway on a small Linux instance. Clients (macOS app, CLI, WebChat) can connect over Tailscale Serve/Funnel or SSH tunnels, and you can still pair device nodes (macOS/iOS/Android) to execute device-local actions when needed.

* Gateway host runs the exec tool and channel connections by default.

* Device nodes run device-local actions (system.run, camera, screen recording, notifications) via node.invoke.

In short: exec runs where the Gateway lives; device actions run where the device lives.

The macOS app can run in node mode and advertises its capabilities + permission map over the Gateway WebSocket (node.list / node.describe). Clients can then execute local actions via node.invoke:

* system.run runs a local command and returns stdout/stderr/exit code; set needsScreenRecording: true to require screen-recording permission (otherwise you’ll get PERMISSION_MISSING).

* system.notify posts a user notification and fails if notifications are denied.

* canvas.*, camera.*, screen.record, and location.get are also routed via node.invoke and follow TCC permission status.

* Use /elevated on|off to toggle per-session elevated access when enabled + allowlisted.

* Gateway persists the per-session toggle via sessions.patch (WS method) alongside thinkingLevel, verboseLevel, model, sendPolicy, and groupActivation.

* Use these to coordinate work across sessions without jumping between chat surfaces.

ClawdHub is a minimal skill registry. With ClawdHub enabled, the agent can search for skills automatically and pull in new ones as needed.

Send these in WhatsApp/Telegram/Slack/Google Chat/Microsoft Teams/WebChat (group commands are owner-only):

* /new or /reset — reset the session

The Gateway alone delivers a great experience. All apps are optional and add extra features.

If you plan to build/run companion apps, follow the platform runbooks below.

* Menu bar control for the Gateway and health.

Note: signed builds are required for macOS permissions to stick across rebuilds (see docs/mac/permissions.md).

* Pairs as a node via the Bridge.

* Pairs via the same Bridge + pairing flow as iOS.

agent: {
  model: "anthropic/claude-opus-4-5"
}

* Default: tools run on the host for the main session, so the agent has full access when it’s just you.

* Group/channel safety: set agents.defaults.sandbox.mode: "non-main" to run non-main sessions (groups/channels) inside per-session Docker sandboxes; bash then runs in Docker for those sessions.

* Allowlist who can talk to the assistant via channels.whatsapp.allowFrom.

* If channels.whatsapp.groups is set, it becomes a group allowlist; include "*" to allow all.

* Optional: set channels.telegram.groups (with channels.telegram.groups."*".requireMention); when set, it is a group allowlist (include "*" to allow all). Also channels.telegram.allowFrom or channels.telegram.webhookUrl as needed.

channels: {
  telegram: {
    botToken: "123456:ABCDEF"
  }
}

* Optional: set commands.native, commands.text, or commands.useAccessGroups, plus channels.discord.dm.allowFrom, channels.discord.guilds, or channels.discord.mediaMaxMb as needed.

channels: {
  discord: {
    token: "1234abcd"
  }
}

* macOS only; Messages must be signed in.

* If channels.imessage.groups is set, it becomes a group allowlist; include "*" to allow all.

* Allowlist who can talk via msteams.allowFrom; group access via msteams.groupAllowFrom or msteams.groupPolicy: "open".

browser: {
  enabled: true,
  color: "#FF4500"
}

Use these when you’re past the onboarding flow and want the deeper reference.

Clawdbot was built for Clawd, a space lobster AI assistant. 🦞 by Peter Steinberger and the community.

See CONTRIBUTING.md for guidelines, maintainers, and how to submit PRs. AI/vibe-coded PRs welcome! 🤖

Special thanks to Mario Zechner for his support and for pi-mono.

Thanks to all clawtributors:

...

Read the original on github.com »

6 389 shares, 1 trendiness

moltbot/moltbot: Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

Moltbot is a per­sonal AI as­sis­tant you run on your own de­vices. It an­swers you on the chan­nels you al­ready use (WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMes­sage, Microsoft Teams, WebChat), plus ex­ten­sion chan­nels like BlueBubbles, Matrix, Zalo, and Zalo Personal. It can speak and lis­ten on ma­cOS/​iOS/​An­droid, and can ren­der a live Canvas you con­trol. The Gateway is just the con­trol plane — the prod­uct is the as­sis­tant.

If you want a personal, single-user assistant that feels local, fast, and always-on, this is it.

Preferred setup: run the onboarding wizard (moltbot onboard). It walks through gateway, workspace, channels, and skills. The CLI wizard is the recommended path and works on macOS, Linux, and Windows (via WSL2; strongly recommended). Works with npm, pnpm, or bun. New install? Start here: Getting started

Model note: while any model is supported, I strongly recommend Anthropic Pro/Max (100/200) + Opus 4.5 for long-context strength and better prompt-injection resistance. See Onboarding.

npm install -g moltbot@latest

# or: pnpm add -g moltbot@latest

moltbot onboard --install-daemon

The wiz­ard in­stalls the Gateway dae­mon (launchd/systemd user ser­vice) so it stays run­ning. Legacy note: clawd­bot re­mains avail­able as a com­pat­i­bil­ity shim.

moltbot onboard --install-daemon

moltbot gateway --port 18789 --verbose

# Send a message

moltbot message send --to +1234567890 --message "Hello from Moltbot"

# Talk to the assistant (optionally deliver back to any connected channel: WhatsApp/Telegram/Slack/Discord/Google Chat/Signal/iMessage/BlueBubbles/Microsoft Teams/Matrix/Zalo/Zalo Personal/WebChat)

moltbot agent --message "Ship checklist" --thinking high

Prefer pnpm for builds from source. Bun is optional for running TypeScript directly.

git clone https://github.com/moltbot/moltbot.git

cd moltbot

pnpm install

pnpm ui:build # auto-installs UI deps on first run

pnpm build

pnpm moltbot onboard --install-daemon

# Dev loop (auto-reload on TS changes)

pnpm gateway:watch

Note: pnpm moltbot … runs TypeScript directly (via tsx). pnpm build produces dist/ for running via Node / the packaged moltbot binary.

* DM pairing (dmPolicy="pairing" / channels.discord.dm.policy="pairing" / channels.slack.dm.policy="pairing"): unknown senders receive a short pairing code and the bot does not process their message.

* Approve with: moltbot pairing approve (then the sender is added to a local allowlist store).

* Public inbound DMs require an explicit opt-in: set dmPolicy="open" and include "*" in the channel allowlist (allowFrom / channels.discord.dm.allowFrom / channels.slack.dm.allowFrom).
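Pulled together, the DM policy keys above might look like this in config. This is a hedged sketch: the key names come from the bullets above, but the exact nesting, file layout, and the list form of allowFrom are my assumptions.

```
channels: {
  discord: {
    dm: {
      policy: "pairing"    // unknown senders get a pairing code; approve with `moltbot pairing approve`
    }
  },
  slack: {
    dm: {
      policy: "open",      // explicit opt-in to public inbound DMs...
      allowFrom: ["*"]     // ...which also requires "*" in the channel allowlist
    }
  }
}
```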

Moltbot can auto-configure Tailscale Serve (tailnet-only) or Funnel (public) while the Gateway stays bound to loopback. Configure gateway.tailscale.mode:

* serve: tailnet-only HTTPS via tailscale serve (uses Tailscale identity headers by default).

* gateway.bind must stay loopback when Serve/Funnel is enabled (Moltbot enforces this).

* Serve can be forced to require a password by setting gateway.auth.mode: "password" or gateway.auth.allowTailscale: false.

* Funnel refuses to start unless gateway.auth.mode: "password" is set.
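Assembled from the dotted key paths above, a tailnet-only setup could look roughly like the following. Treat the nesting as an assumption inferred from those paths, not canonical config.

```
gateway: {
  bind: "127.0.0.1",        // must stay loopback while Serve/Funnel is enabled
  tailscale: {
    mode: "serve"           // tailnet-only HTTPS; Funnel (public) additionally requires password auth
  },
  auth: {
    mode: "password",       // mandatory for Funnel; optional hardening for Serve
    allowTailscale: false   // skip identity-header auth so the password is always required
  }
}
```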

It’s perfectly fine to run the Gateway on a small Linux instance. Clients (macOS app, CLI, WebChat) can connect over Tailscale Serve/Funnel or SSH tunnels, and you can still pair device nodes (macOS/iOS/Android) to execute device-local actions when needed.

* Gateway host runs the exec tool and channel connections by default.

* Device nodes run device-local actions (system.run, camera, screen recording, notifications) via node.invoke.

In short: exec runs where the Gateway lives; device actions run where the device lives.

The macOS app can run in node mode and advertises its capabilities + permission map over the Gateway WebSocket (node.list / node.describe). Clients can then execute local actions via node.invoke:

* system.run runs a local command and returns stdout/stderr/exit code; set needsScreenRecording: true to require screen-recording permission (otherwise you’ll get PERMISSION_MISSING).

* system.notify posts a user notification and fails if notifications are denied.

* canvas.*, camera.*, screen.record, and location.get are also routed via node.invoke and follow TCC permission status.

* Use /elevated on|off to toggle per-session elevated access when enabled + allowlisted.

* Gateway persists the per-session toggle via sessions.patch (WS method) alongside thinkingLevel, verboseLevel, model, sendPolicy, and groupActivation.

* Use these to coordinate work across sessions without jumping between chat surfaces.

ClawdHub is a minimal skill registry. With ClawdHub enabled, the agent can search for skills automatically and pull in new ones as needed.

Send these in WhatsApp/Telegram/Slack/Google Chat/Microsoft Teams/WebChat (group commands are owner-only):

* /new or /reset — reset the session

The Gateway alone delivers a great experience. All apps are optional and add extra features.

If you plan to build/run companion apps, follow the platform runbooks below.

* Menu bar control for the Gateway and health.

Note: signed builds are required for macOS permissions to stick across rebuilds (see docs/mac/permissions.md).

* Pairs as a node via the Bridge.

* Pairs via the same Bridge + pairing flow as iOS.

agent: {
  model: "anthropic/claude-opus-4-5"
}

* Default: tools run on the host for the main session, so the agent has full access when it’s just you.

* Group/channel safety: set agents.defaults.sandbox.mode: "non-main" to run non-main sessions (groups/channels) inside per-session Docker sandboxes; bash then runs in Docker for those sessions.
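As a sketch, with the nesting assumed from the dotted path agents.defaults.sandbox.mode, the group-safety setting would look like:

```
agents: {
  defaults: {
    sandbox: {
      mode: "non-main"   // main session keeps host access; groups/channels run in per-session Docker sandboxes
    }
  }
}
```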

* Allowlist who can talk to the assistant via channels.whatsapp.allowFrom.

* If channels.whatsapp.groups is set, it becomes a group allowlist; include "*" to allow all.

* Optional: set channels.telegram.groups (with channels.telegram.groups."*".requireMention); when set, it is a group allowlist (include "*" to allow all). Also channels.telegram.allowFrom or channels.telegram.webhookUrl as needed.

channels: {
  telegram: {
    botToken: "123456:ABCDEF"
  }
}

* Optional: set commands.native, commands.text, or commands.useAccessGroups, plus channels.discord.dm.allowFrom, channels.discord.guilds, or channels.discord.mediaMaxMb as needed.

channels: {
  discord: {
    token: "1234abcd"
  }
}

* macOS only; Messages must be signed in.

* If channels.imessage.groups is set, it becomes a group allowlist; include "*" to allow all.

* Allowlist who can talk via msteams.allowFrom; group access via msteams.groupAllowFrom or msteams.groupPolicy: "open".

browser: {
  enabled: true,
  color: "#FF4500"
}

Use these when you’re past the onboarding flow and want the deeper reference.

Moltbot was built for Molty, a space lobster AI assistant. 🦞 by Peter Steinberger and the community.

See CONTRIBUTING.md for guidelines, maintainers, and how to submit PRs. AI/vibe-coded PRs welcome! 🤖

Special thanks to Mario Zechner for his support and for pi-mono.

Thanks to all clawtributors:

...

Read the original on github.com »

7 360 shares, 31 trendiness

430,000-year-old well-preserved wooden tools are the oldest ever found

Researchers working in southern Greece have identified the oldest known handheld wooden tools, dated to about 430,000 years ago. The objects came from Marathousa 1, a site in the Megalopolis Basin in the central Peloponnese. The area once held a lakeshore during the Middle Pleistocene, a period between about 774,000 and 129,000 years ago.

Excavations at Marathousa 1 have produced stone flakes, animal bones with cut marks, and the remains of a straight-tusked elephant. Archaeologists link these finds to repeated visits by early humans who processed large carcasses near water. Waterlogged sediments at the site created low-oxygen conditions. Such conditions slowed decay and preserved pieces of wood that usually rot away over long spans of time.

Researchers examined dozens of wood fragments under microscopes. They studied surface marks, internal structure, and wood species. This work helped the team separate human modification from damage caused by roots, sediment pressure, or animals. Two fragments showed clear signs of shaping and use.

One piece comes from alder. The surface shows cut marks from stone tools and rounded areas formed through repeated contact with soil. The shape and wear fit use as a digging stick near the lakeshore. Such a tool would have helped with loosening wet ground or extracting plant foods. The second artifact, a very small fragment from willow or poplar, shows carved edges and smoothing from handling. The size points to a finger-held tool. Researchers link this piece to fine tasks, such as adjusting stone flakes during tool production.

A third alder fragment drew attention during sorting. Deep parallel grooves run across the surface, with crushed fibers along the edges. Microscopic study matched these marks to claw damage from a large carnivore, likely a bear. This evidence places large predators at the same location where humans butchered elephants. Both groups used the lakeshore and may have competed for access to carcasses.

Before this work, the oldest known handheld wooden tools came from sites in Africa, Europe, and Asia, all younger than 430,000 years. One older wooden structure from Kalambo Falls in Zambia dates to about 476,000 years ago, but researchers interpret that wood as part of a built feature rather than a handheld implement. The Marathousa finds push the record for shaped wooden tools back by at least 40,000 years and provide the first such evidence from southeastern Europe.

The tools show careful selection of local trees that grow in wet settings, including alder, willow, and poplar. Alongside stone and bone artifacts from the same layers, the wooden pieces show broad knowledge of natural materials and varied technical skill during the Middle Pleistocene.

...

Read the original on archaeologymag.com »

8 338 shares, 21 trendiness

TonyStr

...

Read the original on tonystr.net »

9 292 shares, 14 trendiness

Places to Telnet

The text-based internet can be exciting, informative, and fun. Using telnet, you can access a variety of these resources on the internet. Below you’ll find lists of a few places to get you started.

If you have an interesting item to add, just send an email to us:

Rainmaker was pretty great, and it lasted at least until 2018. I don’t recall what happened to it.

* nyancat.dakko.us

ANSI art animation of "poptart cat", with support for many different terminals (cool screenshots!)

The telnet server is offline, but the website is still up for this one!

Both are offline at the time of this update.

A large active listing of Dial-Up and Telnet accessible Bulletin Board Systems on the Internet:

http://telnetbbsguide.com

Jumpjet has a nice list of telnet locations organized by category:

http://www.jumpjet.info/Offbeat-Internet/Public/TelNet/url.htm

Mudconnect keeps a good list of muds and moos:

http://www.mudconnect.com/

Hytelnet is an old (and now unmaintained) directory:

http://www.lights.ca/hytelnet/

...

Read the original on telnet.org »

10 248 shares, 29 trendiness

Prakhar Gupta

Thinking about doing the thing is not doing the thing.

Dreaming about doing the thing is not doing the thing.

Visualizing success from doing the thing is not doing the thing.

Waiting to feel ready to do the thing is not doing the thing.

Talking about doing the thing is not doing the thing.

Explaining the thing to others is not doing the thing.

Arguing online about the thing is not doing the thing.

Announcing that you’ll start the thing is not doing the thing.

Listening to podcasts about doing the thing is not doing the thing.

Watching tutorials about doing the thing is not doing the thing.

Reading threads about how others did the thing is not doing the thing.

Planning the perfect system for the thing is not doing the thing.

Buying tools for the thing is not doing the thing.

Reorganizing your workspace for the thing is not doing the thing.

Feeling guilty about not doing the thing is not doing the thing.

Being "busy" instead of doing the thing is not doing the thing.

Telling yourself you’ll start tomorrow is not doing the thing.

Failing while doing the thing is doing the thing.

Doing it badly is doing the thing.

Doing it timidly is doing the thing.

Doing a small part of the thing is doing the thing.

Writing a blog about doing the thing is not doing the thing.

I should probably get back to work.

...

Read the original on www.softwaredesign.ing »
