10 interesting stories served every morning and every evening.




1 733 shares, 35 trendiness

Ghostty Docs

Ghostty is a fast, feature-rich, and cross-platform terminal emulator that uses platform-native UI and GPU acceleration.

Install Ghostty and run!

Zero configuration required to get up and running.

Ready-to-run binaries for macOS. Packages or build from source for Linux.

...

Read the original on ghostty.org »

2 546 shares, 25 trendiness

Ad-Supported AI Chat Demo — See Every Ad Type in Action

A satirical (but real!) demo of what AI chat could look like in an ad-supported future. Chat with an AI while experiencing every monetization pattern imaginable — banners, interstitials, sponsored responses, freemium gates, and more.

Sample ads shown in the demo:

* Join 2 million professionals who think faster, focus better, and accomplish more. AI-powered goal tracking, habit building, and memory enhancement. First 30 days FREE!
* Think 10x Faster with AI. First Month FREE! 🧠
* Fun fact: Just 10 minutes of daily meditation reduces stress by 35%. Start your free ZenFocus journey today!
* Your AI assistant, proudly powered by the finest advertising money can buy 💸
* ⚠️ Warning: This AI may spontaneously recommend products at any time
* 🏷️ This conversation is proudly powered by BrainBoost Pro™ • Ad-supported free tier • Remove ads
* Stressed by all these ads? 10 minutes of AI-guided meditation changes everything.
* AI-curated meal prep kits delivered weekly. $30 off your first box!
* 🎨 Today’s chat theme sponsored by BrainBoost Pro • Colors, fonts, and vibes curated by our advertising team

This tool is a satirical but fully functional demonstration of what AI chat assistants could look like if they were monetized through advertising — similar to how free apps, websites, and streaming services fund themselves today. As AI chat becomes mainstream, companies face a fundamental question: how do you make it free for users while covering the significant compute costs? Advertising is one obvious answer — and this demo shows every major ad pattern that could be applied to a chat interface.

We built this as an educational tool to help marketers, product managers, and developers understand the landscape of AI monetization, and to give users a glimpse of the future they might want to avoid (or embrace, depending on your perspective).

This demo covers the full spectrum of advertising patterns that could appear in an AI chat product.

This tool is educational and useful for a wide range of professionals thinking about the future of AI products.

Are the ads in this demo real? No — all brands and ads are completely fictional and created for this demo. BrainBoost Pro, QuickLearn Academy, ZenFocus, TaskMaster AI, ReadyMeal, and all other brands are made up. No actual advertising revenue is being generated.

Does this show what AI chat will actually look like? It shows one possible future. Some ad-supported AI products already exist and use several of these patterns. Others are speculative. The goal is to make these possibilities concrete and tangible so people can have informed conversations about what kind of AI future they want.

Is the AI actually working or is everything scripted? The AI is real — your messages are processed by a live language model and you get genuine responses. The ads are the scripted part. Some AI responses will include sponsored product mentions as part of the demonstration.

What happens to my chat data? Like all our free tools, conversations are logged to improve the service. We do not sell this data to advertisers — this is a demo, not an actual ad network.

How does the freemium gate work? After 5 free messages, you can either “watch an ad” (a simulated 5-second countdown) to unlock 5 more messages, or you can upgrade to our actual ad-free service. This mirrors how real freemium products work.

All of our tools are genuinely free — no ads, no paywalls, no sponsored responses. Just AI that works.

Build Your Own AI Chatbot — No Ads Required

Now that you’ve seen what ad-supported AI looks like, imagine giving your customers a clean, focused AI experience with zero interruptions. With 99helpers, you can deploy an AI chatbot trained on your content in minutes. No credit card required • Setup in minutes • No ads, ever

...

Read the original on 99helpers.com »

3 437 shares, 172 trendiness

Motorola's new partnership with GrapheneOS

Motorola, a Lenovo Company, announced the addition of new consumer and enterprise solutions to its portfolio today at Mobile World Congress. The company unveiled a partnership with the GrapheneOS Foundation to bring cutting-edge security to everyday users across the globe. In addition, Motorola introduced a new Moto Secure feature and Moto Analytics to expand Motorola’s B2B ecosystem with advanced security and deeper operational insights for organizations across industries. These announcements reinforce Motorola’s commitment to delivering intelligent and highly capable technology with enhanced security for customers worldwide.

GrapheneOS Foundation Partnership

Motorola is introducing a new era of smartphone security through a long-term partnership with the GrapheneOS Foundation, the leading nonprofit in advanced mobile security and creators of a hardened operating system based on the Android Open Source Project. Together, Motorola and the GrapheneOS Foundation will work to strengthen smartphone security and collaborate on future devices engineered with GrapheneOS compatibility.

“We are thrilled to be partnering with Motorola to bring GrapheneOS’s industry-leading privacy and security-focused mobile operating system to their next-generation smartphone,” said a spokesperson at GrapheneOS. “This collaboration marks a significant milestone in expanding the reach of GrapheneOS, and we applaud Motorola for taking this meaningful step towards advancing mobile security.”

By combining GrapheneOS’s pioneering engineering with Motorola’s decades of security expertise, real-world user insights, and Lenovo’s ThinkShield solutions, the collaboration will advance a new generation of privacy and security technologies. In the coming months, Motorola and the GrapheneOS Foundation will continue to collaborate on joint research, software enhancements, and new security capabilities, with more details and solutions to roll out as the partnership evolves.

Moto Analytics

Today, Motorola also introduced Moto Analytics, an enterprise-grade analytics platform designed to give IT administrators real-time visibility into device performance across their fleet. Unlike traditional EMM tools that focus primarily on access control, Moto Analytics provides deep operational insights, from app stability to battery health and connectivity performance.

With this data, IT teams can troubleshoot more efficiently, prevent issues before they escalate, and maintain employee productivity. As part of the ThinkShield ecosystem, Moto Analytics integrates seamlessly with existing enterprise environments and scales effortlessly as organizations grow.

Private Image Data

Motorola is also expanding its Moto Secure platform with a new feature, Private Image Data. This tool gives users greater control over the hidden data stored in their photos. When enabled, it automatically removes sensitive metadata from all new camera images on the device, helping protect details like location and device information. This protection runs quietly in the background, preserving the image itself while clearing some of the private data attached to it.

Private Image Data joins a growing set of protections within the Moto Secure app, Motorola’s central hub for essential privacy and security tools powered by ThinkShield. From managing app permissions to securing sensitive files and monitoring device integrity, Moto Secure brings key Android and Motorola safeguards together in one place, making it easier for users to understand and manage their device’s security.

Private Image Data will begin rolling out to Motorola signature devices in the coming months, with additional updates and refinements expected over time.

With the introduction of these new solutions, Motorola is expanding its enterprise portfolio with solutions built for today’s most demanding business environments. From advanced security to operational efficiency and intelligent device management, these innovations reflect Motorola’s commitment to empowering organizations with technology that is security-focused, reliable, and ready for the future.

Legal Disclaimers

Certain features, functionality, and product specifications may be network-dependent and subject to additional terms, conditions, and charges. All are subject to change without notice. MOTOROLA, the Stylized M Logo, MOTO, and the MOTO family of marks are trademarks of Motorola Trademark Holdings, LLC. LENOVO and THINKSHIELD are trademarks of Lenovo. Android is a trademark of Google, LLC. All other trademarks are the property of their respective owners. ©2026 Motorola Mobility LLC. All rights reserved.

...

Read the original on motorolanews.com »

4 383 shares, 15 trendiness

AI Made Writing Code Easier. It Made Engineering Harder.

Yes, writing code is easier than ever.

AI assistants autocomplete your functions. Agents scaffold entire features. You can describe what you want in plain English and watch working code appear in seconds. The barrier to producing code has never been lower.

And yet, the day-to-day life of software engineers has gotten more complex, more demanding, and more exhausting than it was two years ago.

This is not a contradiction. It is the reality of what happens when an industry adopts a powerful new tool without pausing to consider the second-order effects on the people using it.

If you are a software engineer reading this and feeling like your job quietly became harder while everyone around you celebrates how easy everything is now, you are not imagining things. The job changed. The expectations changed. And nobody sent a memo.

There is a phenomenon happening right now that most engineers feel but struggle to articulate. The expected output of a software engineer in 2026 is dramatically higher than it was in 2023. Not because anyone held a meeting and announced new targets. Not because your manager sat you down and explained the new rules. The baseline just moved.

It moved because AI tools made certain tasks faster. And when tasks become faster, the assumption follows immediately: you should be doing more. Not in the future. Now.

A February 2026 study published in Harvard Business Review tracked 200 employees at a U.S. tech company over eight months. The researchers found something that will sound familiar to anyone living through this shift. Workers did not use AI to finish earlier and go home. They used it to do more. They took on broader tasks, worked at a faster pace, and extended their hours, often without anyone asking them to. The researchers described a self-reinforcing cycle: AI accelerated certain tasks, which raised expectations for speed. Higher speed made workers more reliant on AI. Increased reliance widened the scope of what workers attempted. And a wider scope further expanded the quantity and density of work.

The numbers tell the rest of the story. Eighty-three percent of workers in the study said AI increased their workload. Burnout was reported by 62 percent of associates and 61 percent of entry-level workers. Among C-suite leaders? Just 38 percent. The people doing the actual work are carrying the intensity. The people setting the expectations are not feeling it the same way.

This gap matters enormously. If leadership believes AI is making everything easier while engineers are drowning in a new kind of complexity, the result is a slow erosion of trust, morale, and eventually talent.

A separate survey of over 600 engineering professionals found that nearly two-thirds of engineers experience burnout despite their organizations using AI in development. Forty-three percent said leadership was out of touch with team challenges. Over a third reported that productivity had actually decreased over the past year, even as their companies invested more in AI tooling.

The baseline moved. The expectations rose. And for many engineers, no one acknowledged that the job they signed up for had fundamentally changed.

Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.

Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.

Now they are being told to stop.

Not explicitly, of course. Nobody walks into a standup and says “stop writing code.” But the message is there, subtle and persistent. Use AI to write it faster. Let the agent handle the implementation. Focus on higher-level tasks. Your value is not in the code you write anymore, it is in how well you direct the systems that write it for you.

For early adopters, this feels exciting. It feels like evolution. For a significant portion of working engineers, it feels like being told that the thing they spent years mastering, the skill that defines their professional identity, is suddenly less important.

One engineer captured this shift perfectly in a widely shared essay, describing how AI transformed the engineering role from builder to reviewer. Every day felt like being a judge on an assembly line that never stops. You just keep stamping those pull requests. The production volume went up. The sense of craftsmanship went down.

This is not a minor adjustment. It is a fundamental shift in professional identity. Engineers who built their careers around deep technical skill are being asked to redefine what they do and who they are, essentially overnight, without any transition period, training, or acknowledgment that something significant was lost in the process.

Having led engineering teams for over two decades, I have seen technology shifts before. New frameworks, new languages, new methodologies. Engineers adapt. They always have. But this is different because it is not asking engineers to learn a new way of doing what they do. It is asking them to stop doing the thing that made them engineers in the first place and become something else entirely.

That is not an upgrade. That is a career identity crisis. And pretending it is not happening does not make it go away.

While engineers are being asked to write less code, they are simultaneously being asked to do more of everything else.

More product thinking. More architectural decision-making. More code review. More context switching. More planning. More testing oversight. More deployment awareness. More risk assessment.

The scope of what it means to be a “software engineer” expanded dramatically in the last two years, and it happened without a pause to catch up.

This is partly a direct consequence of AI acceleration. When code gets produced faster, the bottleneck shifts. It moves from implementation to everything surrounding implementation: requirements clarity, architecture decisions, integration testing, deployment strategy, monitoring, and maintenance. These were always part of the engineering lifecycle, but they were distributed across roles. Product managers handled requirements. QA handled testing. DevOps handled deployment. Senior architects handled system design.

Now, with AI collapsing the implementation phase, organizations are quietly redistributing those responsibilities to the engineers themselves. The Harvard Business Review study documented this exact pattern. Product managers began writing code. Engineers took on product work. Researchers started doing engineering tasks. Roles that once had clear boundaries blurred as workers used AI to handle jobs that previously sat outside their remit.

The industry is openly talking about this as a positive development. Engineers should be “T-shaped” or “full-stack” in a broader sense. Nearly 45 percent of engineering roles now expect proficiency across multiple domains. AI tools augment generalists more effectively, making it easier for one person to handle multiple components of a system.

On paper, this sounds empowering. In practice, it means that a mid-level backend engineer is now expected to understand product strategy, review AI-generated frontend code they did not write, think about deployment infrastructure, consider security implications of code they cannot fully trace, and maintain a big-picture architectural awareness that used to be someone else’s job.

That is not empowerment. That is scope creep without a corresponding increase in compensation, authority, or time.

From my experience building and scaling teams in fintech and high-traffic platforms, I can tell you that role expansion without clear boundaries always leads to the same outcome: people try to do everything, nothing gets done with the depth it requires, and burnout follows. The engineers who survive are the ones who learn to say no, to prioritize ruthlessly, and to push back when the scope of their role quietly doubles without anyone acknowledging it.

There is an irony at the center of the AI-assisted engineering workflow that nobody wants to talk about: reviewing AI-generated code is often harder than writing the code yourself.

When you write code, you carry the context of every decision in your head. You know why you chose this data structure, why you handled this edge case, why you structured the module this way. The code is an expression of your thinking, and reviewing it later is straightforward because the reasoning is already stored in your memory.

When AI writes code, you inherit the output without the reasoning. You see the code, but you do not see the decisions. You do not know what tradeoffs were made, what assumptions were baked in, what edge cases were considered or ignored. You are reviewing someone else’s work, except that someone is not a colleague you can ask questions. It is a statistical model that produces plausible-looking code without any understanding of your system’s specific constraints.

A survey by Harness found that 67 percent of developers reported spending more time debugging AI-generated code, and 68 percent spent more time reviewing it than they did with human-written code. This is not a failure of the tools. It is a structural property of the workflow. Code review without shared context is inherently more demanding than reviewing code you participated in creating.

Yet the expectation from management is that AI should be making everything faster. So engineers find themselves in a bind: they are producing more code than ever, but the quality assurance burden has increased, the context-per-line-of-code has decreased, and the cognitive load of maintaining a system they only partially built is growing with every sprint.

This is the supervision paradox. The faster AI generates code, the more human attention is required to ensure that code actually works in the context of a real system with real users and real business constraints. The production bottleneck did not disappear. It moved from writing to understanding, and understanding is harder to speed up.

What makes all of this especially difficult is the self-reinforcing nature of the cycle.

AI makes certain tasks faster. Faster tasks create the perception of more available capacity. More perceived capacity leads to more work being assigned. More work leads to more AI reliance. More AI reliance leads to more code that needs review, more context that needs to be maintained, more systems that need to be understood, and more cognitive load on engineers who are already stretched thin.

The Harvard Business Review researchers described this as “workload creep.” Workers did not consciously decide to work harder. The expansion happened naturally, almost invisibly. Each individual step felt reasonable. In aggregate, it produced an unsustainable pace.

Before AI, there was a natural ceiling on how much you could produce in a day. That ceiling was set by thinking speed, typing speed, and the time it takes to look things up. It was frustrating sometimes, but it was also a governor. A natural speed limit that prevented you from outrunning your own ability to maintain quality.

AI removed the governor. Now the only limit is your cognitive endurance. And most people do not know their cognitive limits until they have already blown past them.

This is where many engineers find themselves right now. Shipping more code than any quarter in their career. Feeling more drained than any quarter in their career. The two facts are not unrelated.

The trap is that it looks like productivity from the outside. Metrics go up. Velocity charts look great. More features shipped. More pull requests merged. But underneath the numbers, quality is quietly eroding, technical debt is accumulating faster than it can be addressed, and the people doing the work are running on fumes.

If the picture is difficult for experienced engineers, it is even harder for those starting their careers.

Junior engineers have traditionally learned by doing the simpler, more task-oriented work. Fixing small bugs. Writing straightforward features. Implementing well-defined tickets. This hands-on work built the foundational understanding that eventually allowed them to take on more complex challenges.

AI is rapidly consuming that training ground. If an agent can handle the routine API hookup, the boilerplate module, the straightforward CRUD endpoint, what is left for a junior engineer to learn from? The expectation is shifting toward needing to contribute at a higher level almost from day one, without the gradual ramp-up that previous generations of engineers relied on.

Entry-level hiring at the 15 largest tech firms fell 25 percent from 2023 to 2024. The HackerRank 2025 Developer Skills Report confirmed that expectations are rising faster than productivity gains, and that early-career hiring remains sluggish compared to senior-level roles. Companies are prioritizing experienced talent, but the pipeline that produces experienced talent is being quietly dismantled.

This is a problem that extends beyond individual career concerns. If junior engineers do not get the opportunity to build foundational skills through hands-on work, the industry will eventually face a shortage of senior engineers who truly understand the systems they oversee. You cannot supervise what you never learned to build.

As I have written before, code is for humans to read. If the next generation of engineers never develops the fluency to read, understand, and reason about code at a deep level, no amount of AI tooling will compensate for that gap.

If you lead engineering teams, the most important thing you can do right now is acknowledge that this transition is genuinely difficult. Not theoretically. Not abstractly. For the actual people on your team.

The career they signed up for changed fast. The skills they were hired for are being repositioned. The expectations they are working under shifted without a clear announcement. Acknowledging this reality is not a sign of weakness. It is a prerequisite for maintaining a team that trusts you.

Start with empathy, but do not stop there.

Give your team real training. Not a lunch-and-learn about prompt engineering. Real investment in the skills that the new engineering landscape actually requires: system design, architectural thinking, product reasoning, security awareness, and the ability to critically evaluate code they did not write. These are not trivial skills. They take time to develop, and your team needs structured support to build them.

Give them space to experiment without the pressure of immediate productivity gains. The engineers who will thrive in this environment are the ones who have room to figure out how AI fits into their workflow without being penalized for the learning curve. Every experienced technologist I know who has successfully integrated AI tools went through an adjustment period where they were less productive before they became more productive. That adjustment period is normal, and it needs to be protected.

Set explicit boundaries around role scope. If you are asking engineers to take on product thinking, planning, and risk assessment in addition to their technical work, name it. Define it. Compensate for it. Do not let it happen silently and then wonder why your team is burned out.

Rethink your metrics. If your engineering success metrics are still centered on velocity, tickets closed, and lines of code, you are measuring the wrong things in an AI-assisted world. System stability, code quality, decision quality, customer outcomes, and team health are better indicators of whether your engineering organization is actually producing value or just producing volume.

Protect the junior pipeline. If you have stopped hiring junior engineers because AI can handle entry-level tasks, you are solving a short-term efficiency problem by creating a long-term talent crisis. The senior engineers you rely on today were junior engineers who learned by doing the work that AI is now consuming. That path still matters.

And finally, keep challenging your team. I have never met a good engineer who did not love a good challenge. The engineers on your team are not fragile. They are capable, intelligent people who signed up for hard problems. They can handle this transition. Just make sure they are set up to meet it.

If you are an engineer navigating this shift, here is what I would tell you based on two decades of watching technology cycles reshape this profession.

First, do not abandon your fundamentals. The pressure to become an “AI-first” engineer is real, but the engineers who will be most valuable in five years are the ones who deeply understand the systems they work on. AI is a tool. Understanding architecture, debugging complex systems, reasoning about performance and security: these skills are not becoming less important. They are becoming more important because someone needs to be the adult in the room when AI-generated code breaks in production at 2 AM.

Second, learn to set boundaries with the acceleration trap. Just because you can produce more does not mean you should. Sustainable pace matters. The engineers who burn out trying to match the theoretical maximum output AI makes possible are not the ones who build lasting careers. The ones who learn to work with AI deliberately, choosing when to use it and when to think independently, are the ones who will still be thriving in this profession a decade from now.

Third, embrace the parts of the expanded role that genuinely interest you. If the engineering role now includes more product thinking, more architectural decision-making, more cross-functional communication, treat that as an opportunity rather than an imposition. These are skills that senior engineers and technical leaders need. You are being given access to a broader set of capabilities earlier in your career than any previous generation of engineers. That is not a burden. It is a head start.

Fourth, talk about what you are experiencing. The isolation of feeling like you are the only one struggling with this transition is one of the most damaging aspects of the current moment. You are not the only one. The data confirms it. Two-thirds of engineers report burnout. The expectation gap between leadership and engineering teams is well documented. Talking openly about these challenges, with your team, with your manager, with your broader network, is not complaining. It is professional honesty.

And fifth, remember that this profession has survived every prediction of its demise. COBOL was supposed to eliminate programmers. Expert systems were supposed to replace them. Fourth-generation languages, CASE tools, visual programming, no-code platforms, outsourcing. Every decade brings a new technology that promises to make software engineers obsolete, and every decade the demand for skilled engineers grows. AI will not be different. The tools change. The fundamentals endure.

AI made writing code easier and made being an engineer harder. Both of these things are true at the same time, and pretending that only the first one matters is how organizations lose their best people.

The engineers who are struggling right now are not struggling because they are bad at their jobs. They are struggling because their jobs changed underneath them while the industry celebrated the part that got easier and ignored the parts that got harder.

Expectations rose without announcement. Roles expanded without boundaries. Output demands increased without corresponding increases in support, training, or acknowledgment. And the engineers who raised concerns were told, implicitly or explicitly, that they just needed to adapt faster.

That is not how you build a sustainable engineering culture. That is how you build a burnout machine.

The industry needs to name this paradox honestly. AI is an incredible tool. It is also placing enormous new demands on the people using it. Both things can be true. Both things need to be addressed.

The organizations that get this right, that invest in their people alongside their tools, that acknowledge the human cost of rapid technological change while still pushing forward, those are the organizations that will attract and retain the best engineering talent in the years ahead.

The ones that do not will discover something that every technology cycle eventually teaches: tools do not build products. People do. And people have limits that no amount of AI can automate away.

If this resonated with you, I would love to hear your perspective. What has changed most about your engineering role in the last year? Drop me a message or connect with me on LinkedIn. I write regularly about the intersection of AI, software engineering, and leadership at ivanturkovic.com. Follow along if you want honest, experience-driven perspectives on how technology is actually changing this profession.

...

Read the original on www.ivanturkovic.com »

5 376 shares, 21 trendiness

MCP is dead. Long live the CLI

I’m going to make a bold claim: MCP is already dying. We may not fully realize it yet, but the signs are there. OpenClaw doesn’t support it. Pi doesn’t support it. And for good reason.

When Anthropic announced the Model Context Protocol, the industry collectively lost its mind. Every company scrambled to ship MCP servers as proof they were “AI first.” Massive resources poured into new endpoints, new wire formats, new authorization schemes, all so LLMs could talk to services they could already talk to.

I’ll admit, I never fully understood the need for it. You know what LLMs are really good at? Figuring things out on their own. Give them a CLI and some docs and they’re off to the races.

I tried to avoid writing this for a long time, but I’m convinced MCP provides no real-world benefit, and that we’d be better off without it. Let me explain.

LLMs are really good at using command-line tools. They’ve been trained on millions of man pages, Stack Overflow answers, and GitHub repos full of shell scripts. When I tell Claude to use gh pr view 123, it just works.

MCP promised a cleaner interface, but in practice I found myself writing the same documentation anyway: what each tool does, what parameters it accepts, and more importantly, when to use it. The LLM didn’t need a new protocol.

When Claude does something unexpected with Jira, I can run the same jira issue view command and see exactly what it saw. Same input, same output, no mystery.

With MCP, the tool only exists inside the LLM conversation. Something goes wrong and now I’m spelunking through JSON transport logs instead of just running the command myself. Debugging shouldn’t require a protocol decoder.

This is where the gap gets wide. CLIs compose. I can pipe through jq, chain with grep, redirect to files. This isn’t just convenient; it’s often the only practical approach.

With MCP, your options are dumping the entire plan into the context window (expensive, often impossible) or building custom filtering into the MCP server itself. Either way, you’re doing more work for a worse result. The CLI approach uses tools that already exist, are well-documented, and that both humans and agents understand.

MCP is unnecessarily opinionated about auth. Why should a protocol for giving an LLM tools to use need to concern itself with authentication?

CLI tools don’t care. aws uses profiles and SSO. gh uses gh auth login. kubectl uses kubeconfig. These are battle-tested auth flows that work the same whether I’m at the keyboard or Claude is driving. When auth breaks, I fix it the way I always would: aws sso login, gh auth refresh. No MCP-specific troubleshooting required.

Local MCP servers are processes. They need to start up, stay running, and not silently hang. In Claude Code, they’re spawned as child processes, which works until it doesn’t.

CLI tools are just binaries on disk. No background processes, no state to manage, no initialization dance. They’re there when you need them and invisible when you don’t.

Beyond the design philosophy, MCP has real day-to-day friction:

Initialization is flaky. I’ve lost count of the times I’ve restarted Claude Code because an MCP server didn’t come up. Sometimes it works on retry, sometimes I’m clearing state and starting over.

Re-auth never ends. Using multiple MCP tools? Have fun authenticating each one. CLIs with SSO or long-lived credentials just don’t have this problem. Auth once and you’re done.

Permissions are all-or-nothing. Claude Code lets you allowlist MCP tools by name, but that’s it. You can’t scope to read-only operations or restrict parameters. With CLIs, I can allowlist gh pr view but require approval for gh pr merge. That granularity matters.

I’m not saying MCP is completely useless. If a tool genuinely has no CLI equivalent, MCP might be the right call. I still use plenty in my day-to-day, when it’s the only option available.

I might even argue there’s some value in having a standardized interface, and that there are probably use cases where it makes more sense than a CLI.

But for the vast majority of work, the CLI is simpler, faster to debug, and more reliable.

The best tools are the ones that work for both humans and machines. CLIs have had decades of design iteration. They’re composable, debuggable, and they piggyback on auth systems that already exist.

MCP tried to build a better abstraction. Turns out we already had a pretty good one.

If you’re a company investing in an MCP server but you don’t have an official CLI, stop and rethink what you’re doing. Ship a good API, then ship a good CLI. The agents will figure it out.

...

Read the original on ejholmes.github.io »

6 293 shares, 16 trendiness

New iron nanomaterial wipes out cancer cells without harming healthy tissue

Researchers at Oregon State University have created a new nanomaterial designed to destroy cancer cells from the inside. The material activates two separate chemical reactions once inside a tumor cell, overwhelming it with oxidative stress while leaving surrounding healthy tissue unharmed.

The work, led by Oleh Taratula, Olena Taratula, and Chao Wang from the OSU College of Pharmacy, was published in Advanced Functional Materials.

The discovery strengthens the growing field of chemodynamic therapy, or CDT. This emerging cancer treatment strategy takes advantage of the unique chemical conditions found inside tumors. Compared with normal tissue, cancer cells tend to be more acidic and contain higher levels of hydrogen peroxide.

Traditional CDT uses these tumor conditions to spark the formation of hydroxyl radicals, highly reactive molecules made of oxygen and hydrogen that contain an unpaired electron. These reactive oxygen species damage cells through oxidation, stripping electrons from essential components such as lipids, proteins, and DNA.

More recent CDT approaches have also succeeded in generating singlet oxygen inside tumors. Singlet oxygen is another reactive oxygen species, named for its single electron spin state rather than the three spin states seen in the more stable oxygen molecules present in the air.

“However, existing CDT agents are limited,” Oleh Taratula said. “They efficiently generate either radical hydroxyls or singlet oxygen but not both, and they often lack sufficient catalytic activity to sustain robust reactive oxygen species production. Consequently, preclinical studies often only show partial tumor regression and not a durable therapeutic benefit.”

To address these shortcomings, the team developed a new CDT nanoagent built from an iron-based metal-organic framework, or MOF. This structure is capable of producing both hydroxyl radicals and singlet oxygen, increasing its cancer-fighting potential. The MOF demonstrated strong toxicity across multiple cancer cell lines while causing minimal harm to noncancerous cells.

“When we systemically administered our nanoagent in mice bearing human breast cancer cells, it efficiently accumulated in tumors, robustly generated reactive oxygen species and completely eradicated the cancer without adverse effects,” Olena Taratula said. “We saw total tumor regression and long-term prevention of recurrence, all without seeing any systemic toxicity.”

In these preclinical experiments, tumors disappeared entirely and did not return, and the animals showed no signs of harmful side effects.

Before moving into human trials, the researchers plan to test the treatment in additional cancer types, including aggressive pancreatic cancer, to determine whether the approach can be effective across a wide range of tumors.

Other contributors to the study included Oregon State researchers Kongbrailatpam Shitaljit Sharma, Yoon Tae Goo, Vladislav Grigoriev, Constanze Raitmayr, Ana Paula Mesquita Souza, and Manali Parag Phawde. Funding was provided by the National Cancer Institute of the National Institutes of Health and the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

...

Read the original on www.sciencedaily.com »

7 283 shares, 25 trendiness

WebMCP is available for early preview

As the agentic web evolves, we want to help websites play an active role in how AI agents interact with them. WebMCP aims to provide a standard way for exposing structured tools, ensuring AI agents can perform actions on your site with increased speed, reliability, and precision.

By defining these tools, you tell agents how and where to interact with your site, whether it’s booking a flight, filing a support ticket, or navigating complex data. This direct communication channel eliminates ambiguity and allows for faster, more robust agent workflows.

WebMCP proposes two new APIs that allow browser agents to take action on behalf of the user:

Declarative API: Perform standard actions that can be defined directly in HTML forms.

These APIs serve as a bridge, making your website “agent-ready” and enabling more reliable and performant agent workflows compared to raw DOM actuation.

Imagine an agent that can handle complex tasks for your users with confidence and speed.

Customer support: Help users create detailed customer support tickets, by enabling agents to fill in all of the necessary technical details automatically.

Ecommerce: Users can better shop your products when agents can easily find what they’re looking for, configure particular shopping options, and navigate checkout flows with precision.

Travel: Users could more easily get the exact flights they want, by allowing the agent to search, filter results, and handle bookings using structured data to ensure accurate results every time.

WebMCP is available for prototyping to early preview program participants.

Sign up for the early preview program to gain access to the documentation and demos, stay up-to-date with the latest changes, and discover new APIs.

...

Read the original on developer.chrome.com »

8 271 shares, 19 trendiness

MicroGPT explained interactively

Trying my best to visualize it. I’m a n00b at machine learning though.

Andrej Karpathy wrote a 200-line Python script that trains and runs a GPT from scratch, with no libraries or dependencies, just pure Python. The script contains the algorithm that powers LLMs like ChatGPT.

Let’s walk through it piece by piece and watch each part work. Andrej did a walkthrough on his blog, but here I take a more visual approach, tailored for beginners.

The model trains on 32,000 human names, one per line: emma, olivia, ava, isabella, sophia… Each name is a document. The model’s job is to learn the statistical patterns in these names and generate plausible new ones that sound like they could be real.

By the end of training, the model produces names like “kamon”, “karai”, “anna”, and “anton”. The model has learned which characters tend to follow which, which sounds are common at the start vs. the end, and how long a typical name runs. From ChatGPT’s perspective, your conversation is just a document. When you type a prompt, the model’s response is a statistical document completion.

Neural networks work with numbers, not characters. So we need a way to convert text into a sequence of integers and back. The simplest possible tokenizer assigns one integer to each unique character in the dataset. The 26 lowercase letters get ids 0 through 25, and we add one special token called BOS (Beginning of Sequence) with id 26 that marks where a name starts and ends.

Type a name below and watch it get tokenized. Each character maps to its integer id, and BOS tokens wrap both ends:
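
A minimal sketch of such a tokenizer in plain Python (the function and variable names here are mine, not necessarily the script’s):

# Character-level tokenizer: 'a'..'z' -> 0..25, plus BOS = 26
chars = "abcdefghijklmnopqrstuvwxyz"
stoi = {ch: i for i, ch in enumerate(chars)}   # char -> id
itos = {i: ch for ch, i in stoi.items()}       # id -> char
BOS = 26  # special token marking where a name starts and ends

def encode(name):
    # wrap the name in BOS tokens: "emma" -> [26, 4, 12, 12, 0, 26]
    return [BOS] + [stoi[ch] for ch in name] + [BOS]

def decode(tokens):
    return "".join(itos[t] for t in tokens if t != BOS)

print(encode("emma"))           # [26, 4, 12, 12, 0, 26]
print(decode(encode("emma")))   # emma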

Here’s the core task: given the tokens we’ve seen so far, predict what comes next. We slide through the sequence one position at a time. At position 0, the model sees only BOS and must predict the first letter. At position 1, it sees BOS and the first letter and must predict the second letter. And so on.

Step through the sequence below and watch the context grow while the target shifts forward:

Each step produces one training example: the context on the left is the input, the green token on the right is what the model should predict. For the name “emma”, that’s five input-target pairs. This sliding window is how all language models train, including ChatGPT.
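
A sketch of how those pairs can be built from a tokenized name (the helper name is mine):

def training_pairs(tokens):
    # one (context, target) pair per position in the sequence
    pairs = []
    for i in range(len(tokens) - 1):
        pairs.append((tokens[: i + 1], tokens[i + 1]))
    return pairs

# For "emma" -> [26, 4, 12, 12, 0, 26] this yields five pairs:
# ([26], 4), ([26, 4], 12), ([26, 4, 12], 12),
# ([26, 4, 12, 12], 0), ([26, 4, 12, 12, 0], 26)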

At each position, the model outputs 27 raw numbers, one per possible next token. These numbers (called logits) can be anything: positive, negative, large, small. We need to convert them into probabilities that are positive and sum to 1. Softmax does this by exponentiating each score and dividing by the total.

Adjust the logits below and watch the probability distribution change. Notice how one large logit dominates, and the exponential amplifies differences.

Here’s the actual softmax code from microgpt. Step through it to see the intermediate values at each line:
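
In essence it is just a few lines. A sketch of numerically stable softmax over plain floats (the script runs the same math on its autograd Value objects):

import math

def softmax(logits):
    # subtract the max before exponentiating for numerical stability;
    # this does not change the resulting probabilities
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs, sum(probs))   # positive numbers that sum to 1.0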

How wrong was the prediction? We need a single number that captures “the model thought the correct answer was unlikely.” If the model assigns probability 0.9 to the correct next token, the loss is low (0.1). If it assigns probability 0.01, the loss is high (4.6). The formula is loss = −log(p), where p is the probability the model assigned to the correct token. This is called cross-entropy loss.

Drag the slider to adjust the probability of the correct token and watch the loss change:

The curve has two properties that make it useful. First, it’s zero when the model is perfectly confident in the right answer (p = 1). Second, it goes to infinity as the model assigns near-zero probability to the truth (p → 0), which punishes confident wrong answers severely. Training minimizes this number.
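
As a sketch, with the two examples from above:

import math

def cross_entropy(probs, target):
    # negative log-probability assigned to the correct token
    return -math.log(probs[target])

print(cross_entropy([0.9, 0.1], 0))    # ~0.105: confident and right, low loss
print(cross_entropy([0.01, 0.99], 0))  # ~4.605: confident and wrong, high loss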

To improve, the model needs to answer: “for each of my 4,192 parameters, if I nudge it up by a tiny amount, does the loss go up or down, and by how much?” Backpropagation computes this by walking the computation backward, applying the chain rule at each step.

Every mathematical operation (add, multiply, exp, log) is a node in a graph. Each node remembers its inputs and knows its local derivative. The backward pass starts at the loss (where the gradient is trivially 1.0) and multiplies local derivatives along every path back to the inputs.

Step through the forward pass, then the backward pass, for a small example:

Now step through the actual Value class code. Watch how each operation records its children and local gradients, then how backward() walks the graph in reverse, accumulating gradients:
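
A pared-down sketch of such a Value class, in the spirit of the script (only add and multiply shown; the real class also supports exp, log, and more):

class Value:
    def __init__(self, data, children=()):
        self.data = data          # the scalar this node holds
        self.grad = 0.0           # d(loss)/d(this node), filled in by backward()
        self._children = children
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad     # d(a+b)/da = 1
            other.grad += out.grad    # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topologically order the graph, then apply the chain rule in reverse
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    visit(c)
                order.append(v)
        visit(self)
        self.grad = 1.0   # d(loss)/d(loss) = 1
        for v in reversed(order):
            v._backward()

a, b = Value(2.0), Value(3.0)
loss = a * b + a
loss.backward()
print(a.grad, b.grad)   # 4.0 (= b + 1) and 2.0 (= a)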

We know how to measure error and how to trace that error back to every parameter. Now let’s build the model itself, starting with how it represents tokens.

A raw token id like 4 is just an index. The model can’t do math with a bare integer. So each token looks up a learned vector (a list of 16 numbers) from an embedding table. Think of it as each token having a 16-dimensional “personality” that the model can adjust during training.

Position matters too. The letter “a” at position 0 plays a different role than “a” at position 4. So there’s a second embedding table indexed by position. The token embedding and position embedding are added together to form the input to the rest of the network.

Click a token below to see its embedding vectors and how they combine:
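
A sketch of the two lookups (the 16-wide embeddings match the text; the maximum sequence length is my assumption):

import random

n_embd = 16        # embedding width, per the text
vocab_size = 27    # a..z plus BOS
max_len = 16       # assumed maximum sequence length

def rand_vec(n):
    return [random.gauss(0, 0.1) for _ in range(n)]

# two learned tables: one row per token id, one row per position
tok_emb = [rand_vec(n_embd) for _ in range(vocab_size)]
pos_emb = [rand_vec(n_embd) for _ in range(max_len)]

def embed(token_id, position):
    # the network input is the elementwise sum of the two vectors
    return [t + p for t, p in zip(tok_emb[token_id], pos_emb[position])]

x = embed(4, 0)   # the letter "e" at position 0 -> a 16-number vector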

This is how attention works. At each position, the model needs to gather information from previous positions. It does this through self-attention: each token produces three vectors from its embedding.

A Query (“what am I looking for?”), a Key (“what do I contain?”), and a Value (“what information do I offer if selected?”). The query at the current position is compared against all keys from previous positions via dot products. High dot product means high relevance. Softmax converts these scores into attention weights, and the weighted sum of values is the output.

Explore the attention weights below. Each cell shows how much one position attends to another. Switch between the four attention heads to see different patterns:
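
A sketch of a single causal attention head over plain Python lists (the script derives the queries, keys, and values from the embeddings with learned linear layers, omitted here):

import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def attention_head(queries, keys, values):
    # queries, keys, values: one vector (list of floats) per position
    head_dim = len(queries[0])
    outputs = []
    for i, q in enumerate(queries):
        # causal masking: position i may only attend to positions 0..i
        scores = [sum(qa * ka for qa, ka in zip(q, keys[j])) / math.sqrt(head_dim)
                  for j in range(i + 1)]
        weights = softmax(scores)   # how much position i attends to each j
        out = [sum(w * values[j][d] for j, w in enumerate(weights))
               for d in range(head_dim)]
        outputs.append(out)
    return outputs

With four such heads, each head can learn its own pattern of which earlier positions to look at, and their outputs are combined.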

The model pipes each token through: embed, normalize, attend, add residual, normalize, MLP, add residual, project to output logits. The MLP (multilayer perceptron) is a two-layer feed-forward network: project up to 64 dimensions, apply ReLU (zero out negatives), project back to 16. If attention is how tokens communicate, the MLP is where each position thinks independently.

Step through the pipeline for one token and watch data flow through each stage:

Here’s the actual gpt() function from microgpt. Step through to see the code executing line by line, with the intermediate vector at each stage:
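
A self-contained sketch of that pipeline with random, untrained weights and a single attention head (the real script uses four heads and learned parameters; the layout and names here are mine):

import math, random
random.seed(0)

n_embd, n_hidden, vocab_size = 16, 64, 27

def rand_vec(n): return [random.gauss(0, 0.08) for _ in range(n)]
def rand_mat(rows, cols): return [rand_vec(cols) for _ in range(rows)]
def matvec(W, x): return [sum(w * xi for w, xi in zip(row, x)) for row in W]
def add(a, b): return [x + y for x, y in zip(a, b)]

def normalize(x):
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    return [(xi - mean) / math.sqrt(var + 1e-5) for xi in x]

# random (untrained) parameters
tok_emb, pos_emb = rand_mat(vocab_size, n_embd), rand_mat(16, n_embd)
Wq, Wk, Wv = rand_mat(n_embd, n_embd), rand_mat(n_embd, n_embd), rand_mat(n_embd, n_embd)
W_up, W_down = rand_mat(n_hidden, n_embd), rand_mat(n_embd, n_hidden)
W_out = rand_mat(vocab_size, n_embd)

def forward(tokens):
    # embed every position: token vector + position vector
    xs = [add(tok_emb[t], pos_emb[i]) for i, t in enumerate(tokens)]
    i = len(xs) - 1                      # predict from the last position
    q = matvec(Wq, normalize(xs[i]))
    ks = [matvec(Wk, normalize(x)) for x in xs[: i + 1]]
    vs = [matvec(Wv, normalize(x)) for x in xs[: i + 1]]
    scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(n_embd) for k in ks]
    m = max(scores)
    ws = [math.exp(s - m) for s in scores]
    tot = sum(ws)
    attn = [sum(w / tot * v[d] for w, v in zip(ws, vs)) for d in range(n_embd)]
    x = add(xs[i], attn)                                    # attention + residual
    h = [max(0.0, v) for v in matvec(W_up, normalize(x))]   # MLP up + ReLU
    x = add(x, matvec(W_down, h))                           # MLP down + residual
    return matvec(W_out, x)                                 # 27 logits

print(len(forward([26, 4, 12])))   # BOS, "e", "m" -> 27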

The training loop repeats 1,000 times: pick a name, tokenize it, run the model forward over every position, compute the cross-entropy loss at each position, average the losses, backpropagate to get gradients for every parameter, and update the parameters to make the loss a bit lower.

The optimizer is Adam, which is smarter than naive gradient descent. It maintains a running average of each parameter’s recent gradients (momentum) and a running average of the squared gradients (adaptive learning rates). Parameters that have been getting consistent gradients take larger steps. Parameters that have been oscillating take smaller ones.

Watch the loss decrease over 1,000 training steps. The model starts at ~3.3 (random guessing among 27 tokens: −log(1/27) ≈ 3.3) and settles around 2.37. The generated names evolve from gibberish to plausible:

Step through the code for one complete training iteration. Watch it pick a name, run the forward pass at each position, compute the loss, run backward, and update the parameters:
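
A sketch of the Adam update for a single scalar parameter (the hyperparameter values are common defaults, not necessarily the script’s):

import math

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # m: running average of gradients (momentum)
    # v: running average of squared gradients (drives the adaptive step size)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)    # bias correction for the first steps
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# one step on one parameter; t counts iterations starting at 1
p, m, v = 0.5, 0.0, 0.0
p, m, v = adam_step(p, grad=0.2, m=m, v=v, t=1)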

Once training is done, inference is straightforward. Start with BOS, run the forward pass, get 27 probabilities, randomly sample one token, feed it back in, and repeat until the model outputs BOS again (meaning “I’m done”) or we hit the maximum length.

Temperature controls how we sample. Before softmax, we divide the logits by the temperature. A temperature of 1.0 samples directly from the learned distribution. Lower temperatures sharpen the distribution (the model picks its top choices more often). Higher temperatures flatten it (more diverse but potentially less coherent output).

Adjust the temperature and watch the probability distribution change:

Step through the inference loop to see a name being generated character by character. At each step, the model runs forward, produces probabilities, and samples the next token:
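
A sketch of that loop, with temperature applied to the logits before softmax (model_forward stands in for the trained model and is not defined here):

import math, random

def sample_next(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]   # divide logits by temperature
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # sample an index in proportion to its probability
    return random.choices(range(len(probs)), weights=probs)[0]

# BOS = 26; stop when the model emits BOS again or the name gets too long
# tokens = [26]
# while True:
#     next_token = sample_next(model_forward(tokens), temperature=1.0)
#     if next_token == 26 or len(tokens) > 16:
#         break
#     tokens.append(next_token)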

This 200-line script contains the complete algorithm. Between this and ChatGPT, little changes conceptually. The differences are things like: trillions of tokens instead of 32,000 names. Subword tokenization (100K vocabulary) instead of characters. Tensors on GPUs instead of scalar Value objects in Python. Hundreds of billions of parameters instead of 4,192. Hundreds of layers instead of one. Training across thousands of GPUs for months.

But the loop is the same. Tokenize, embed, attend, compute, predict the next token, measure surprise, walk the gradients backward, nudge the parameters. Repeat.

...

Read the original on growingswe.com »

9 235 shares, 34 trendiness

mandel-macaque/memento: Keep track of your Codex sessions per commit

git-memento is a Git extension that records the AI coding session used to produce a commit.

It runs a commit and then stores a cleaned markdown conversation as a git note on the new commit.

* Attach the AI session trace to the commit (git notes).

* Keep provider support extensible (Codex first, others later).

git memento init

git memento init codex

git memento init claude

git memento commit

git memento commit

You can pass -m multiple times, and each value is forwarded to git commit in order. When -m is omitted, git commit opens your default editor.

* Without a session id, it copies the note(s) from the previous HEAD onto the amended commit.

* With a session id, it copies previous note(s) and appends the new fetched session as an additional session entry.

* A single commit note can contain sessions from different AI providers.

git memento share-notes

git memento share-notes upstream

This pushes refs/notes/* and configures the local remote so notes can be fetched by teammates.

Push your branch and sync notes to the same remote in one command (default: origin):

git memento push

git memento push upstream

This runs git push and then performs the same notes sync as share-notes.

git memento notes-sync

git memento notes-sync upstream

git memento notes-sync upstream --strategy union

* Merges remote notes into local notes and pushes synced notes back to the remote.

git memento notes-rewrite-setup

Carry notes from a rewritten range (for squash/rewrite flows) onto a new target commit:

git memento notes-carry --onto

This reads notes from commits in .. and appends a provenance block to .

git memento audit --range main..HEAD

git memento audit --range origin/main..HEAD --strict --format json

git memento doctor

git memento doctor upstream --format json

git memento help

git memento --version

Provider defaults can come from env vars, and init persists the selected provider + values in local git config:

* If the repository is not configured yet, commit, amend, push, share-notes, notes-sync, notes-rewrite-setup, and notes-carry fail with a message to run git memento init first.

If a session id is not found, git-memento asks Codex for available sessions and prints them.

dotnet publish src/GitMemento.Cli/GitMemento.Cli.fsproj -c Release -r osx-arm64 -p:PublishAot=true

dotnet publish src/GitMemento.Cli/GitMemento.Cli.fsproj -c Release -r linux-x64 -p:PublishAot=true

dotnet publish src/GitMemento.Cli/GitMemento.Cli.fsproj -c Release -r win-x64 -p:PublishAot=true

Copy the produced executable to a directory in your PATH.

Ensure the binary name is git-memento (or git-memento.exe on Windows).

git memento commit

curl -fsSL https://raw.githubusercontent.com/mandel-macaque/memento/main/install.sh | sh

* Release assets are built with NativeAOT (PublishAot=true) and packaged as a single executable per platform.

* If the workflow runs from a tag push (for example v1.2.3), that tag is used as the GitHub release tag/name.

* If the workflow runs from main without a tag, the release tag becomes (for example 1.0.0-a1b2c3d4).

* install.sh always downloads from releases/latest, so the installer follows the latest published GitHub release.

CI runs install smoke tests on Linux, macOS, and Windows that verify:

* install.sh downloads the latest release asset for the current OS/architecture.

* The binary is installed for the current user into the configured install directory.

* git memento --version and git memento help both execute after installation.

dotnet test GitMemento.slnx

npm run test:js

This repository includes a reusable marketplace action with two modes:

* mode: gate: runs git memento audit as a CI gate and fails if note coverage checks fail. git-memento must already be installed in the job.

name: memento-note-comments

on:
  push:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: write
  pull-requests: read

jobs:
  comment-memento-notes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: mandel-macaque/memento@v1
        with:
          mode: comment
          github-token: ${{ secrets.GITHUB_TOKEN }}

name: memento-note-gate

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read

jobs:
  enforce-memento-notes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: mandel-macaque/memento/install@v1
        with:
          memento-repo: mandel-macaque/memento
      - uses: mandel-macaque/memento@v1
        with:
          mode: gate
          strict: "true"

- uses: mandel-macaque/memento/install@v1
  with:
    memento-repo: mandel-macaque/memento

npm ci

npm run build:ac­tion

...

Read the original on github.com »

10 234 shares, 41 trendiness

City shuts down police license plate cameras after judge makes footage public

EVERETT, Wash. — The City of Everett has shut down its entire network of Flock license plate reader cameras after a Snohomish County judge ruled the footage those cameras collect qualifies as a public record.

The decision came after a Washington man filed public records requests seeking access to data captured by the cameras.

Jose Rodriguez of Walla Walla, represented by attorney Tim Hall, requested the footage from multiple jurisdictions in Washington state to see what information the automated license plate reader system was collecting.

“He started noticing that the cameras were everywhere — he wanted to see what kind of data they collect,” Hall said.

The requests revealed that Flock cameras continuously capture thousands of images, regardless of whether a vehicle is linked to a crime.

When several cities, including Everett, moved to block the request, the case went to court.

On Tuesday, a Snohomish County judge ruled that footage captured by Flock cameras qualifies as a public record under Washington law, meaning members of the public can request access to the data.

Everett Mayor Cassie Franklin said the city disagrees with the ruling and is concerned about who could obtain the footage.

“We were very disappointed,” Franklin said. “That means perpetrators of crime, people who are maybe engaged in domestic abuse or stalkers, they can request footage and that could cause a lot of harm.”

Following the ruling, Everett temporarily turned off all 68 of its Flock cameras.

At the same time, lawmakers in Olympia are debating a bill that would exempt Flock footage from public records law.

Supporters of the proposed legislation argue that public access to the data could create safety risks, including the possibility that federal immigration agents could attempt to obtain footage through public disclosure requests.

Hall pushed back on those concerns, saying public records requests are typically a lengthy process and unlikely to be useful for real-time tracking.

“As somebody who has made hundreds of public records requests myself, and represented many, many people in public records lawsuits, it’s generally a lengthy process,” Hall said. “Same would be true for ICE. They’re going to get data from where you were three months, two months ago.”

Franklin said if lawmakers pass legislation allowing cities to shield Flock data from public disclosure, Everett would consider turning the cameras back on. She said the city is not dismantling or removing the cameras in the meantime.

...

Read the original on www.king5.com »
