10 interesting stories served every morning and every evening.




1 824 shares, 37 trendiness


Digital sovereignty: the French State accelerates the reduction of its extra-European dependencies

At the initiative of the Prime Minister, the Minister for Public Action and Accounts, and the Minister Delegate for Artificial Intelligence and Digital Affairs, the interministerial digital directorate (DINUM) organized on Wednesday, April 8, 2026, together with the directorate general for enterprises (DGE), the national agency for information systems security (ANSSI), and the State procurement directorate (DAE), an interministerial seminar aimed at strengthening the collective drive to reduce extra-European digital dependencies. Bringing together ministers, administrations, public operators, and private-sector actors, the event marks an acceleration of the French and European strategy for digital sovereignty. Following the Prime Minister's recent directives, notably the circulars on digital public procurement and on the general rollout of the “Visio” videoconferencing tool, the seminar set a clear objective: reduce the State's extra-European digital dependencies.

Regarding the evolution of the workstation, DINUM announced its move off Windows to workstations running the Linux operating system.

Regarding migration to sovereign solutions, the national health insurance fund (Caisse nationale d'Assurance maladie) announced a few days ago the migration of its 80,000 agents to tools from the interministerial digital foundation (Tchap, Visio, and FranceTransfert for document transfer). Last month, the Government announced the migration of the health data platform to a trusted solution by the end of 2026.

The seminar also launched a new method for ending dependencies by forming novel coalitions of ministries, major public operators, and private-sector actors. This approach aims to rally public and private energies around specific projects, relying in particular on digital commons and interoperability standards (the Open-Interop and OpenBuro initiatives).

DINUM will coordinate an interministerial plan to reduce extra-European dependencies. Each ministry (operators included) must formalize its own plan by autumn, covering the following areas: workstations, collaborative tools, antivirus, artificial intelligence, databases, virtualization, and network equipment. These action plans will give the digital industry visibility into the State's needs; the sector has major strengths that public procurement should put to use.

The dependency mapping and diagnostic work carried out by the State procurement directorate (DAE), together with the work on defining a European digital service led by the directorate general for enterprises (DGE), will refine the quantified reduction target with a clear timetable. The first “digital industry meetings,” to be organized by DINUM in June 2026, will be the occasion to concretize public-private ministerial coalitions, notably through the formalization of a “public-private alliance for European sovereignty.”

The State can no longer be content with observing its dependence; it must escape it. We must wean ourselves off American tools and regain control of our digital destiny. We can no longer accept that our data, our infrastructure, and our strategic decisions depend on solutions whose rules, prices, evolution, and risks we do not control. The transition is under way: our ministries, our operators, and our industrial partners are today committing to an unprecedented effort to map our dependencies and strengthen our digital sovereignty. Digital sovereignty is not optional.

Minister for Public Action and Accounts

Digital sovereignty is not optional; it is a strategic necessity. Europe must give itself the means to match its ambitions, and France is leading by example by accelerating the switch to sovereign, interoperable, and sustainable solutions. By reducing our dependence on extra-European solutions, the State is sending a clear message: that of a public authority taking back control of its technology choices in the service of its digital sovereignty.

Minister Delegate for Artificial Intelligence and Digital Affairs

About the interministerial digital directorate (DINUM): DINUM's mission is to develop the State's digital strategy and steer its implementation. It supports the State's digital projects, serving government priorities and working to improve the effectiveness of public action.


...

Read the original on numerique.gouv.fr »

2 641 shares, 65 trendiness

1D-Chess

1d-chess is a new variant where you can play the beautiful game without all those unnecessary and complicated extra dimensions. Play as white against the AI. You might initially find it more difficult than expected, but assuming optimal play, is there a forced win for white?

Mouse over to reveal the answer: Try this line: N4 N5, N6 K7, R4 K6, R2 K7, R5++

There are three pieces in 1d-chess:

King: can move one square in any direction.

Knight: can move two squares forward or backward, jumping over any pieces in the way.

Rook: can move in a straight line in any direction.

Win by checkmating the enemy king. This occurs when the enemy king is in check (under attack by one of your pieces) and there are no legal moves for the opponent to get their king out of check.

The game is a draw if:

* A player is not in check and has no legal moves to play (stalemate)

* The same board position is repeated 3 times in a game

* Only kings are left on the board, making it impossible to checkmate the opponent
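The piece rules above are compact enough to sketch directly. Here is a minimal Python move generator (my own illustrative code; the board encoding and function name are assumptions, not taken from the game's source), ignoring check and checkmate logic:

```python
def moves(piece, pos, board):
    """Return target squares for `piece` at index `pos` on a 1-D `board`.

    `board` is a list where None means an empty square and any occupied
    square holds a (color, kind) tuple, kind in {"K", "N", "R"}.
    Captures of same-color pieces are excluded; check rules are ignored.
    """
    color, kind = piece
    if kind == "K":                       # king: one square either way
        candidates = [pos - 1, pos + 1]
    elif kind == "N":                     # knight: jumps exactly two squares
        candidates = [pos - 2, pos + 2]
    else:                                 # rook: slides until blocked
        candidates = []
        for step in (-1, 1):
            p = pos + step
            while 0 <= p < len(board):
                candidates.append(p)
                if board[p] is not None:  # stop after first occupied square
                    break
                p += step
    return [p for p in candidates
            if 0 <= p < len(board)
            and (board[p] is None or board[p][0] != color)]
```

From the starting position (KNR..RNK on eight squares), the white rook can slide to the three squares ahead of it but not capture its own knight behind it.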

This chess variant was first described by Martin Gardner in the Mathematical Games column of the July 1980 issue of Scientific American.

See the column on JSTOR

...

Read the original on rowan441.github.io »

3 565 shares, 40 trendiness

FBI used iPhone notification data to retrieve deleted Signal messages

A new report from 404 Media reveals that the FBI was able to recover deleted Signal messages from an iPhone by extracting data stored in the device's notification database. Here are the details.

According to 404 Media, testimony in a recent trial involving “a group of people setting off fireworks and vandalizing property at the ICE Prairieland Detention Facility in Alvarado, Texas,” showed that the FBI was able to recover the content of incoming Signal messages from a defendant's iPhone, even though Signal had been removed from the device:

One of the defendants was Lynette Sharp, who previously pleaded guilty to providing material support to terrorists. During one day of the related trial, FBI Special Agent Clark Wiethorn testified about some of the collected evidence. A summary of Exhibit 158 published on a group of supporters' website says, “Messages were recovered from Sharp's phone through Apple's internal notification storage—Signal had been removed, but incoming notifications were preserved in internal memory. Only incoming messages were captured (no outgoing).”

As 404 Media notes, Signal's settings include an option that prevents the actual message content from being previewed in notifications. However, it appears the defendant did not have that setting enabled, which, in turn, seemingly allowed the system to store the content in the database.

404 Media reached out to Signal and Apple, but neither company provided any statements on how notifications are handled or stored.

With little to no technical detail about the exact condition of the defendant's iPhone, it is obviously impossible to pinpoint the precise method the FBI used to recover the information.

For instance, there are multiple system states an iPhone can be in, each with its own security and data access constraints, such as BFU (Before First Unlock), AFU (After First Unlock), and so on.

Security and data access also change even more dramatically when the device is unlocked, since the system assumes the user is present and permits access to a wider range of protected data.

That said, iOS does store and cache a lot of data locally, trusting that it can rely on these different states to keep that information safe but readily available in case the device's rightful owner needs it.

Another important factor to keep in mind: the token used to send push notifications isn't immediately invalidated when an app is deleted. And since the server has no way of knowing whether the app is still installed after the last notification it sent, it may continue pushing notifications, leaving it up to the iPhone to decide whether to display them.

Interestingly, Apple just changed how iOS validates push notification tokens in iOS 26.4. While it is impossible to tell whether this is a result of this case, the timing is still notable.

Back to the case: given Exhibit 158's description that the messages were recovered “from Sharp's phone through Apple's internal notification storage,” it is possible the FBI extracted the information from a device backup.

In that case, there are many commercially available tools for law enforcement that exploit iOS vulnerabilities to extract data, which could have helped the FBI access this information.

To read 404 Media's original report of this case, follow this link.

...

Read the original on 9to5mac.com »

4 444 shares, 33 trendiness

France to ditch Windows for Linux to reduce reliance on US tech

France is trying to move on from Microsoft Windows. The country said it plans to move some of its government computers currently running Windows to the open source operating system Linux to further reduce its reliance on U.S. technology.

Linux is an open source operating system that is free to download and use, with various customized distributions tailored and designed for specific use cases or operations.

In a statement, French minister David Amiel said (translated) that the effort was to “regain control of our digital destiny” by relying less on U.S. tech companies. Amiel said that the French government can no longer accept that it doesn't have control over its data and digital infrastructure.

The French government did not provide a specific timeline for the switchover, or say which distributions it was considering. The switchover will begin with computers at the French government's digital agency, DINUM. When reached by TechCrunch, a spokesperson for Microsoft did not comment on the news.

This is the latest effort by France to reduce its dependence on U.S. tech giants and use technology and cloud services originating within its borders, known as digital sovereignty, following growing instability and unpredictability on the part of the Trump administration.

Lawmakers and government leaders across Europe are growing more aware of the looming threat facing them at home, and of their over-reliance on U.S. technology. In January, the European Parliament voted to adopt a report directing the European Commission to identify areas where the EU can reduce its reliance on foreign providers.

Since taking office in January 2025, Trump has upped his attacks on world leaders — straight-out capturing one and aiding in the killing of another. He has also weaponized sanctions against his critics, who include judges on the International Criminal Court, effectively cutting them off from transacting with U.S. companies. Those who have been sanctioned have reported having their bank accounts closed and access to U.S. tech services terminated, as well as being blocked from any other U.S. service.

France's decision to ditch Windows comes months after the government announced it would stop using Microsoft Teams for video conferencing in favor of the French-made Visio, a tool based on the open source end-to-end encrypted video meeting tool Jitsi.

The French government said it also plans to migrate its health data platform to a new trusted platform by the end of the year.

...

Read the original on techcrunch.com »

5 425 shares, 34 trendiness

Why you can’t trust Privacy & Security

In this Friday's magic demonstration, I'm going to show how what you see in Privacy & Security settings can be misleading: it can tell you that an app doesn't have access to a protected folder when it really does.

Although it appears you can achieve this using several ordinary apps, to make things simpler and clearer I've written a little app for this purpose, Insent, available from here: insent11

I'm working in macOS Tahoe 26.4, but I suspect you should see much the same in any version from macOS 13.5 onwards, as supported by Insent.

For this magic demo, I'm only going to use two of Insent's six buttons:

* Open by consent, which results in Insent choosing a random text file from the top level of your Documents folder, and displaying its name and the start of its contents below. As it does this without involving the user in the process, the macOS privacy system TCC requires it to obtain the user's consent to list and access the contents of that protected folder.

* Open from folder, which opens an Open and Save Panel where you select a folder. Insent then picks a random text file from the top level of that folder, and displays its name and the start of its contents below. Because you expressed your intent to access that protected folder, TCC considers that good enough to give access without requiring any consent.

Once you have downloaded Insent, extracted it from its archive, and dragged the app from that folder into one of your Applications folders, follow this sequence of actions:

Open Insent, click on Open by consent, and consent to the prompt to allow it to access your Documents folder. Shortly afterwards, Insent will display the opening of one of the text files in Documents. Quit Insent.

Open Privacy & Security settings, select Files & Folders, and confirm that Insent has been given access to Documents.

Open Insent, click on Open by consent, and confirm it now gains access to a text file without asking for consent. Quit Insent.

Open Privacy & Security settings, select Files & Folders, and disable Documents access in Insent's entry there using the toggle.

Open Insent, click on Open by consent, and confirm that it can no longer open a text file, but displays [Couldn't get contents of Documents folder].

Click on Open from folder and select your Documents folder there. Confirm that works as expected and displays the name and contents of one of the text files in Documents.

Click on Open by consent, and confirm that now works again.

Confirm that Documents access for Insent is still disabled in Files & Folders.

Whatever you do now, the app retains full access to Documents, no matter what is shown or set in Files & Folders.

Indeed, the only way you can protect your Documents folder from access by Insent is to run the following command in Terminal:

tccutil reset All co.eclecticlight.Insent

then restart your Mac. That should set Insent's privacy settings back to their default.

You can also demonstrate that this behaviour is specific to one protected folder at a time. If you select a different protected folder like Desktop or Downloads using the Open from folder button, then Insent still won't be able to list the contents of the Documents folder, as its TCC settings will function as expected.

Insent is an ordinary notarised app, and doesn't run in a sandbox or pull any clever tricks. When System Integrity Protection (SIP) is enabled, though, some of its operations are sandboxed, including attempts to list or access the contents of locations that are protected by TCC.

When you click on its Open by consent button, sandboxd intercepts the File Manager call to list the contents of Documents, as a protected folder. It then requests approval for that from TCC, as seen in the following log entries:

1.204592 Insent sendAction:
1.205160 Insent: trying to list files in ~/Documents
1.205828 sandboxd request approval
1.205919 sandboxd tcc_send_request_authorization() IPC

TCC doesn't have authorisation for that access by Insent, either by Full Disk Access or specific access to Documents, so it prompts the user for their consent. If that's given, the following log entries show that being passed back to the sandbox, and the change being notified to com.apple.chrono, followed by Insent actioning the original request:

3.798770 com.apple.sandbox kTCCServiceSystemPolicyDocumentsFolder granted by TCC for Insent
3.802225 com.apple.chrono appAuth:[co.eclecticlight.Insent] tcc authorization(s) changed
3.809558 Insent: trying to look in ~/Documents for text files
3.809691 Insent: trying to read from: /Users/hoakley/Documents/asHelp.text
3.842101 Insent: read from: /Users/hoakley/Documents/asHelp.text

If you then disable Insent's access to Documents in Privacy & Security settings, TCC denies access to Documents, and Insent can't get the list of its contents:

1.093533 com.apple.TCC AUTHREQ_RESULT: msgID=440.109, authValue=0, authReason=4, authVersion=1, desired_auth=0, error=(null),
1.093669 com.apple.sandbox kTCCServiceSystemPolicyDocumentsFolder denied by TCC for Insent
1.094007 Insent: couldn't get contents of ~/Documents

If you then access Documents by intent through the Open and Save Panel, sandboxd no longer intercepts the request, and TCC therefore doesn't grant or deny access:

0.897244 Insent sendAction:
0.897318 Insent: trying to list files in ~/Documents
0.900828 Insent: trying to look in ~/Documents for text files
0.901112 Insent: trying to read from: /Users/hoakley/Documents/T2M2_2026-01-06_13_03_00.text
0.904101 Insent: read from: /Users/hoakley/Documents/T2M2_2026-01-06_13_03_00.text

Thus, accessing a protected folder by user intent, such as through the Open and Save Panel, changes the sandboxing applied to the caller by removing its constraint for that specific protected folder. As that sandboxing isn't controlled by or reflected in Privacy & Security settings, TCC's list in Files & Folders can continue to show access restrictions that are never enforced, because the sandbox is no longer applied.

Access restrictions shown in Privacy & Security settings, specifically those for protected locations in Files & Folders, aren't an accurate or trustworthy reflection of those actually applied. It's possible for an app to have unrestricted access to one or more protected folders while its listing in Files & Folders shows it being blocked from access, or for it to have no entry at all in that list.

Most apps that want access to protected folders like Documents appear to seek it during their initialisation, before any user interaction that could result in intent overriding the need for consent. However, many users report apps that appear to have access to Documents but aren't listed in Files & Folders, suggesting that this sequence of events does sometimes occur.

To be exploited effectively this would need careful sequencing, and the user would have to select the protected folder in an Open and Save Panel, drawing attention to the manoeuvre.

Most concerning is the apparent permanence of the access granted, which requires an arcane command in Terminal and a restart to reset the app's privacy settings. It's hard to believe this was intended to trap the user into surrendering control over access to protected locations. But it can.

I'm very grateful to Richard for drawing my attention to this.

...

Read the original on eclecticlight.co »

6 424 shares, 19 trendiness

I Still Prefer MCP Over Skills

The Right Tool for the Job

TL;DR: The AI space is pushing hard for “Skills” as the new standard for giving LLMs capabilities, but I'm not a fan. Skills are great for pure knowledge and for teaching an LLM how to use an existing tool. But for giving an LLM actual access to services, the Model Context Protocol (MCP) is the far superior, more pragmatic architectural choice. We should be building connectors, not just more CLIs.

Maybe it's an artifact of spending too much time on X, but lately the narrative that “MCP is dead” and “Skills are the new standard” has been hammered into my brain. Everywhere I look, someone is celebrating the death of the Model Context Protocol in favor of dropping a SKILL.md into their repository.

I am a very heavy AI user. I use Claude Code, Codex, and Gemini for coding. I rely on ChatGPT, Claude, and Perplexity almost every day to manage everything from Notion notes to my DEVONthink databases, and even my emails.

And honestly? I just don't like Skills.

I hope MCP sticks around. I really don't want a future where every single service integration requires a dedicated CLI and a markdown manual.

Here's why I think the push for Skills as a universal solution is a step backward, and why MCP still gets the architecture right.

Claude pulling recent user feedback from Kikuyo through the Kikuyo MCP, no CLI needed.

The core philosophy of MCP is simple: it's an API abstraction. The LLM doesn't need to understand the how; it just needs to know the what. If the LLM wants to interact with DEVONthink, it calls devonthink.do_x(), and the MCP server handles the rest.

This separation of concerns brings some unbeatable advantages:

* Zero-Install Remote Usage: For remote MCP servers, you don't need to install anything locally. You just point your client to the MCP server URL, and it works.

* Seamless Updates: When a remote MCP server is updated with new tools or resources, every client instantly gets the latest version. No need to push updates, upgrade packages, or reinstall binaries.

* Saner Auth: Authentication is handled gracefully (often with OAuth). Once the client finishes the handshake, it can perform actions against the MCP. You aren't forcing the user to manage raw tokens and secrets in plain text.

* True Portability: My remote MCP servers work from anywhere: my Mac, my phone, the web. It doesn't matter. I can manage my Notion via my LLM of choice from wherever a client is available.

* Sandboxing: Remote MCPs are naturally sandboxed. They expose a controlled interface rather than giving the LLM raw execution power in your local environment.

* Smart Discovery: Modern apps (ChatGPT, Claude, etc.) have tool search built in. They only look for and load tools when they are actually needed, saving precious context window.

* Frictionless Auto-Updates: Even for local setups, an MCP installed directly via npx -y or uv can auto-update on every launch.
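The zero-install point is easiest to see in a client configuration: in most clients, adding a remote server is a single entry pointing at its URL. The exact schema differs between clients, so treat the key names below as illustrative rather than any specific client's documented format (the URL is the Notion endpoint mentioned later in the piece):

```json
{
  "mcpServers": {
    "notion": {
      "url": "https://mcp.notion.so/mcp"
    }
  }
}
```

No binary to install, no package manager to run; updating the server updates every client that points at it.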

Not all Skills are the same. A pure knowledge skill (one that teaches the LLM how to format a commit message, write tests a certain way, or use your internal jargon) actually works well. The problems start when a Skill requires a CLI to actually do something.

My biggest gripe with Skills is the assumption that every environment can, or should, run arbitrary CLIs.

Most skills require you to install a dedicated CLI. But what if you aren't in a local terminal? ChatGPT can't run CLIs. Neither can Perplexity or the standard web version of Claude. Unless you are using a full-blown compute environment (like Perplexity Computer, Claude Cowork, Claude Code, or Codex), any skill that relies on a CLI is dead on arrival.

This leads to a cascade of annoying UX and architectural problems:

* The Deployment Mess: CLIs need to be published, managed, and installed through binaries, NPM, uv, etc.

* The Secret Management Nightmare: Where do you put the API tokens required to authenticate? If you're lucky, the environment has a .env file you can dump plain-text secrets into. Some ephemeral environments wipe themselves, meaning your CLI works today but forgets your secrets tomorrow.

* Fragmented Ecosystems: Skill management is currently the wild west. When a skill updates, you have to reinstall it. Some tools support installing skills via npx skills, but that only works in Codex and Claude Code, not Claude Cowork or standard Claude. Pure knowledge skills work in Claude, but most others don't. Some tools support a “skills marketplace,” others don't. Some can install from GitHub, others can't. You try to install an OpenClaw skill into Claude and it explodes with YAML parsing errors because the metadata fields don't match.

* Context Bloat: Using a skill often requires loading the entire SKILL.md into the LLM's context window, rather than just exposing the single tool signature it needs. It's like forcing someone to read the entire car owner's manual when all they want to do is call car.turn_on().

If a Skill's instructions start with “install this CLI first,” you've just added an unnecessary abstraction layer and extra steps. Why not just use a remote MCP instead?

Codex pulling up a pure knowledge skill to learn how Phoenix colocated hooks work. No CLI, no MCP, just context.

The Right Tool for the Job

I don't want Skills to become the de facto way to connect an LLM to a service. We can explain API shapes in a Skill so the LLM can curl it, but how is that better than providing a clean, strongly-typed interface via MCP?

Here's how I think the ecosystem should look:

When to use MCP:

MCP should be the standard for giving an LLM an interface to connect to something: a website, a service, an application. The service itself should dictate the interface it exposes.

* Take Google Calendar. A gcal CLI is fine. The problem is a Skill that tells the LLM to install it, manage auth tokens, and shell out to it. An OAuth-backed remote MCP owned by Google handles all of that at the protocol level, and works from any client without any setup.

* To control Chrome, the browser should expose an MCP endpoint for stateful control, rather than relying on a janky chrome-cli.

* To debug with Hopper, the current built-in MCP that lets the LLM run step() is infinitely better than a separate hopper-cli.

* Xcode should ship with a built-in MCP that handles auth when an LLM connects to a project.

* Notion should have mcp.notion.so/mcp available natively, instead of forcing me to download notion-cli and manage auth state manually. (They actually do have a remote MCP now, which is exactly the right call.)

When to use Skills:

Skills should be “pure.” They should focus on knowledge and context.

* Teaching existing tools: I love having a .claude/skills folder that teaches the LLM how to use tools I already have installed. A skill explaining how to use curl, git, gh, or gcloud makes complete sense. We don't need a curl MCP. We just need to teach the LLM how to construct good curl commands. However, a dedicated remote GitHub MCP makes much more sense for managing issues than relying on a gh CLI skill.

* Standardizing workflows: Skills are perfect for teaching Claude your business jargon, internal communication style, or organizational structure.

* Teaching handling of certain things: This is another great example, and what Anthropic does as well with the PDF Skill - it explains how to deal with PDF files and how to manipulate them with Python.

* Secret Management patterns: Having a skill that tells Claude “Use fnox for this repo, here is how to use it” just makes sense. Every time we deal with secrets, Claude pulls up the skill. That's way better than building a custom MCP just to call get_secret().
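To make the “pure knowledge” idea concrete, a skill of this kind can be a single short markdown file. The frontmatter fields and body below are purely illustrative (different tools expect different metadata), loosely based on the NotePlan gotchas described further down:

```markdown
---
name: noteplan-mcp-gotchas
description: Known pitfalls when calling the NotePlan MCP tools
---

- Pass dates as YYYY-MM-DD, never YYYYMMDD.
- Search tools truncate results by default; raise the result-limit parameter explicitly.
- Don't trust tool names alone; read each tool's description before calling it.
```

Nothing here executes anything; it's context the LLM loads before it starts calling the connector's tools.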

Skills living directly in the repo. The LLM picks them up automatically when working in that project.

Shower thought: Maybe the terminology is the problem. Skills should just be called LLM_MANUAL.md, and MCPs should be called Connectors.

Both have their place.

For the services I own, I already do this. A few examples:

* mcp-server-devonthink: A local MCP server that gives any LLM direct control over DEVONthink. No CLI wrapper, just a clean tool interface.

* microfn: Exposes a remote MCP at mcp.microfn.dev so any MCP-capable client can use it out of the box.

* MCP Nest: Tunnels local MCP servers through the cloud so they're reachable remotely at mcp.mcpnest.dev/mcp. Built it because I kept wanting remote access to local MCPs without exposing my machine directly.

For mi­crofn and Kikuyo I also pub­lished Skills, but they cover the CLI, not the MCP. That said, writ­ing this made me re­al­ize: a skill that ex­plains how to use an MCP server ac­tu­ally makes a lot of sense. Not to re­place the MCP, but to give the LLM con­text be­fore it starts call­ing tools. What the ser­vice does, how the tools re­late to each other, when to use which one. A knowl­edge layer on top of a con­nec­tor layer. That’s the com­bi­na­tion I’d want.

And this is ac­tu­ally a pat­tern I’ve been us­ing more and more in prac­tice. When I’m work­ing with a MCP server, I in­evitably dis­cover gotchas and non-ob­vi­ous pat­terns: a date for­mat that needs to be YYYY-MM-DD in­stead of YYYYMMDD, a search func­tion that trun­cates re­sults un­less you bump a pa­ra­me­ter, a tool name that does­n’t do what you’d ex­pect. Rather than re­dis­cov­er­ing these every ses­sion, I just ask Claude to wrap every­thing we learned into a Skill. The LLM al­ready has the con­text from our con­ver­sa­tion, so it writes the Skill with all the gotchas, com­mon pat­terns, and cor­rected as­sump­tions baked in.

After dis­cov­er­ing back­link gotchas and date for­mat quirks in the NotePlan MCP, I asked Claude to pack­age every­thing into a skill. Now every fu­ture ses­sion starts with that knowl­edge.

The re­sult is a Skill that acts as a cheat sheet for the MCP, not a re­place­ment for it. The MCP still han­dles the ac­tual con­nec­tion and tool ex­e­cu­tion. The Skill just makes sure the LLM does­n’t waste to­kens stum­bling through the same pit­falls I al­ready solved. It’s the com­bi­na­tion of both that makes the ex­pe­ri­ence ac­tu­ally smooth.
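For concreteness, here's roughly what such a cheat-sheet Skill might look like. This is illustrative only: the frontmatter fields follow the published SKILL.md format, but the specific gotchas are hypothetical examples of the kind of notes that accumulate over sessions:

```markdown
---
name: noteplan-mcp-guide
description: Gotchas and usage patterns for the NotePlan MCP server.
---

# Working with the NotePlan MCP

- Dates passed to note-lookup tools must be `YYYY-MM-DD`, not `YYYYMMDD`.
- The search tool truncates results by default; raise its limit parameter
  when you need the full list.
- Create backlinks with wiki-style `[[Note Title]]` syntax; plain URLs are
  not indexed as backlinks.
```

The whole file is just prose the LLM reads before it starts calling tools, which is what makes this layering so cheap to maintain.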

At the same time, I'll keep maintaining my dotfiles repo full of Skills for procedures I use often, and I'll keep dropping .claude/skills into my repositories to guide the AI's behavior.

I just hope the in­dus­try does­n’t aban­don the Model Context Protocol. The dream of seam­less AI in­te­gra­tion re­lies on stan­dard­ized in­ter­faces, not a frac­tured land­scape of hacky CLIs. I’m still hold­ing out hope for of­fi­cial Skyscanner, Booking.com, Trip.com, and Agoda.com MCPs.

Speaking of remote MCPs: I built MCP Nest specifically for this problem. A lot of useful MCP servers are local-only by nature: think Fastmail, Gmail, or anything that runs on your machine. MCP Nest tunnels them through the cloud so they become remotely accessible, usable from Claude, ChatGPT, Perplexity, or any MCP-capable client, across all your devices. If you want your local MCPs to work everywhere without exposing your machine directly, that's what it's for.

...

Read the original on david.coffee »

7 418 shares, 26 trendiness

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

OpenAI is throw­ing its sup­port be­hind an Illinois state bill that would shield AI labs from li­a­bil­ity in cases where AI mod­els are used to cause se­ri­ous so­ci­etal harms, such as death or se­ri­ous in­jury of 100 or more peo­ple or at least $1 bil­lion in prop­erty dam­age.

The ef­fort seems to mark a shift in OpenAI’s leg­isla­tive strat­egy. Until now, OpenAI has largely played de­fense, op­pos­ing bills that could have made AI labs li­able for their tech­nol­o­gy’s harms. Several AI pol­icy ex­perts tell WIRED that SB 3444—which could set a new stan­dard for the in­dus­try—is a more ex­treme mea­sure than bills OpenAI has sup­ported in the past.

The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America's largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model could not be held liable, so long as the harm was not caused intentionally or recklessly and the lab had published its reports.

Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly pressing.

In her testimony supporting SB 3444, a member of OpenAI's Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message that's consistent with the Trump administration's crackdown on state AI safety laws, claiming it's important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it's paramount for AI legislation to not hamper America's position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that those can be effective if they “reinforce a path toward harmonization with federal systems.”

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.

Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing reduced liability,” Wisor says.

...

Read the original on www.wired.com »

8 390 shares, 37 trendiness

[ANNOUNCE] WireGuardNT v0.11 and WireGuard for Windows v0.6 Released

Jason at zx2c4.com

Previous message (by thread): Adding message type 5/6 for PQC (was Re: “Export noise primitives for additional chain key ratcheting”)

Next mes­sage (by thread): [ANNOUNCE] WireGuardNT v0.11 and WireGuard for Windows v0.6 Released

Hey folks,

I gen­er­ally don’t send an­nounce­ment emails for the Windows soft­ware, be­cause the built-in up­dater takes care of no­ti­fy­ing the rel­e­vant users. But be­cause this has­n’t been up­dated in so long, and be­cause of re­cent news ar­ti­cles, I thought it’d be a good idea to no­tify the list.

After a lot of hard work, we've released an updated Windows client: both the low-level kernel driver and API harness, called WireGuardNT, and the higher-level management software, command-line utilities, and UI, called WireGuard for Windows.

There are some new features — such as support for removing individual allowed IPs without dropping packets (as was added already to Linux and FreeBSD) and setting very low MTUs on IPv4 connections — but the main improvement is lots of accumulated bug fixes, performance improvements, and above all, immense code streamlining due to ratcheting forward our minimum supported Windows version [1]. These projects are now built on a much more solid foundation, without having to maintain decades of compatibility hacks and alternative codepaths, and bizarre logic, and dynamic dispatching, and all manner of cruft. There have also been large toolchain updates — the EWDK version used for the driver, the Clang/LLVM/MingW version used for the userspace tooling, the Go version used for the main UI, the EV certificate and signing infrastructure — which all together should amount to better performance and more modern code.

But, as it’s our first Windows re­lease in a long while, please test and let me know how it goes. Hopefully there are no re­gres­sions, and we’ve tested this quite a bit — in­clud­ing on Windows 10 1507 Build 10240, the most an­cient Windows that we sup­port which Microsoft does not any­more — but you never know. So feel free to write me as needed.

As always, the built-in updater should be prompting users to click the update button, which will check signatures and securely update the software. Alternatively, if you're installing for the first time or want to update immediately, our mini 80k fetcher will download and verify the latest version:
- https://download.wireguard.com/windows-client/wireguard-installer.exe
- https://www.wireguard.com/install/

And to learn more about each of these two Windows projects:
- https://git.zx2c4.com/wireguard-windows/about/
- https://git.zx2c4.com/wireguard-nt/about/

Finally, I should comment on the aforementioned news articles. When we tried to submit the new NT kernel driver to Microsoft for signing, they had suspended our account, as I wrote about first in a random comment [2] on Hacker News in a thread about this happening to another project, and then later that day on Twitter [3]. The comments that followed were a bit off the rails. There's no conspiracy here from Microsoft. But the Internet discussion wound up catching the attention of Microsoft, and a day later, the account was unblocked, and all was well. I think this is just a case of bureaucratic processes getting a bit out of hand, which Microsoft was able to easily remedy. I don't think there's been any malice or conspiracy or anything weird. I think most news articles currently circulating haven't been updated to show that this was actually fixed pretty quickly. So, in case you were wondering, “but how can there be a new WireGuard for Windows update when the account is blocked?!”, now you know that the answer is, “because the account was unblocked.”

Anyway, en­joy the new soft­ware, and let me know how it works for you.

Thanks, Jason

[1] https://lists.zx2c4.com/pipermail/wireguard/2026-March/009541.html
[2] https://news.ycombinator.com/item?id=47687884
[3] https://x.com/EdgeSecurity/status/2041872931576299888


More information about the WireGuard mailing list

...

Read the original on lists.zx2c4.com »

9 383 shares, 15 trendiness

- YouTube

...

Read the original on www.youtube.com »

10 306 shares, 13 trendiness

We’ve raised $17M to build what comes after Git

Today we’re an­nounc­ing that GitButler has raised a $17M Series A led by a16z with con­tin­u­ing sup­port from our lead seed in­vestors, Fly Ventures and A Capital.

I know what you're thinking. You're hoping that we'll use phrases such as “we're excited,” “this is just the beginning,” and “AI is changing everything.” While all those things are true, I'll try to avoid them and instead make this announcement a little more personal.

For me this is a long story.

I was one of the co­founders of GitHub and over the last 15 years I’ve watched Git go from a rather niche de­vel­oper tool writ­ten for a very es­o­teric col­lab­o­ra­tion style to the foun­da­tional in­fra­struc­ture of all soft­ware de­vel­op­ment on the planet. I may have even had a small hand in some part of that.

What I learned from watch­ing that story un­fold is that de­vel­oper plat­forms win when they re­move fric­tion from col­lab­o­ra­tion, and when they let the peo­ple pro­duc­ing code have less over­head to deal with.

GitButler was started three years ago because we felt our development practices had been shoehorned into what Git could do for so long that it would be amazing to see what we could do with tooling that was actually designed for those practices.

That’s fun­da­men­tally what is be­hind this round.

We think soft­ware de­vel­op­ment is quickly mov­ing into a new phase, and the prob­lem that Git has solved for the last 20 years is over­due for a re­design. Today, with Git, we’re all teach­ing swarms of agents to use a tool built for send­ing patches over mail­ing lists. That’s far from what is needed to­day.

At GitHub, one thing be­came painfully clear over and over: de­vel­op­ers don’t strug­gle be­cause they can’t write code. They strug­gle be­cause con­text falls apart be­tween tools, be­tween peo­ple, and now be­tween peo­ple and agents. The hard prob­lem is not gen­er­at­ing change, it’s or­ga­niz­ing, re­view­ing, and in­te­grat­ing change with­out cre­at­ing chaos.

The old model as­sumed one per­son, one branch, one ter­mi­nal, one lin­ear flow. Not only has the prob­lem not been solved well for that old model, it’s now only been com­pounded with our new AI tools.

Last week we re­leased our first an­swer to that, the tech­ni­cal pre­view of the GitButler CLI.

This is a tool designed for the GitHub Flow style: the short-lived-branch, trunk-based workflows that so many of us are using. This is a tool designed for humans, designed for agents, designed for scripting. Designed to stack branches, to multitask, to control and organize your changes, to easily undo - to be simple, powerful and intuitive, no matter who (or what) you are. Best of all, it just drops into any existing Git project.

But of course, that’s just the be­gin­ning. (Damn, I said I was­n’t go­ing to say that…)

There was a tagline at GitHub that I always loved, but I never felt like we lived up to the promise of: “Social Coding.”

While GitHub certainly made it easier to collaborate on open source projects with forks and pull requests, it otherwise didn't much improve the process of working together. There are still lists of issues and kanban boards, there are still patches (we just call them PRs now), we still chat in external chat rooms. We don't look at commit messages, and our PR descriptions aren't stored in Git and are usually lost to history. Heck, it could be argued that development in teams is less social than it was when version control was centralized.

But what if coding was actually social? What if it was easier for a team to work together than it is to work alone?

Imagine your version control tool taking what you've worked on and helping you craft logical, beautiful changes with proper context. Imagine being able to access agent interactions, related conversations and other information we're currently losing. Imagine your tools telling you as soon as there are possible merge conflicts between teammates, rather than at the end of the process. Imagine being able to work on a branch stacked on a coworker's branch while you're both constantly modifying them. Imagine your agent being fully aware of not only what your other agents are working on, but what everyone on your team is working on, right now.
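That early-conflict-warning idea isn't magic, by the way: plain Git can already answer "would these two branches conflict?" without touching your working tree, via `git merge-tree`, which performs the merge entirely in memory. A sketch of the check a tool could run continuously in the background (assumes `git` is installed; the function name and wrapper are mine, not GitButler's):

```python
import subprocess

def would_conflict(repo, branch_a, branch_b):
    """Return True if merging branch_b into branch_a would produce conflicts.

    Uses the three-argument form of `git merge-tree`, which merges in
    memory and prints conflict markers without modifying the working
    tree or the index.
    """
    git = ["git", "-C", repo]
    base = subprocess.run(
        git + ["merge-base", branch_a, branch_b],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    merged = subprocess.run(
        git + ["merge-tree", base, branch_a, branch_b],
        capture_output=True, text=True, check=True,
    ).stdout
    # Conflicting hunks show up with standard conflict markers.
    return "<<<<<<<" in merged
```

A version-control layer that ran a check like this against teammates' freshly pushed branches could surface conflicts hours before anyone attempts the merge, which is the kind of thing this paragraph is gesturing at.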

There is so much more that this fun­da­men­tal layer of our soft­ware tool­ing could be do­ing for us. This is what we’re do­ing at GitButler, this is why we’ve raised the fund­ing to help build all of this, faster.

We’re build­ing the in­fra­struc­ture for how soft­ware gets built next.

...

Read the original on blog.gitbutler.com »


10HN is also available as an iOS App

If you visit 10HN only rarely, check out the best articles from the past week.

If you like 10HN please leave feedback and share

Visit pancik.com for more.