10 interesting stories served every morning and every evening.




1 596 shares, 68 trendiness

SpaceX

...

Read the original on www.spacex.com »

2 363 shares, 27 trendiness

Todd C. Miller

Note: this page tends to be neglected and is only updated occasionally. The links to the left are where the useful bits are hiding.

For the past 30+ years I've been the maintainer of sudo. I'm currently in search of a sponsor to fund continued sudo maintenance and development. If you or your organization is interested in sponsoring sudo, please let me know.

I also work on OpenBSD, though I'm not as active there as I once was.

In the past, I've made large contributions to ISC cron, among other projects.

...

Read the original on www.millert.dev »

3 359 shares, 19 trendiness

Claude Code is suddenly everywhere inside Microsoft


Developers have been com­par­ing the strengths and weak­nesses of Anthropic’s Claude Code, Anysphere’s Cursor, and Microsoft’s GitHub Copilot for months now, look­ing for a win­ner. While no in­di­vid­ual AI cod­ing tool man­ages to be the best at every task that soft­ware de­vel­op­ers do each day, Claude Code is in­creas­ingly com­ing out on top for its ease of use, both for de­vel­op­ers and non­tech­ni­cal users.

It seems like Microsoft agrees, as sources tell me the com­pany is now en­cour­ag­ing thou­sands of its em­ploy­ees from some of its most pro­lific teams to pick up Claude Code and get cod­ing, even if they’re not de­vel­op­ers.

Microsoft first started adopt­ing Anthropic’s Claude Sonnet 4 model in­side its de­vel­oper di­vi­sion in June last year, be­fore fa­vor­ing it for paid users of GitHub Copilot sev­eral months later. Now, Microsoft is go­ing a step be­yond us­ing Anthropic’s AI mod­els and widely adopt­ing Claude Code across its biggest en­gi­neer­ing teams.

Microsoft's CoreAI team, the new AI engineering group led by former Meta engineering chief Jay Parikh, has been testing Claude Code in recent months, and last week Microsoft's Experiences + Devices division was asked to install Claude Code. This division is responsible for Windows, Microsoft 365, Outlook, Microsoft Teams, Surface, and more.

Even em­ploy­ees with­out any cod­ing ex­pe­ri­ence are be­ing en­cour­aged to ex­per­i­ment with Claude Code, to al­low de­sign­ers and pro­ject man­agers to pro­to­type ideas. Microsoft has also ap­proved the use of Claude Code across all of its code and repos­i­to­ries for its Business and Industry Copilot teams.

Software engineers at Microsoft are now expected to use both Claude Code and GitHub Copilot and give feedback comparing the two, I'm told. Microsoft sells GitHub Copilot as its AI coding tool of choice to its customers, but if these broad internal pilot programs are successful, it's possible the company could eventually sell Claude Code directly to its cloud customers.

Microsoft is now one of Anthropic’s top cus­tomers, ac­cord­ing to a re­cent re­port from The Information. The soft­ware maker is also count­ing sell­ing Anthropic AI mod­els to­ward Azure sales quo­tas, which is un­usual given Microsoft typ­i­cally only of­fers its sales­peo­ple in­cen­tives for home­grown prod­ucts or mod­els from OpenAI.

Microsoft’s de­ci­sion to adopt Claude Code more broadly among its en­gi­neer­ing teams cer­tainly looks like a vote of con­fi­dence in Anthropic’s AI tools over its own, es­pe­cially as it’s en­cour­ag­ing non­tech­ni­cal em­ploy­ees to try out cod­ing. But the re­al­ity is that Microsoft’s de­vel­op­ers are likely to use a mix of AI tools, and adopt­ing Claude Code is an­other part of that tool set.

“Companies regularly test and trial competing products to gain a better understanding of the market landscape,” says Frank Shaw, Microsoft's communications chief, in a statement to Notepad. “OpenAI continues to be our primary partner and model provider on frontier models, and we remain committed to our long-term partnership.”

While Microsoft re­mains com­mit­ted to OpenAI, it is in­creas­ingly work­ing with Anthropic to bring its mod­els and tools to Microsoft’s own teams and the soft­ware it sells to cus­tomers. Microsoft and Anthropic signed a deal in November that al­lows Microsoft Foundry cus­tomers to get ac­cess to Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. The deal also in­volves Anthropic com­mit­ting to pur­chas­ing $30 bil­lion of Azure com­pute ca­pac­ity.

The big ques­tion here is, what does the in­creased use of Claude Code at Microsoft mean for its more than 100,000 code repos­i­to­ries? Microsoft told me last year that 91 per­cent of its en­gi­neer­ing teams use GitHub Copilot and a va­ri­ety of teams have been us­ing the AI tool to speed up mun­dane tasks. Microsoft’s use of AI tools has been largely re­stricted to soft­ware en­gi­neers, but with Claude Code and Claude Cowork, Anthropic is in­creas­ingly fo­cused on mak­ing cod­ing and non-cod­ing tasks more ap­proach­able, thanks to AI agent ca­pa­bil­i­ties.

Microsoft is embracing the ease of use of Claude Code to allow more nontechnical employees to commit code using AI, and this broad pilot will certainly highlight the challenges and benefits of that shift. It also puts further pressure on junior developer roles, with fears in the industry that these roles are increasingly disappearing because of AI. Microsoft just took another big step toward a future where more autonomous AI agents are creating code, further wresting control from its software engineers.

Microsoft is get­ting ready to show off two of its biggest Xbox games this year, Forza Horizon 6 and Fable, later to­day as part of its Xbox Developer Direct stream. There will also be a first in-depth look at Beast of Reincarnation and at least one other game shown, I’m hear­ing. Double Fine is ready to show off Kiln, a mul­ti­player, team-based brawler. I un­der­stand Double Fine has been hold­ing playtests re­cently, where you play as a spirit that can in­habit pot­tery and carry wa­ter to douse an op­po­nen­t’s kiln and put out a fire.

I would­n’t be sur­prised to see Kiln ap­pear as an early pre­view in the com­ing months, fol­lowed by Forza Horizon 6 in May and then Halo: Campaign Evolved. I keep hear­ing that both Fable and Gears of War: E-Day are cur­rently tar­get­ing a re­lease in the sec­ond half of this year. Microsoft is keen to re­lease new Forza, Gears, Halo, and Fable games in 2026 to mark 25 years of Xbox.


...

Read the original on www.theverge.com »

4 333 shares, 18 trendiness

A terminal emulator application for Android OS, extendible by a variety of packages.

Termux is an Android ter­mi­nal ap­pli­ca­tion and Linux en­vi­ron­ment.

Note that this repos­i­tory is for the app it­self (the user in­ter­face and the ter­mi­nal em­u­la­tion). For the pack­ages in­stal­lable in­side the app, see ter­mux/​ter­mux-pack­ages.

A quick how-to about Termux package management is available at Package Management. It also has info on how to fix “repository is under maintenance or down” errors when running apt or pkg commands.

We are look­ing for Termux Android ap­pli­ca­tion main­tain­ers.

NOTICE: Termux may be unstable on Android 12+. Android OS will kill any (phantom) processes if there are more than 32 of them (the limit is for all apps combined) and will also kill any processes using excessive CPU. You may get a [Process completed (signal 9) - press Enter] message in the terminal without actually exiting the shell process yourself. Check the related issue #2366, the issue tracker, the phantom, cached and empty processes docs, and this TLDR comment on how to disable trimming of phantom and excessive-CPU-usage processes. A proper docs page will be added later. An option to disable the killing should be available in Android 12L or 13, so upgrade at your own risk if you are on Android 11, especially if you are not rooted.

The core Termux app comes with the fol­low­ing op­tional plu­gin apps.

NOTICE: It is highly rec­om­mended that you up­date to v0.118.0 or higher ASAP for var­i­ous bug fixes, in­clud­ing a crit­i­cal world-read­able vul­ner­a­bil­ity re­ported here. See be­low for in­for­ma­tion re­gard­ing Termux on Google Play.

Termux can be obtained through the various sources listed below, but only Android >= 7 is fully supported for both the app and packages.

Support for both app and pack­ages was dropped for Android 5 and 6 on 2020-01-01 at v0.83, how­ever it was re-added just for the app with­out any sup­port for pack­age up­dates on 2022-05-24 via the GitHub sources. Check here for the de­tails.

The APK files from different sources are signed with different signature keys. The Termux app and all its plugins use the same sharedUserId com.termux, so all their APKs installed on a device must have been signed with the same signature key to work together, and they must therefore all be installed from the same source. Do not attempt to mix them together, i.e. do not try to install an app or plugin from F-Droid and another one from a different source like GitHub. Android Package Manager will also normally not allow installation of APKs with different signatures, and you will get errors on installation like App not installed, Failed to install due to an unknown error, INSTALL_FAILED_UPDATE_INCOMPATIBLE, INSTALL_FAILED_SHARED_USER_INCOMPATIBLE, signatures do not match previously installed version, etc. This restriction can be bypassed with root or with custom ROMs.

If you wish to install from a different source, you must first uninstall any and all existing Termux or Termux plugin APKs from your device, then install all the new APKs from the same new source. Check the Uninstallation section for details. You may also want to consider Backing up Termux before the uninstallation so that you can restore it after re-installing from the different source.

In the following paragraphs, “bootstrap” refers to the minimal packages that are shipped with the termux-app itself to start a working shell environment. Its zips are built and released here.

Termux ap­pli­ca­tion can be ob­tained from F-Droid from here.

You do not need to down­load the F-Droid app (via the Download F-Droid link) to in­stall Termux. You can down­load the Termux APK di­rectly from the site by click­ing the Download APK link at the bot­tom of each ver­sion sec­tion.

It usu­ally takes a few days (or even a week or more) for up­dates to be avail­able on F-Droid once an up­date has been re­leased on GitHub. The F-Droid re­leases are built and pub­lished by F-Droid once they de­tect a new GitHub re­lease. The Termux main­tain­ers do not have any con­trol over the build­ing and pub­lish­ing of the Termux apps on F-Droid. Moreover, the Termux main­tain­ers also do not have ac­cess to the APK sign­ing keys of F-Droid re­leases, so we can­not re­lease an APK our­selves on GitHub that would be com­pat­i­ble with F-Droid re­leases.

The F-Droid app of­ten may not no­tify you of up­dates and you will man­u­ally have to do a pull down swipe ac­tion in the Updates tab of the app for it to check up­dates. Make sure bat­tery op­ti­miza­tions are dis­abled for the app, check https://​don­tkillmyapp.com/ for de­tails on how to do that.

Only a uni­ver­sal APK is re­leased, which will work on all sup­ported ar­chi­tec­tures. The APK and boot­strap in­stal­la­tion size will be ~180MB. F-Droid does not sup­port ar­chi­tec­ture spe­cific APKs.

Termux ap­pli­ca­tion can be ob­tained on GitHub ei­ther from GitHub Releases for ver­sion >= 0.118.0 or from GitHub Build Action work­flows. For an­droid >= 7, only in­stall apt-an­droid-7 vari­ants. For an­droid 5 and 6, only in­stall apt-an­droid-5 vari­ants.

The APKs for GitHub Releases will be listed un­der Assets drop-down of a re­lease. These are au­to­mat­i­cally at­tached when a new ver­sion is re­leased.

The APKs for GitHub Build ac­tion work­flows will be listed un­der Artifacts sec­tion of a work­flow run. These are cre­ated for each com­mit/​push done to the repos­i­tory and can be used by users who don’t want to wait for re­leases and want to try out the lat­est fea­tures im­me­di­ately or want to test their pull re­quests. Note that for ac­tion work­flows, you need to be logged into a GitHub ac­count for the Artifacts links to be en­abled/​click­able. If you are us­ing the GitHub app, then make sure to open work­flow link in a browser like Chrome or Firefox that has your GitHub ac­count logged in since the in-app browser may not be logged in.

The APKs for both of these are de­bug­gable and are com­pat­i­ble with each other but they are not com­pat­i­ble with other sources.

Both uni­ver­sal and ar­chi­tec­ture spe­cific APKs are re­leased. The APK and boot­strap in­stal­la­tion size will be ~180MB if us­ing uni­ver­sal and ~120MB if us­ing ar­chi­tec­ture spe­cific. Check here for de­tails.

Security warning: APK files on GitHub are signed with a test key that has been shared with the community. This IS NOT an official developer key and anyone can use it to generate releases for their own testing. Be very careful when using Termux GitHub builds obtained anywhere other than https://github.com/termux/termux-app. Anyone is able to use the key to forge a malicious Termux update installable over the GitHub build. Think twice about installing Termux builds distributed via Telegram or other social media. If your device gets infected by malware, we will not be able to help you.

The test key shall not be used to im­per­son­ate @termux and can’t be used for this any­way. This key is not trusted by us and it is quite easy to de­tect its use in user gen­er­ated con­tent.

There is cur­rently a build of Termux avail­able on Google Play for Android 11+ de­vices, with ex­ten­sive ad­just­ments in or­der to pass pol­icy re­quire­ments there. This is un­der de­vel­op­ment and has miss­ing func­tion­al­ity and bugs (see here for sta­tus up­dates) com­pared to the sta­ble F-Droid build, which is why most users who can should still use F-Droid or GitHub build as men­tioned above.

Currently, Google Play will try to update installations away from F-Droid ones. Updating will still fail, as sharedUserId has been removed. A planned 0.118.1 F-Droid release will fix this by setting a higher version code than the one used for the Play Store app. Meanwhile, to prevent Google Play from attempting to download and then failing to install the Google Play releases over existing installations, you can open the Termux app pages on Google Play, click on the 3-dot options button in the top right, and disable the Enable auto update toggle. However, the Termux app updates will still show in the Play Store app updates list.

If you want to help out with test­ing the Google Play build (or can­not in­stall Termux from other sources), be aware that it’s built from a sep­a­rate repos­i­tory (https://​github.com/​ter­mux-play-store/) - be sure to re­port is­sues there, as any is­sues en­coun­tered might very well be spe­cific to that repos­i­tory.

Uninstallation may be re­quired if a user does­n’t want Termux in­stalled in their de­vice any­more or is switch­ing to a dif­fer­ent in­stall source. You may also want to con­sider Backing up Termux be­fore the unin­stal­la­tion.

To unin­stall Termux com­pletely, you must unin­stall any and all ex­ist­ing Termux or its plu­gin app APKs listed in Termux App and Plugins.

Go to Android Settings -> Applications and then look for those apps. You can also use the search fea­ture if it’s avail­able on your de­vice and search ter­mux in the ap­pli­ca­tions list.

Even if you think you have not in­stalled any of the plu­g­ins, it’s strongly sug­gested to go through the ap­pli­ca­tion list in Android set­tings and dou­ble-check.

All com­mu­nity links are avail­able here.

The main ones are the fol­low­ing.

You can help debug problems in the Termux app and its plugins by setting the appropriate logcat Log Level in Termux app settings -> -> Debugging -> Log Level (Requires Termux app version >= 0.118.0). The Log Level defaults to Normal, and the Verbose log level currently logs additional information. It's best to revert the log level to Normal after you have finished debugging, since private data may otherwise be passed to logcat during normal operation, and moreover, additional logging increases execution time.

The plu­gin apps do not ex­e­cute the com­mands them­selves but send ex­e­cu­tion in­tents to Termux app, which has its own log level which can be set in Termux app set­tings -> Termux -> Debugging -> Log Level. So you must set log level for both Termux and the re­spec­tive plu­gin app set­tings to get all the info.

Once log lev­els have been set, you can run the log­cat com­mand in Termux app ter­mi­nal to view the logs in re­al­time (Ctrl+c to stop) or use log­cat -d > log­cat.txt to take a dump of the log. You can also view the logs from a PC over ADB. For more in­for­ma­tion, check of­fi­cial an­droid log­cat guide here.

Moreover, users can also automatically generate termux files stat info and a logcat dump with the terminal's long-hold options menu More -> Report Issue option and by selecting YES in the prompt shown to add debug info. This can be helpful for reporting and debugging other issues. If the generated report is too large, the Save To File option in the context menu (3 dots on top right) of ReportActivity can be used and the file viewed/shared instead.

Users must post com­plete re­port (optionally with­out sen­si­tive info) when re­port­ing is­sues. Issues opened with (partial) screen­shots of er­ror re­ports in­stead of text will likely be au­to­mat­i­cally closed/​deleted.

The termux-shared library was added in v0.109. It defines shared constants and utils of the Termux app and its plugins. It was created to allow for the removal of all hardcoded paths in the Termux app. Some of the termux plugins use this as well, and the rest will in the future. If you are contributing code that uses a constant or a util that may be shared, then define it in the termux-shared library if it doesn't already exist and reference it from there. Update the relevant changelogs as well. Pull requests using hardcoded values will/should not be accepted. Termux app and plugin specific classes must be added under the com.termux.shared.termux package and general classes outside it. The termux-shared LICENSE must also be checked and updated if necessary when contributing code. The licenses of any external library or code must be honoured.

The main Termux con­stants are de­fined by TermuxConstants class. It also con­tains in­for­ma­tion on how to fork Termux or build it with your own pack­age name. Changing the pack­age name will re­quire build­ing the boot­strap zip pack­ages and other pack­ages with the new $PREFIX, check Building Packages for more info.

Check Termux Libraries for how to im­port ter­mux li­braries in plu­gin apps and Forking and Local Development for how to up­date ter­mux li­braries for plu­g­ins.

The versionName in build.gradle files of Termux and its plugin apps must follow the semantic version 2.0.0 spec in the format major.minor.patch(-prerelease)(+buildmetadata). When bumping versionName in build.gradle files and when creating a tag for new releases on GitHub, make sure to include the patch number as well, like v0.1.0 instead of just v0.1. The build.gradle files and the attach_debug_apks_to_release workflow validate the version as well, and the build/attachment will fail if versionName does not follow the spec.
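
To illustrate the required format, here is a simplified sketch in Python of the kind of pattern involved; it is not the actual check used by the build.gradle files or the attach_debug_apks_to_release workflow, and the full semver 2.0.0 grammar is stricter about prerelease and build identifiers.

```python
import re

# Simplified sketch of major.minor.patch(-prerelease)(+buildmetadata)
SEMVER_RE = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"  # major.minor.patch
    r"(-[0-9A-Za-z.-]+)?"                          # optional -prerelease
    r"(\+[0-9A-Za-z.-]+)?$"                        # optional +buildmetadata
)

for version_name in ("0.1.0", "0.118.0-beta.1+debug", "0.1"):
    # "0.1" fails because the patch number is missing
    print(version_name, bool(SEMVER_RE.match(version_name)))
```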

Commit messages must use the Conventional Commits spec so that changelogs following the Keep a Changelog spec can be generated automatically by the create-conventional-changelog script; check its repo for further details on the spec. The first letter of the type and of the description must be capitalized, and the description should be in the present tense. The space after the colon : is necessary. For a breaking change, add an exclamation mark ! before the colon :, so that it is highlighted in the changelog automatically.

Only the types listed below must be used, exactly as they appear in the changelog headings (see the sketch after the list below). For example, Added: Add foo, Added|Fixed: Add foo and fix bar, Changed!: Change baz as a breaking change, etc. You can optionally add a scope as well, like Fixed(terminal): Fix some bug. Do not use anything else as the type, like add instead of Added, etc.

* Changed for changes in ex­ist­ing func­tion­al­ity.
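
As a rough sketch of how such commit subjects can be checked, the Python below is illustrative only; it is not the create-conventional-changelog script itself, and the allowed type list is an assumption based on the Keep a Changelog heading names.

```python
import re

# Assumed type set, taken from the Keep a Changelog heading names.
ALLOWED_TYPES = {"Added", "Changed", "Deprecated", "Removed", "Fixed", "Security"}

SUBJECT_RE = re.compile(
    r"^(?P<types>[A-Za-z]+(\|[A-Za-z]+)*)"  # one or more types, e.g. Added|Fixed
    r"(\((?P<scope>[^)]+)\))?"              # optional scope, e.g. (terminal)
    r"(?P<breaking>!)?"                     # optional breaking-change marker
    r": (?P<description>[A-Z].+)$"          # capitalized description after ": "
)

def check_subject(subject: str) -> bool:
    match = SUBJECT_RE.match(subject)
    if not match:
        return False
    return all(t in ALLOWED_TYPES for t in match.group("types").split("|"))

print(check_subject("Added|Fixed: Add foo and fix bar"))           # True
print(check_subject("Changed!: Change baz as a breaking change"))  # True
print(check_subject("add: lowercase type is rejected"))            # False
```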

* Check TermuxConstants javadocs for in­struc­tions on what changes to make in the app to change pack­age name.

* You also need to re­com­pile boot­strap zip for the new pack­age name. Check build­ing boot­strap, here and here.

* Currently, not all plugins use TermuxConstants from the termux-shared library; some still have hardcoded com.termux values and will need to be manually patched.

* If fork­ing ter­mux plu­g­ins, check Forking and Local Development for info on how to use ter­mux li­braries for plu­g­ins.

...

Read the original on github.com »

5 310 shares, 31 trendiness

Anki's Growing Up

Anki’s 19th birth­day was about 4 months ago. It would have been a good time to pause and re­flect on what Anki has be­come, and how it will grow in the fu­ture. But I ended up let­ting the mo­ment come and go, as I did­n’t feel like I had the free time. It’s a feel­ing that’s been re­gret­tably com­mon of late, and I’ve come to re­alise that some­thing has to change.

For a number of years, I've reached out to some of the most prolific contributors and offered them payment in exchange for them contributing more code or support to Anki. That has been a big help, and I'm very grateful for their contributions. But there is a lot that I haven't been able to delegate. With no previous management experience, I was a bit daunted by the thought of seeking out and managing employees. And with so much to get on with, it always got put in the “maybe later” basket.

As Anki slowly grew in pop­u­lar­ity, so did its de­mands on my time. I was of course de­lighted to see it reach­ing more peo­ple, and to have played a part in its suc­cess. But I also felt a big sense of re­spon­si­bil­ity, and did not want to let peo­ple down. That led to un­sus­tain­ably long hours and con­stant stress, which took a toll on my re­la­tion­ships and well-be­ing.

The parts of the job that drew me to start working on Anki (the ‘deep work’, solving interesting technical problems without constant distractions) have mostly fallen by the wayside. I find myself reactively responding to the latest problem or post instead of proactively moving things forward, which is neither as enjoyable as it once was, nor the best thing for the project.

There have been many of­fers to in­vest in or buy Anki over the years, but I’ve al­ways shut them down quickly, as I had no con­fi­dence that these in­vest­ment-fo­cused peo­ple would be good stew­ards, and not pro­ceed down the typ­i­cal path of en­shit­ti­fi­ca­tion that is un­for­tu­nately so com­mon in VC and PE-backed ven­tures.

Some months ago, the AnkiHub folks reached out to me, want­ing to dis­cuss work­ing more closely to­gether in the fu­ture. Like oth­ers in the com­mu­nity, they were keen to see Anki’s de­vel­op­ment pace im­prove. We’ve had a sym­bi­otic re­la­tion­ship for years, with their con­tent cre­ation and col­lab­o­ra­tion plat­form dri­ving more users to Anki. They’ve man­aged to scale up much faster than I did, and have built out an im­pres­sive team.

During the course of those talks, I came to the re­al­i­sa­tion that AnkiHub is bet­ter po­si­tioned to take Anki to the next level than I am. I ended up sug­gest­ing to them that we look into grad­u­ally tran­si­tion­ing busi­ness op­er­a­tions and open source stew­ard­ship over, with pro­vi­sions in place to en­sure that Anki re­mains open source and true to the prin­ci­ples I’ve run it by all these years.

This is a step back for me rather than a goodbye - I will still be involved with the project, albeit at a more sustainable level. I've spent 19 years looking after my “baby”, and I want to see it do well as it grows up.

I’m con­fi­dent this change will be a net pos­i­tive for both users and de­vel­op­ers. Removing me as a bot­tle­neck will al­low things to move faster, en­cour­age a more col­lab­o­ra­tive ap­proach, and free up time for im­prove­ments that have been hard to pri­ori­tise, like UI pol­ish. It also means the ecosys­tem will no longer be in jeop­ardy if I’m one day hit by a bus.

It’s nat­ural to feel ap­pre­hen­sive about change, but as the ben­e­fits be­come clearer over the com­ing months, I sus­pect many of you will come to wish this change had hap­pened sooner.

Thank you to every­one who has con­tributed to mak­ing Anki bet­ter up un­til now. I’m ex­cited for Anki’s fu­ture, and can’t wait to see what we can build to­gether in this next stage.

We ini­tially reached out to @dae to ex­plore col­lab­o­rat­ing more closely on im­prov­ing Anki. We were both hum­bled and shocked when he asked if we’d be will­ing to step into a much larger lead­er­ship role than we ex­pected.

At this point, we’re mostly ex­cited…and also feel­ing a healthy amount of ter­ror. This is a big re­spon­si­bil­ity. It will push us to grow as in­di­vid­u­als, as a team, and as a com­mu­nity, and we don’t take that lightly.

We’re grate­ful for the trust Damien and oth­ers have placed in us. And we also know that trust has to be earned, es­pe­cially from peo­ple who don’t know us yet.

What We Believe

We be­lieve Anki is al­most sa­cred, some­thing big­ger than any one per­son or or­ga­ni­za­tion. In an im­por­tant sense, it be­longs to the com­mu­nity.

This article highlights the principles Damien built Anki on; principles we deeply share, such as respect for user agency, refusal of manipulative design patterns, and an emphasis on the craft of building genuinely useful tools that aren't merely engaging. Anki has never tried to maximize “engagement” by exploiting psychological vulnerabilities purely for profit. Anki gives your time back to you, and that is an exceptional rarity in this world that we want to preserve.

As an or­ga­ni­za­tion built by stu­dents, for stu­dents, our mis­sion is to con­tinue em­body­ing these prin­ci­ples. We are ac­count­able only to you, our users, not ex­ter­nal in­vestors, and we plan to keep it that way.

What We Don’t Know Yet

We can’t an­swer every ques­tion right away, as there are many un­knowns since much has­n’t been de­cided yet. But we are shar­ing every­thing we can now be­cause the com­mu­nity is im­por­tant to us. We en­cour­age you all to share your thoughts and ques­tions — we’re all in this to­gether!

We’re still work­ing through the de­tails on things like:

Governance and de­ci­sion-mak­ing: How de­ci­sions are made, who has fi­nal say, and how the com­mu­nity is heard

Roadmap and pri­or­i­ties: What gets built when and how to bal­ance com­pet­ing needs

The tran­si­tion it­self: How to bring in more sup­port with­out dis­rupt­ing what al­ready works

Anki has shown how pow­er­ful com­mu­nity col­lab­o­ra­tion can be when it’s gen­uinely a group ef­fort, and that’s a tra­di­tion we are hon­ored to con­tinue.

We’re cur­rently talk­ing to David Allison, a long-time core con­trib­u­tor to AnkiDroid, about work­ing to­gether on ex­actly these ques­tions. His ex­pe­ri­ence with AnkiDroid’s col­lab­o­ra­tive de­vel­op­ment is in­valu­able, and we’re grate­ful he’s will­ing to help us get this right. We’re in­cred­i­bly ex­cited to have him join us full-time to help pro­pel Anki into the fu­ture.

UI/UX im­prove­ments. We’re bring­ing pro­fes­sional de­sign ex­per­tise on board to make it more ap­proach­able with­out sac­ri­fic­ing Anki’s power. We be­lieve that prin­ci­pled de­sign will bring mean­ing­ful qual­ity of life im­prove­ments to power users and novices alike.

Addressing the bus fac­tor. The ecosys­tem should­n’t be in jeop­ardy if any one per­son dis­ap­pears. We want to build soft­ware that lives be­yond any sin­gle con­trib­u­tor.

Supporting more than just med stu­dents. AnkiHub grew out of the med­ical ed­u­ca­tion com­mu­nity, but Anki serves learn­ers from all walks of life, and we want to sup­port every­one to achieve their learn­ing goals.

A more ro­bust add-on ecosys­tem. We’d love to build tools that em­power non-tech­ni­cal users to cus­tomize Anki for their needs, and we’re ex­plor­ing add-ons that work every­where, in­clud­ing mo­bile.

We want to pro­vide trans­parency into the de­ci­sion-mak­ing process, tak­ing in­spi­ra­tion from proven mod­els to:

Give the com­mu­nity clar­ity on how to be heard and give feed­back

Make it clear how de­ci­sions are made and why

Define roles and re­spon­si­bil­i­ties so things don’t fall through the cracks

We want to bring every­one in the global Anki com­mu­nity to­gether into a closer col­lab­o­ra­tion fo­cused on build­ing the best learn­ing tools pos­si­ble. Today, these groups of­ten work in si­los; a more uni­fied process will help every­one move Anki for­ward to­gether.

Sustainability, af­ford­abil­ity, and ac­ces­si­bil­ity. We’re com­mit­ted to a sus­tain­able busi­ness model that keeps Anki ac­ces­si­ble and pri­or­i­tizes user needs above prof­its. If any­thing ever needs to change, we’ll be trans­par­ent about why.

No en­shit­ti­fi­ca­tion. We’ve seen what hap­pens when VC-backed com­pa­nies ac­quire beloved tools. That’s not what this is. There are no in­vestors in­volved, and we’re not here to ex­tract value from some­thing the com­mu­nity built to­gether. Building in the right safe­guards and processes to han­dle pres­sure with­out sti­fling nec­es­sary im­prove­ments is some­thing we’re ac­tively con­sid­er­ing.

We're grateful to Damien et al. for their trust and support, and grateful to all of you for the passion that makes this community so special.

We wel­come your ques­tions, con­cerns, and feed­back.

AnkiHub is a small ed­u­ca­tion tech­nol­ogy com­pany founded by two long-time Anki nerds: Nick, a res­i­dent physi­cian known as The AnKing, and Andrew Sanchez, a re­search soft­ware en­gi­neer. AnkiHub grew out of years of ob­ses­sive Anki use and first­hand ex­pe­ri­ence with both its power and its lim­i­ta­tions.

AnkiHub be­gan as a way to col­lab­o­rate on Anki decks (such as the AnKing Step Deck for med­ical stu­dents) and has since evolved into a broader ef­fort to im­prove the Anki ecosys­tem by build­ing tools that help more peo­ple ben­e­fit from Anki.

Absolutely. Anki’s core code will re­main open source, guided by the same prin­ci­ples that have guided the pro­ject from the be­gin­ning.

Are there any changes planned to Anki’s pric­ing?

No. We are com­mit­ted to fair pric­ing that sup­ports users rather than ex­ploit­ing them. Both Anki and AnkiHub are al­ready prof­itable. Any fu­ture de­ci­sions will be made with com­mu­nity ben­e­fit, user value, and long-term pro­ject health in mind.

Is Anki in fi­nan­cial trou­ble?

No. The tran­si­tion is dri­ven by the goal of help­ing Anki reach its full po­ten­tial, not by fi­nan­cial is­sues. Our goal is to build a re­silient struc­ture and ac­cel­er­ate de­vel­op­ment.

What is the time­line?

Our in­ten­tion is to build con­fi­dence and earn trust while mak­ing grad­ual changes. The tran­si­tion will be trans­par­ent, with clear com­mu­ni­ca­tion through­out.

What hap­pens to vol­un­teer con­trib­u­tors and com­mu­nity de­vel­op­ers?

Volunteer con­trib­u­tors will al­ways be es­sen­tial to Anki. Our goal is to make it eas­ier to col­lab­o­rate mean­ing­fully.

Will the mo­bile apps change or be re­moved from the app stores?

The mo­bile apps will con­tinue to be main­tained and sup­ported. Additional de­vel­op­ment ca­pac­ity should help with faster up­dates, bet­ter test­ing, and more con­sis­tent im­prove­ments across plat­forms over time.

How much in­flu­ence will in­vestors or ex­ter­nal part­ners have on Anki af­ter the tran­si­tion?

None. Both Anki and AnkiHub are en­tirely self-funded. There are no out­side in­vestors dic­tat­ing prod­uct de­ci­sions, growth tar­gets, or mon­e­ti­za­tion strat­egy.

What will hap­pen with AnkiHub?

AnkiHub will con­tinue to op­er­ate as usual, but now our teams are work­ing to­gether to im­prove both so­lu­tions. The only change you should no­tice is that, over time, every­thing be­comes much eas­ier to use.

We’ll share more up­dates as they hap­pen in the fu­ture.

What will hap­pen with the cur­rent AnkiHub sub­scrip­tions?

AnkiHub sub­scrip­tions en­hance Anki with col­lab­o­ra­tive fea­tures, shared deck sync­ing, and LLM-based fea­tures and that is­n’t chang­ing at this time.

What will hap­pen with AnkiDroid?

AnkiDroid will re­main an open-source, self-gov­erned pro­ject. There are no plans or agree­ments re­gard­ing AnkiDroid.

How will de­ci­sions be made and com­mu­ni­cated?

Anki is open-source, and we will build on and im­prove its cur­rent de­ci­sion-mak­ing processes. We will work in pub­lic when­ever pos­si­ble and seek con­sen­sus from core con­trib­u­tors. Significant de­ci­sions, choices, and their out­comes will be doc­u­mented on GitHub or in the source code. When a change ma­te­ri­ally af­fects users or de­vel­op­ers, the rea­son­ing be­hind it and its im­pact will be com­mu­ni­cated pub­licly. In the com­ing weeks, we will work on defin­ing a more for­mal gov­er­nance model to set clear ex­pec­ta­tions.

Will there be a pub­lic gov­er­nance model, ad­vi­sory board, or other ac­count­abil­ity struc­ture?

We’re ex­plor­ing what makes sense here, and we don’t want to rush it.

Historically, Anki has re­lied more on trust and stew­ard­ship than on for­mal gov­er­nance. We want to pre­serve that spirit while im­prov­ing trans­parency. Our goal is to es­tab­lish a gov­er­nance struc­ture that sup­ports the com­mu­nity and im­proves clar­ity and ac­count­abil­ity with­out bur­den­some bu­reau­cracy.

How will the tran­si­tion af­fect add-ons and their de­vel­op­ers?

Add-ons are a crit­i­cal part of the ecosys­tem.

Our in­tent is to make life eas­ier for add-on de­vel­op­ers: clearer APIs, bet­ter doc­u­men­ta­tion, fewer break­ing changes, and more pre­dictable re­lease cy­cles. The goal is not to lock down or re­strict the add-on space, but rather to en­hance it.

What new re­sources will Anki gain through this tran­si­tion?

The biggest change is bandwidth: enabling more people to work on Anki without everything being bottlenecked through a single person. This will take time, but it will eventually translate into more engineering, design, and support capacity.

What steps will be taken to make Anki more ac­ces­si­ble, sta­ble, and be­gin­ner-friendly?

There is a lot of low-hang­ing fruit that we plan to tackle: im­prov­ing on­board­ing for new users, pol­ish­ing rough edges, and ad­dress­ing long-stand­ing us­abil­ity is­sues. These are ex­actly the kinds of im­prove­ments that have been dif­fi­cult to tackle un­der con­stant time pres­sure, and we’re ex­cited to in­vest in them.

Will com­mu­nity feed­back still mean­ing­fully in­flu­ence the pro­jec­t’s di­rec­tion?

Yes. Anki ex­ists be­cause of its com­mu­nity: users, con­trib­u­tors, add-on de­vel­op­ers, trans­la­tors, and ed­u­ca­tors. Feedback won’t al­ways trans­late into im­me­di­ate changes, but it will al­ways be heard, con­sid­ered, and re­spected.

How will trust be built with users who are skep­ti­cal or anx­ious about the change?

Trust is­n’t some­thing you de­mand; it’s some­thing you earn over time. We in­tend to build trust through con­sis­tent ac­tions: hon­or­ing com­mit­ments, avoid­ing sur­prises, com­mu­ni­cat­ing clearly, and demon­strat­ing that Anki’s val­ues haven’t changed. We hope our past ac­tions will give you some peace of mind, but we also un­der­stand the skep­ti­cism, and we’re pre­pared to meet it with pa­tience and trans­parency.

That led to un­sus­tain­ably long hours and con­stant stress, which took a toll on my re­la­tion­ships and well-be­ing.

If the only pos­i­tive thing this change brings is to al­low you to fo­cus more on your per­sonal life and well-be­ing, I’d still be very op­ti­mistic about the fu­ture of Anki as a whole! It’d be very sad if Anki is im­prov­ing the lives of mil­lions of peo­ple while in­flict­ing pain on its cre­ator.

The parts of the job that drew me to start work­ing on Anki (the deep work’, solv­ing in­ter­est­ing tech­ni­cal prob­lems with­out con­stant dis­trac­tions) have mostly fallen by the way­side. I find my­self re­ac­tively re­spond­ing to the lat­est prob­lem or post in­stead of proac­tively mov­ing things for­ward, which is nei­ther as en­joy­able as it once was, nor the best thing for the pro­ject.

I (partly) know this feeling. The other day I came across an 8-year-old Anki discussion that I shared with @andrewsanchez, and we were like “dae must have some type of superpower to be able to always stay calm and polite dealing with all kinds of people”.

Perhaps it is time for an in­creased AnkiHub pres­ence in the Anki dis­cord

Wow! That is big news. I was notic­ing that the re­lease cy­cles were get­ting longer and longer with the lat­est re­lease be­ing 25.09.2. When I first read the ar­ti­cle I had a neg­a­tive feel­ing and I was re­luc­tant but if Anki re­mains open-source and no paid ser­vices are forced in the pref­er­ences etc, it may be a good de­ci­sion at the end… Just my 2 cents.

Thank you, Dae, for main­tain­ing Anki for 19 years.

You stood by your prin­ci­ples for a very long time, even when it was not easy. Because of that, Anki be­came some­thing peo­ple trust.

Anki has af­fected mil­lions of lives. Students, doc­tors, and many oth­ers use it every day. Not many peo­ple can say they built some­thing like that.

The legacy you leave is huge. Good luck to any­one try­ing to live up to it.

Thank you for every­thing you did for this pro­ject and the com­mu­nity.

Anki has helped so many strug­gling stu­dents, in­clud­ing me.

Thank you very much @dae and good luck

Well, that’s quite the big piece of news !

Having a full team be­hind Anki will def­i­nitely be a huge op­por­tu­nity for its fu­ture de­vel­op­ment !

Enjoy your time now that your baby lives on its own, @Dae!

Thanks for putting the ef­fort into com­mu­ni­cat­ing it.

I'm a SW developer (/team lead) and started to use anki extensively the last few months after a few years of “flirting with anki”, and it's quite amazing! I also see so much more potential for how it can be improved.

I see a lot of potential benefits for this change, but also some worrying aspects. Namely, the commercial conflict of interest between Anki and making profits. From my experience, as long as such conflicts exist within an organization or the people actually making decisions about a product, it's a slippery slope, slowly leading to the wrong place… Simply having this dilemma in the back of your mind while making a product decision is already a lot.

For my part, I would appreciate communication regarding this worrying aspect and how it is being taken into consideration.

The anki­hub team looks great, con­grats on build­ing such a team, and good luck with this am­bi­tious next step!

I would be happy to be more in­volved in the fu­ture in the pro­ject.

Will this be like AnkiHub that slowly can­ni­bal­ized the free stuff from /r/medicalschoolanki to move every­thing be­hind a pay­wall?

...

Read the original on forums.ankiweb.net »

6 303 shares, 38 trendiness

Court orders restart of all US offshore wind construction

The Trump ad­min­is­tra­tion is no fan of re­new­able en­ergy, but it re­serves spe­cial ire for wind power. Trump him­self has re­peat­edly made false state­ments about the cost of wind power, its use around the world, and its en­vi­ron­men­tal im­pacts. That an­i­mos­ity was paired with an ex­ec­u­tive or­der that blocked all per­mit­ting for off­shore wind and some land-based pro­jects, an or­der that has since been thrown out by a court that ruled it ar­bi­trary and capri­cious.

Not con­tent to block all fu­ture de­vel­op­ments, the ad­min­is­tra­tion has also gone af­ter the five off­shore wind pro­jects cur­rently un­der con­struc­tion. After tem­porar­ily block­ing two of them for rea­sons that were never fully elab­o­rated, the Department of the Interior set­tled on a sin­gle jus­ti­fi­ca­tion for block­ing tur­bine in­stal­la­tion: a clas­si­fied na­tional se­cu­rity risk.

The re­sponse to that late-De­cem­ber an­nounce­ment has been uni­form: The com­pa­nies build­ing each of the pro­jects sued the ad­min­is­tra­tion. As of Monday, every sin­gle one of them has achieved the same re­sult: a tem­po­rary in­junc­tion that al­lows them to con­tinue con­struc­tion. This, de­spite the fact that the suits were filed in three dif­fer­ent courts and heard by four dif­fer­ent judges.

...

Read the original on arstechnica.com »

7 284 shares, 22 trendiness

Hacking Moltbook: AI Social Network Reveals 1.5M API Keys

Moltbook, the weirdly fu­tur­is­tic so­cial net­work, has quickly gone vi­ral as a fo­rum where AI agents post and chat. But what we dis­cov­ered tells a dif­fer­ent story - and pro­vides a fas­ci­nat­ing look into what hap­pens when ap­pli­ca­tions are vibe-coded into ex­is­tence with­out proper se­cu­rity con­trols.

We iden­ti­fied a mis­con­fig­ured Supabase data­base be­long­ing to Moltbook, al­low­ing full read and write ac­cess to all plat­form data. The ex­po­sure in­cluded 1.5 mil­lion API au­then­ti­ca­tion to­kens, 35,000 email ad­dresses, and pri­vate mes­sages be­tween agents. We im­me­di­ately dis­closed the is­sue to the Moltbook team, who se­cured it within hours with our as­sis­tance, and all data ac­cessed dur­ing the re­search and fix ver­i­fi­ca­tion has been deleted.

Moltbook is a social platform designed exclusively for AI agents - positioned as “the front page of the agent internet.” The platform allows AI agents to post content, comment, vote, and build reputation through a karma system, creating what appears to be a thriving social network where AI is the primary participant.

Over the past few days, Moltbook gained significant attention in the AI community. OpenAI founding member Andrej Karpathy described it as “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” noting how agents were “self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately.”

The Moltbook founder explained publicly on X that he “vibe-coded” the platform:

“I didn't write a single line of code for @moltbook. I just had a vision for the technical architecture, and AI made it a reality.”

This prac­tice, while rev­o­lu­tion­ary, can lead to dan­ger­ous se­cu­rity over­sights - sim­i­lar to pre­vi­ous vul­ner­a­bil­i­ties we have iden­ti­fied, in­clud­ing the DeepSeek data leak and Base44 Authentication Bypass.

We con­ducted a non-in­tru­sive se­cu­rity re­view, sim­ply by brows­ing like nor­mal users. Within min­utes, we dis­cov­ered a Supabase API key ex­posed in client-side JavaScript, grant­ing unau­then­ti­cated ac­cess to the en­tire pro­duc­tion data­base - in­clud­ing read and write op­er­a­tions on all ta­bles.

The exposed data told a different story than the platform's public image - while Moltbook boasted 1.5 million registered agents, the database revealed only 17,000 human owners behind them - an 88:1 ratio. Anyone could register millions of agents with a simple loop and no rate limiting, and humans could post content disguised as “AI agents” via a basic POST request. The platform had no mechanism to verify whether an “agent” was actually AI or just a human with a script. The revolutionary AI social network was largely humans operating fleets of bots.

When nav­i­gat­ing to Moltbook’s web­site, we ex­am­ined the client-side JavaScript bun­dles loaded au­to­mat­i­cally by the page. Modern web ap­pli­ca­tions bun­dle con­fig­u­ra­tion val­ues into sta­tic JavaScript files, which can in­ad­ver­tently ex­pose sen­si­tive cre­den­tials. This is a re­cur­ring pat­tern we’ve ob­served in vibe-coded ap­pli­ca­tions - API keys and se­crets fre­quently end up in fron­tend code, vis­i­ble to any­one who in­spects the page source, of­ten with sig­nif­i­cant se­cu­rity con­se­quences.

By an­a­lyz­ing the pro­duc­tion JavaScript file at -

The dis­cov­ery of these cre­den­tials does not au­to­mat­i­cally in­di­cate a se­cu­rity fail­ure, as Supabase is de­signed to op­er­ate with cer­tain keys ex­posed to the client - the real dan­ger lies in the con­fig­u­ra­tion of the back­end they point to.

Supabase is a pop­u­lar open-source Firebase al­ter­na­tive pro­vid­ing hosted PostgreSQL data­bases with REST APIs. It’s be­come es­pe­cially pop­u­lar with vibe-coded ap­pli­ca­tions due to its ease of setup. When prop­erly con­fig­ured with Row Level Security (RLS), the pub­lic API key is safe to ex­pose - it acts like a pro­ject iden­ti­fier. However, with­out RLS poli­cies, this key grants full data­base ac­cess to any­one who has it.

In Moltbook’s im­ple­men­ta­tion, this crit­i­cal line of de­fense was miss­ing.

Using the dis­cov­ered API key, we tested whether the rec­om­mended se­cu­rity mea­sures were in place. We at­tempted to query the REST API di­rectly - a re­quest that should have re­turned an empty ar­ray or an au­tho­riza­tion er­ror if RLS were ac­tive.

Instead, the data­base re­sponded ex­actly as if we were an ad­min­is­tra­tor. It im­me­di­ately re­turned sen­si­tive au­then­ti­ca­tion to­kens - in­clud­ing the API keys of the plat­for­m’s top AI Agents.

This con­firmed unau­then­ti­cated ac­cess to user cre­den­tials that would al­low com­plete ac­count im­per­son­ation of any user on the plat­form.
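
For illustration, an unauthenticated PostgREST query against a Supabase backend looks roughly like the sketch below; the project URL, key, and query details are placeholders, not Moltbook's real values.

```python
import requests

# Placeholder values; in practice the project URL and anon key come from the JS bundle.
SUPABASE_URL = "https://<project-ref>.supabase.co"
ANON_KEY = "<public-anon-key-from-the-js-bundle>"

resp = requests.get(
    f"{SUPABASE_URL}/rest/v1/agents",
    headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
    params={"select": "*", "limit": "5"},
)

# With Row Level Security enabled, this should return an empty array or an
# authorization error; without it, rows (including tokens) come back to any caller.
print(resp.status_code, resp.json())
```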

By lever­ag­ing Supabase’s PostgREST er­ror mes­sages, we enu­mer­ated ad­di­tional ta­bles. Querying non-ex­is­tent table names re­turned hints re­veal­ing the ac­tual schema.

Using this tech­nique com­bined with GraphQL in­tro­spec­tion, we mapped the com­plete data­base schema and found around ~4.75 mil­lion records ex­posed.

The agents table exposed authentication credentials for every registered agent in the database:

- claim_­to­ken - Token used to claim own­er­ship of an agent

With these cre­den­tials, an at­tacker could fully im­per­son­ate any agent on the plat­form - post­ing con­tent, send­ing mes­sages, and in­ter­act­ing as that agent. This in­cluded high-karma ac­counts and well-known per­sona agents. Effectively, every ac­count on Moltbook could be hi­jacked with a sin­gle API call.

Additionally, by querying the GraphQL endpoint, we discovered a new observers table containing 29,631 additional email addresses - these were early access signups for Moltbook's upcoming “Build Apps for AI Agents” product.

Unlike Twitter han­dles which were pub­licly dis­played on pro­files, email ad­dresses were meant to stay pri­vate - but were fully ex­posed in the data­base.

While ex­am­in­ing this table to un­der­stand agent-to-agent in­ter­ac­tions, we dis­cov­ered that con­ver­sa­tions were stored with­out any en­cryp­tion or ac­cess con­trols — some con­tained third-party API cre­den­tials, in­clud­ing plain­text OpenAI API keys shared be­tween agents.

Beyond read ac­cess, we con­firmed full write ca­pa­bil­i­ties. Even af­ter the ini­tial fix that blocked read ac­cess to sen­si­tive ta­bles, write ac­cess to pub­lic ta­bles re­mained open. We tested it and were able to suc­cess­fully mod­ify ex­ist­ing posts on the plat­form.

Proving that any unau­then­ti­cated user could:

- Edit any post on the plat­form

This raises ques­tions about the in­tegrity of all plat­form con­tent - posts, votes, and karma scores - dur­ing the ex­po­sure win­dow.
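
The write check can be sketched the same way; again, the URL, key, table, and id below are placeholders rather than the actual values used during testing.

```python
import requests

# Placeholder values; a PATCH like this should be rejected when RLS is configured.
resp = requests.patch(
    "https://<project-ref>.supabase.co/rest/v1/posts",
    headers={
        "apikey": "<public-anon-key>",
        "Authorization": "Bearer <public-anon-key>",
        "Content-Type": "application/json",
        "Prefer": "return=representation",  # ask PostgREST to return the modified row
    },
    params={"id": "eq.<post-id>"},
    json={"content": "edited by an unauthenticated client"},
)

# A 2xx status with the modified row means write access is open to anyone.
print(resp.status_code, resp.json())
```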

We promptly no­ti­fied the team again to ap­ply write re­stric­tions via RLS poli­cies.

Once the fix was con­firmed, I could no longer re­vert the post as write ac­cess was blocked. The Moltbook team deleted the con­tent a few hours later and thanked us for our re­port.

Vibe cod­ing un­locks re­mark­able speed and cre­ativ­ity, en­abling founders to ship real prod­ucts with un­prece­dented ve­loc­ity - as demon­strated by Moltbook. At the same time, to­day’s AI tools don’t yet rea­son about se­cu­rity pos­ture or ac­cess con­trols on a de­vel­op­er’s be­half, which means con­fig­u­ra­tion de­tails still ben­e­fit from care­ful hu­man re­view. In this case, the is­sue ul­ti­mately traced back to a sin­gle Supabase con­fig­u­ra­tion set­ting - a re­minder of how small de­tails can mat­ter at scale.

The 88:1 agent-to-human ratio shows how “agent internet” metrics can be easily inflated without guardrails like rate limits or identity verification. While Moltbook reported 1.5 million agents, these were associated with roughly 17,000 human accounts, an average of about 88 agents per person. At the time of our review, there were limited guardrails such as rate limiting or validation of agent autonomy. Rather than a flaw, this likely reflects how early the “agent internet” category still is: builders are actively exploring what agent identity, participation, and authenticity should look like, and the supporting mechanisms are still evolving.

Similarly, the plat­for­m’s ap­proach to pri­vacy high­lights an im­por­tant ecosys­tem-wide les­son. Users shared OpenAI API keys and other cre­den­tials in di­rect mes­sages un­der the as­sump­tion of pri­vacy, but a con­fig­u­ra­tion is­sue made those mes­sages pub­licly ac­ces­si­ble. A sin­gle plat­form mis­con­fig­u­ra­tion was enough to ex­pose cre­den­tials for en­tirely un­re­lated ser­vices - un­der­scor­ing how in­ter­con­nected mod­ern AI sys­tems have be­come.

#4. Write Access Introduces Far Greater Risk Than Data Exposure Alone

While data leaks are bad, the abil­ity to mod­ify con­tent and in­ject prompts into an AI ecosys­tem in­tro­duces deeper in­tegrity risks, in­clud­ing con­tent ma­nip­u­la­tion, nar­ra­tive con­trol, and prompt in­jec­tion that can prop­a­gate down­stream to other AI agents. As AI-driven plat­forms grow, these dis­tinc­tions be­come in­creas­ingly im­por­tant de­sign con­sid­er­a­tions.

Security, es­pe­cially in fast-mov­ing AI prod­ucts, is rarely a one-and-done fix. We worked with the team through mul­ti­ple rounds of re­me­di­a­tion, with each it­er­a­tion sur­fac­ing ad­di­tional ex­posed sur­faces: from sen­si­tive ta­bles, to write ac­cess, to GraphQL-discovered re­sources. This kind of it­er­a­tive hard­en­ing is com­mon in new plat­forms and re­flects how se­cu­rity ma­tu­rity de­vel­ops over time.

Overall, Moltbook il­lus­trates both the ex­cite­ment and the grow­ing pains of a brand-new cat­e­gory. The en­thu­si­asm around AI-native so­cial net­works is well-founded, but the un­der­ly­ing sys­tems are still catch­ing up. The most im­por­tant out­come here is not what went wrong, but what the ecosys­tem can learn as builders, re­searchers, and plat­forms col­lec­tively de­fine the next phase of AI-native ap­pli­ca­tions.

As AI con­tin­ues to lower the bar­rier to build­ing soft­ware, more builders with bold ideas but lim­ited se­cu­rity ex­pe­ri­ence will ship ap­pli­ca­tions that han­dle real users and real data. That’s a pow­er­ful shift. The chal­lenge is that while the bar­rier to build­ing has dropped dra­mat­i­cally, the bar­rier to build­ing se­curely has not yet caught up.

The op­por­tu­nity is not to slow down vibe cod­ing but to el­e­vate it. Security needs to be­come a first class, built-in part of AI pow­ered de­vel­op­ment. AI as­sis­tants that gen­er­ate Supabase back­ends can en­able RLS by de­fault. Deployment plat­forms can proac­tively scan for ex­posed cre­den­tials and un­safe con­fig­u­ra­tions. In the same way AI now au­to­mates code gen­er­a­tion, it can also au­to­mate se­cure de­faults and guardrails.

If we get this right, vibe cod­ing does not just make soft­ware eas­ier to build … it makes se­cure soft­ware the nat­ural out­come and un­locks the full po­ten­tial of AI-driven in­no­va­tion.

Note: Security researcher Jameson O'Reilly also discovered the underlying Supabase misconfiguration, which has been reported by 404 Media. Wiz's post shares our experience independently finding the issue, the full - unreported - scope of impact, and how we worked with Moltbook's maintainer to improve security.

...

Read the original on www.wiz.io »

8 283 shares, 17 trendiness

4x faster network file sync with rclone (vs rsync)

For the past couple years, I have transported my ‘working set’ of video and project data to and from work on an external Thunderbolt NVMe SSD.

But it’s al­ways been slow when I do the sync. In a typ­i­cal day, I may gen­er­ate a new pro­ject folder with 500-1000 in­di­vid­ual files, and dozens of them may be 1-10 GB in size.

The Thunderbolt drive I had was ca­pa­ble of well over 5 GB/sec, and my 10 Gbps net­work con­nec­tion is ca­pa­ble of 1 GB/sec. I even up­graded my Thunderbolt drive to Thunderbolt 5 lately… though that was not the bot­tle­neck.

I used the following rsync command to copy files from a network share mounted on my Mac to the drive (which I call “Shuttle”):

mer­cury is so named be­cause it’s a fast NVMe-backed NAS vol­ume on my Arm NAS (all my net­work vol­umes are named af­ter ce­les­tial bod­ies).

As a test, I deleted one of the dozen or so active projects off my ‘Shuttle’ drive, and ran my rsync copy:

The full copy took over 8 min­utes, for a to­tal of about 59 GiB of files copied. There are two prob­lems:

* rsync performs copies single-threaded, serially, meaning only one file is copied at a time
* Even for very large files, rsync seems to max out on this network share around 350 MB/sec

I had been play­ing with dif­fer­ent com­pres­sion al­go­rithms, try­ing to tar then pipe that to rsync, even ex­per­i­ment­ing with run­ning the rsync dae­mon in­stead of SSH… but never could I get a sig­nif­i­cant speedup! In fact, some com­pres­sion modes would ac­tu­ally slow things down as my en­ergy-ef­fi­cient NAS is run­ning on some slower Arm cores, and they bog things down a bit sin­gle-threaded…

I've been using rclone as part of my 3-2-1 backup plan for years. It's amazing at copying, moving, and syncing files from and to almost any place (including Cloud storage, local storage, NAS volumes, etc.), but I had somehow pigeonholed it as “for cloud to local or vice-versa”, and never considered it for local transfer, like over my own LAN.

But it has an option that allows transfers in parallel, --multi-thread-streams, which Stack Overflow user dantebarba suggested someone use in the same scenario.

So I gave it a try.

After fid­dling a bit with the ex­act pa­ra­me­ters to match rsync’s -a, and han­dling the weird sym­links like .fcpcache di­rec­to­ries Final Cut Pro spits out in­side pro­ject files, I came up with:

Using this method, I could see my Mac’s net­work con­nec­tion quickly max out around 1 GB/sec, com­plet­ing the same di­rec­tory copy in 2 min­utes:

I’m not 100% sure why rclone says 59 GB were copied, ver­sus rsync’s 63 GB. Probably the ex­clu­sion of the .fcpcache di­rec­tory? lol units… GiB vs GB ;)

But the con­clu­sion—es­pe­cially af­ter see­ing my 10 Gbps con­nec­tion fi­nally be­ing fully uti­lized—is that rclone is about 4x faster work­ing in par­al­lel.

I also ran com­par­isons just chang­ing out a cou­ple files, and rclone and rsync were al­most iden­ti­cal, as the full scan of the di­rec­tory tree for meta­data changes takes about the same time on both (about 18 sec­onds). It’s just the par­al­lel file trans­fers that help rclone pull ahead.
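
To make the serial-versus-parallel difference concrete, here is a minimal Python sketch of the idea; it is not how rclone is implemented and not a replacement for my actual commands, and the source and destination paths are placeholders. Copying several files at once is what lets the 10 Gbps link actually fill up.

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Placeholder paths; in my setup the source is a NAS share and the
# destination is the external 'Shuttle' SSD.
SRC = Path("/Volumes/mercury/projects/example")
DST = Path("/Volumes/Shuttle/projects/example")

def copy_one(src_file: Path) -> None:
    target = DST / src_file.relative_to(SRC)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, target)  # copy2 preserves timestamps and metadata

files = [p for p in SRC.rglob("*") if p.is_file()]
with ThreadPoolExecutor(max_workers=8) as pool:  # ~8 files in flight at once
    list(pool.map(copy_one, files))
```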

...

Read the original on www.jeffgeerling.com »

9 231 shares, 13 trendiness

Understanding LLM Inference Engines: Inside Nano-vLLM (Part 1)

When deploying large language models in production, the inference engine becomes a critical piece of infrastructure. Every LLM API you use — OpenAI, Claude, DeepSeek — sits on top of an inference engine. While most developers interact with LLMs through high-level APIs, understanding what happens beneath the surface—how prompts are processed, how requests are batched, and how GPU resources are managed—can significantly inform system design decisions.

This two-part se­ries ex­plores these in­ter­nals through Nano-vLLM, a min­i­mal (~1,200 lines of Python) yet pro­duc­tion-grade im­ple­men­ta­tion that dis­tills the core ideas be­hind vLLM, one of the most widely adopted open-source in­fer­ence en­gines.

Nano-vLLM was cre­ated by a con­trib­u­tor to DeepSeek, whose name ap­pears on the tech­ni­cal re­ports of mod­els like DeepSeek-V3 and R1. Despite its min­i­mal code­base, it im­ple­ments the es­sen­tial fea­tures that make vLLM pro­duc­tion-ready: pre­fix caching, ten­sor par­al­lelism, CUDA graph com­pi­la­tion, and torch com­pi­la­tion op­ti­miza­tions. Benchmarks show it achiev­ing through­put com­pa­ra­ble to—or even slightly ex­ceed­ing—the full vLLM im­ple­men­ta­tion. This makes it an ideal lens for un­der­stand­ing in­fer­ence en­gine de­sign with­out get­ting lost in the com­plex­ity of sup­port­ing dozens of model ar­chi­tec­tures and hard­ware back­ends.

In Part 1, we fo­cus on the en­gi­neer­ing ar­chi­tec­ture: how the sys­tem is or­ga­nized, how re­quests flow through the pipeline, and how sched­ul­ing de­ci­sions are made. We will treat the ac­tual model com­pu­ta­tion as a black box for now—Part 2 will open that box to ex­plore at­ten­tion mech­a­nisms, KV cache in­ter­nals, and ten­sor par­al­lelism at the com­pu­ta­tion level.

The en­try point to Nano-vLLM is straight­for­ward: an LLM class with a gen­er­ate method. You pass in an ar­ray of prompts and sam­pling pa­ra­me­ters, and get back the gen­er­ated text. But be­hind this sim­ple in­ter­face lies a care­fully de­signed pipeline that trans­forms text into to­kens, sched­ules com­pu­ta­tion ef­fi­ciently, and man­ages GPU re­sources.
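As a rough usage sketch (the import path, model name, and parameter names here are assumptions that mirror vLLM's API and may not match Nano-vLLM exactly):

from nanovllm import LLM, SamplingParams   # import path assumed from the project name

llm = LLM("Qwen/Qwen3-0.6B", tensor_parallel_size=1)        # model path is illustrative
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain KV caching in one sentence."], params)
print(outputs[0]["text"])                                    # generated text for the first prompt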

When gen­er­ate is called, each prompt string goes through a to­k­enizer—a model-spe­cific com­po­nent that splits nat­ural lan­guage into to­kens, the fun­da­men­tal units that LLMs process. Different model fam­i­lies (Qwen, LLaMA, DeepSeek) use dif­fer­ent to­k­eniz­ers, which is why a prompt of the same length may pro­duce dif­fer­ent to­ken counts across mod­els. The to­k­enizer con­verts each prompt into a se­quence: an in­ter­nal data struc­ture rep­re­sent­ing a vari­able-length ar­ray of to­ken IDs. This se­quence be­comes the core unit of work flow­ing through the rest of the sys­tem.
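For example, with a Hugging Face tokenizer (the model name is illustrative; an engine simply loads whichever tokenizer ships with the model it serves):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")        # illustrative model name
token_ids = tok.encode("Explain KV caching in one sentence.")
print(len(token_ids), token_ids[:8])                          # a variable-length list of token IDs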

Here’s where the ar­chi­tec­ture gets in­ter­est­ing. Rather than pro­cess­ing each se­quence im­me­di­ately, the sys­tem adopts a pro­ducer-con­sumer pat­tern with the Scheduler at its cen­ter. The ad­d_re­quest method acts as the pro­ducer: it con­verts prompts to se­quences and places them into the Scheduler’s queue. Meanwhile, a sep­a­rate step loop acts as the con­sumer, pulling batches of se­quences from the Scheduler for pro­cess­ing. This de­cou­pling is key—it al­lows the sys­tem to ac­cu­mu­late mul­ti­ple se­quences and process them to­gether, which is where the per­for­mance gains come from.
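A simplified sketch of that pattern, with method names paraphrased from the description above rather than copied from the code:

def generate(engine, prompts, sampling_params):
    # Producer: convert each prompt into a sequence and hand it to the Scheduler.
    for prompt in prompts:
        engine.add_request(prompt, sampling_params)
    # Consumer: keep pulling batches until every sequence has finished.
    outputs = []
    while not engine.is_finished():
        outputs.extend(engine.step())      # one batched pass through the model
    return outputs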

Why does batch­ing mat­ter? GPU com­pu­ta­tion has sig­nif­i­cant fixed over­head—ini­tial­iz­ing CUDA ker­nels, trans­fer­ring data be­tween CPU and GPU mem­ory, and syn­chro­niz­ing re­sults. If you process one se­quence at a time, you pay this over­head for every sin­gle re­quest. By batch­ing mul­ti­ple se­quences to­gether, you amor­tize this over­head across many re­quests, dra­mat­i­cally im­prov­ing over­all through­put.

However, batch­ing comes with a trade-off. When three prompts are batched to­gether, each must wait for the oth­ers to com­plete be­fore any re­sults are re­turned. The to­tal time for the batch is de­ter­mined by the slow­est se­quence. This means: larger batches yield higher through­put but po­ten­tially higher la­tency for in­di­vid­ual re­quests; smaller batches yield lower la­tency but re­duced through­put. This is a fun­da­men­tal ten­sion in in­fer­ence en­gine de­sign, and the batch size pa­ra­me­ters you con­fig­ure di­rectly con­trol this trade-off.

Before div­ing into the Scheduler, we need to un­der­stand a cru­cial dis­tinc­tion. LLM in­fer­ence hap­pens in two phases:

* Prefill: Processing the in­put prompt. All in­put to­kens are processed to­gether to build up the mod­el’s in­ter­nal state. During this phase, the user sees noth­ing.

* Decode: Generating out­put to­kens. The model pro­duces one to­ken at a time, each de­pend­ing on all pre­vi­ous to­kens. This is when you see text stream­ing out.

For a sin­gle se­quence, there is ex­actly one pre­fill phase fol­lowed by many de­code steps. The Scheduler needs to dis­tin­guish be­tween these phases be­cause they have very dif­fer­ent com­pu­ta­tional char­ac­ter­is­tics—pre­fill processes many to­kens at once, while de­code processes just one to­ken per step.
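Conceptually, generation for a single sequence looks like the sketch below; this only illustrates the two phases and is not Nano-vLLM's actual interface (model.prefill, model.decode, and sample are placeholders):

def run_sequence(model, prompt_ids, eos_id, max_new_tokens):
    # Prefill: process every prompt token in one pass and build the KV cache.
    kv_cache, logits = model.prefill(prompt_ids)
    generated = []
    # Decode: one token per step, each step reusing and extending the KV cache.
    while len(generated) < max_new_tokens:
        next_id = sample(logits)
        generated.append(next_id)
        if next_id == eos_id:
            break
        kv_cache, logits = model.decode(next_id, kv_cache)
    return generated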

The Scheduler is re­spon­si­ble for de­cid­ing which se­quences to process and in what or­der. It main­tains two queues:

* Waiting Queue: Sequences that have been sub­mit­ted but not yet started. New se­quences from ad­d_re­quest al­ways en­ter here first.

* Running Queue: Sequences that are ac­tively be­ing processed—ei­ther in pre­fill or de­code phase.

When a se­quence en­ters the Waiting queue, the Scheduler checks with an­other com­po­nent called the Block Manager to al­lo­cate re­sources for it. Once al­lo­cated, the se­quence moves to the Running queue. The Scheduler then se­lects se­quences from the Running queue for the next com­pu­ta­tion step, group­ing them into a batch along with an ac­tion in­di­ca­tor (prefill or de­code).

What hap­pens when GPU mem­ory fills up? The KV cache (which stores in­ter­me­di­ate com­pu­ta­tion re­sults) has lim­ited ca­pac­ity. If a se­quence in the Running queue can­not con­tinue be­cause there’s no room to store its next to­ken’s cache, the Scheduler pre­empts it—mov­ing it back to the front of the Waiting queue. This en­sures the se­quence will re­sume as soon as re­sources free up, while al­low­ing other se­quences to make progress.

When a se­quence com­pletes (reaches an end-of-se­quence to­ken or max­i­mum length), the Scheduler re­moves it from the Running queue and deal­lo­cates its re­sources, free­ing space for wait­ing se­quences.
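Putting that together, a simplified scheduling step might look like the following (this paraphrases the behavior described above; the real implementation differs in its details):

def schedule(self):
    # self.waiting is a deque, self.running a list.
    # Prefill first: admit waiting sequences while the Block Manager has room.
    batch = []
    while self.waiting and self.block_manager.can_allocate(self.waiting[0]):
        seq = self.waiting.popleft()
        self.block_manager.allocate(seq)
        self.running.append(seq)
        batch.append(seq)
    if batch:
        return batch, "prefill"

    # Otherwise decode: every running sequence needs room for one more token.
    for seq in list(self.running):
        if not self.block_manager.can_append(seq):
            # Preempt: free its blocks and return it to the front of the waiting queue.
            self.running.remove(seq)
            self.block_manager.deallocate(seq)
            self.waiting.appendleft(seq)
    return list(self.running), "decode"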

The Block Manager is where vLLM’s mem­ory man­age­ment in­no­va­tion lives. To un­der­stand it, we first need to in­tro­duce a new re­source unit: the block.

A se­quence is a vari­able-length ar­ray of to­kens—it can be 10 to­kens or 10,000. But vari­able-length al­lo­ca­tions are in­ef­fi­cient for GPU mem­ory man­age­ment. The Block Manager solves this by di­vid­ing se­quences into fixed-size blocks (default: 256 to­kens each).

A 700-token se­quence would oc­cupy three blocks: two full blocks (256 to­kens each) and one par­tial block (188 to­kens, with 68 slots un­used). Importantly, to­kens from dif­fer­ent se­quences never share a block—but a long se­quence will span mul­ti­ple blocks.

Here’s where it gets clever. Each block’s con­tent is hashed, and the Block Manager main­tains a hash-to-block-id map­ping. When a new se­quence ar­rives, the sys­tem com­putes hashes for its blocks and checks if any al­ready ex­ist in the cache.

If a block with the same hash ex­ists, the sys­tem reuses it by in­cre­ment­ing a ref­er­ence count—no re­dun­dant com­pu­ta­tion or stor­age needed. This is par­tic­u­larly pow­er­ful for sce­nar­ios where many re­quests share com­mon pre­fixes (like sys­tem prompts in chat ap­pli­ca­tions). The pre­fix only needs to be com­puted once; sub­se­quent re­quests can reuse the cached re­sults.
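A stripped-down sketch of that allocation path (the hash function and structure names are illustrative; real implementations typically only cache full blocks):

import hashlib

BLOCK_SIZE = 256  # default tokens per block, as noted above

def block_hash(token_ids, prefix_hash=b""):
    # Chain in the previous block's hash so only true prefix matches collide.
    return hashlib.sha256(prefix_hash + repr(list(token_ids)).encode()).digest()

def allocate(manager, seq):
    prefix = b""
    for i in range(0, len(seq.token_ids), BLOCK_SIZE):
        prefix = block_hash(seq.token_ids[i:i + BLOCK_SIZE], prefix)
        block = manager.hash_to_block.get(prefix)
        if block is not None:
            block.ref_count += 1                    # cache hit: reuse the existing block
        else:
            block = manager.free_blocks.pop()       # cache miss: claim a free block
            block.ref_count = 1
            manager.hash_to_block[prefix] = block
        seq.block_table.append(block.block_id)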

A sub­tle but im­por­tant point: the Block Manager lives in CPU mem­ory and only tracks meta­data—which blocks are al­lo­cated, their ref­er­ence counts, and hash map­pings. The ac­tual KV cache data lives on the GPU. The Block Manager is the con­trol plane; the GPU mem­ory is the data plane. This sep­a­ra­tion al­lows fast al­lo­ca­tion de­ci­sions with­out touch­ing GPU mem­ory un­til ac­tual com­pu­ta­tion hap­pens.

When blocks are deal­lo­cated, the Block Manager marks them as free im­me­di­ately, but the GPU mem­ory is­n’t ze­roed—it’s sim­ply over­writ­ten when the block is reused. This avoids un­nec­es­sary mem­ory op­er­a­tions.

The Model Runner is re­spon­si­ble for ac­tu­ally ex­e­cut­ing the model on GPU(s). When the step loop re­trieves a batch of se­quences from the Scheduler, it passes them to the Model Runner along with the ac­tion (prefill or de­code).

When a model is too large for a sin­gle GPU, Nano-vLLM sup­ports ten­sor par­al­lelism (TP)—splitting the model across mul­ti­ple GPUs. With TP=8, for ex­am­ple, eight GPUs work to­gether to run a sin­gle model.

* Rank 0 (Leader): Receives com­mands from the step loop, ex­e­cutes its por­tion, and co­or­di­nates with work­ers.

* Ranks 1 to N-1 (Workers): Continuously poll a shared mem­ory buffer for com­mands from the leader.

When the leader re­ceives a run com­mand, it writes the method name and ar­gu­ments to shared mem­ory. Workers de­tect this, read the pa­ra­me­ters, and ex­e­cute the same op­er­a­tion on their re­spec­tive GPUs. Each worker knows its rank, so it can com­pute its des­ig­nated por­tion of the work. This shared-mem­ory ap­proach is ef­fi­cient for sin­gle-ma­chine multi-GPU se­tups, avoid­ing net­work over­head.
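A minimal sketch of that command channel (this uses an Event to signal instead of pure polling, and the structure is illustrative rather than the actual code):

import pickle
from multiprocessing.shared_memory import SharedMemory

def broadcast(shm: SharedMemory, events, method: str, *args):
    """Leader (rank 0): write the command into shared memory and signal the workers."""
    payload = pickle.dumps((method, args))
    shm.buf[0:4] = len(payload).to_bytes(4, "little")
    shm.buf[4:4 + len(payload)] = payload
    for event in events:
        event.set()

def worker_loop(shm: SharedMemory, event, model_runner):
    """Worker (ranks 1..N-1): wait for a command, then run the same method locally."""
    while True:
        event.wait()
        event.clear()
        size = int.from_bytes(shm.buf[0:4], "little")
        method, args = pickle.loads(bytes(shm.buf[4:4 + size]))
        getattr(model_runner, method)(*args)        # e.g. "run" with the batch metadata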

Before in­vok­ing the model, the Model Runner pre­pares the in­put based on the ac­tion:

* Prepare Prefill: Packs all prompt tokens from the batched sequences together, along with their positions and the slot mappings that tell the attention kernel where each token's KV cache entry lives.

* Prepare Decode: Batches single tokens (one per sequence) with their positions and slot mappings for KV cache access.

This prepa­ra­tion also in­volves con­vert­ing CPU-side to­ken data into GPU ten­sors—the point where data crosses from CPU mem­ory to GPU mem­ory.

For de­code steps (which process just one to­ken per se­quence), ker­nel launch over­head can be­come sig­nif­i­cant rel­a­tive to ac­tual com­pu­ta­tion. CUDA Graphs ad­dress this by record­ing a se­quence of GPU op­er­a­tions once, then re­play­ing them with dif­fer­ent in­puts. Nano-vLLM pre-cap­tures CUDA graphs for com­mon batch sizes (1, 2, 4, 8, 16, up to 512), al­low­ing de­code steps to ex­e­cute with min­i­mal launch over­head.
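Here is a minimal PyTorch sketch of the capture-and-replay idea, with a toy module standing in for the model (production code also warms up the captured region on a side stream first):

import torch

model = torch.nn.Embedding(1000, 64).cuda()                  # toy stand-in for one decode step
static_in = torch.zeros(8, dtype=torch.long, device="cuda")  # fixed batch size of 8

graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):            # record the kernels for one decode step
    static_out = model(static_in)

# Later steps with the same batch size: copy new tokens in place and replay.
static_in.copy_(torch.randint(0, 1000, (8,), device="cuda"))
graph.replay()                           # reruns the recorded kernels with minimal launch overhead
print(static_out.shape)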

The model doesn't output a single token—it outputs logits: a score for every token in the vocabulary, which a softmax turns into a probability distribution. The final step is sampling: selecting one token from this distribution.

The tem­per­a­ture pa­ra­me­ter con­trols this se­lec­tion. Mathematically, it ad­justs the shape of the prob­a­bil­ity dis­tri­b­u­tion:

* Low tem­per­a­ture (approaching 0): The dis­tri­b­u­tion be­comes sharply peaked. The high­est-prob­a­bil­ity to­ken is al­most al­ways se­lected, mak­ing out­puts more de­ter­min­is­tic and fo­cused.

* High tem­per­a­ture: The dis­tri­b­u­tion flat­tens. Lower-probability to­kens have a bet­ter chance of be­ing se­lected, mak­ing out­puts more di­verse and cre­ative.

This is where the randomness” in LLM out­puts comes from—and why the same prompt can pro­duce dif­fer­ent re­sponses. The sam­pling step se­lects from a valid range of can­di­dates, in­tro­duc­ing con­trolled vari­abil­ity.
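A minimal sampler makes the effect concrete; this is standard temperature sampling, not necessarily Nano-vLLM's exact implementation:

import torch

def sample(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    if temperature < 1e-5:
        return logits.argmax(dim=-1)                          # effectively greedy
    probs = torch.softmax(logits / temperature, dim=-1)       # low T sharpens, high T flattens
    return torch.multinomial(probs, num_samples=1).squeeze(-1)

logits = torch.randn(2, 32000)            # [batch, vocab_size], illustrative sizes
print(sample(logits, 0.2))                # almost always the highest-probability tokens
print(sample(logits, 1.5))                # noticeably more varied picks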

In Part 2, we'll open the black box of the model itself. We'll explore:

* How the model trans­forms to­kens into hid­den states and back

* The at­ten­tion mech­a­nism and why multi-head at­ten­tion mat­ters

* How KV cache is phys­i­cally laid out on GPU mem­ory

* How ten­sor par­al­lelism works at the com­pu­ta­tion level

Understanding these in­ter­nals will com­plete the pic­ture—from prompt string to gen­er­ated text, with noth­ing left hid­den.

...

Read the original on neutree.ai »

10 199 shares, 18 trendiness

Devlog

Over the past month or so, sev­eral en­ter­pris­ing con­trib­u­tors have taken an in­ter­est in the zig libc sub­pro­ject. The idea here is to in­cre­men­tally delete re­dun­dant code, by pro­vid­ing libc func­tions as Zig stan­dard li­brary wrap­pers rather than as ven­dored C source files. In many cases, these func­tions are one-to-one map­pings, such as mem­cpy or atan2, or triv­ially wrap a generic func­tion, like strnlen:

fn strnlen(str: [*:0]const c_char, max: usize) callconv(.c) usize {
    return std.mem.findScalar(u8, @ptrCast(str[0..max]), 0) orelse max;
}

So far, roughly 250 C source files have been deleted from the Zig repos­i­tory, with 2032 re­main­ing.

With each function that makes the transition, Zig gains independence from third-party projects and from the C programming language, compilation speed improves, Zig's installation is simplified and shrinks in size, and user applications which statically link libc enjoy reduced binary size.

Additionally, a re­cent en­hance­ment now makes zig libc share the Zig Compilation Unit with other Zig code rather than be­ing a sep­a­rate sta­tic archive, linked to­gether later. This is one of the ad­van­tages of Zig hav­ing an in­te­grated com­piler and linker. When the ex­ported libc func­tions share the ZCU, re­dun­dant code is elim­i­nated be­cause func­tions can be op­ti­mized to­gether. It’s kind of like en­abling LTO (Link-Time Optimization) across the libc bound­ary, ex­cept it’s done prop­erly in the fron­tend in­stead of too late, in the linker.

Furthermore, when this work is combined with the recent std.Io changes, there is potential for users to seamlessly control how libc performs I/O - for example, forcing all calls to read and write to participate in an io_uring event loop, even though that code was not written with such a use case in mind. Or, resource leak detection could be enabled for third-party C code. For now this is only a vaporware idea which has not been experimented with, but the idea intrigues me.

Big thanks to Szabolcs Nagy for libc-test. This pro­ject has been a huge help in mak­ing sure that we don’t regress any math func­tions.

As a reminder to our users, now that Zig is transitioning to being the static libc provider, if you encounter issues with the musl, mingw-w64, or wasi-libc functionality provided by Zig, please file bug reports against Zig first, so we don't annoy upstream maintainers with bugs that are in Zig rather than in the independent libc implementation projects whose code is no longer vendored.

...

Read the original on ziglang.org »
