10 interesting stories served every morning and every evening.
Note: this page tends to be neglected and is only updated occasionally. The links to the left are where the useful bits are hiding.
For the past 30+ years I’ve been the maintainer of
sudo. I’m currently in search of a sponsor to fund continued sudo maintenance and development. If you or your organization is interested in sponsoring sudo, please let me know.
I also work on OpenBSD, though I’m not as active there as I once was.
In the past, I’ve made large contributions to
ISC cron, among other projects.
...
Read the original on www.millert.dev »
Developers have been comparing the strengths and weaknesses of Anthropic’s Claude Code, Anysphere’s Cursor, and Microsoft’s GitHub Copilot for months now, looking for a winner. While no individual AI coding tool manages to be the best at every task that software developers do each day, Claude Code is increasingly coming out on top for its ease of use, both for developers and nontechnical users.
It seems like Microsoft agrees, as sources tell me the company is now encouraging thousands of its employees from some of its most prolific teams to pick up Claude Code and get coding, even if they’re not developers.
Microsoft first started adopting Anthropic’s Claude Sonnet 4 model inside its developer division in June last year, before favoring it for paid users of GitHub Copilot several months later. Now, Microsoft is going a step beyond using Anthropic’s AI models and widely adopting Claude Code across its biggest engineering teams.
Microsoft’s CoreAI team, the new AI engineering group led by former Meta engineering chief Jay Parikh, has been testing Claude Code in recent months, and last week employees in Microsoft’s Experiences + Devices division were asked to install Claude Code. This division is responsible for Windows, Microsoft 365, Outlook, Microsoft Teams, Surface, and more.
Even employees without any coding experience are being encouraged to experiment with Claude Code, to allow designers and project managers to prototype ideas. Microsoft has also approved the use of Claude Code across all of its code and repositories for its Business and Industry Copilot teams.
Software engineers at Microsoft are now expected to use both Claude Code and GitHub Copilot and give feedback comparing the two, I’m told. Microsoft sells GitHub Copilot as its AI coding tool of choice to its customers, but if these broad internal pilot programs are successful, then it’s possible the company could even eventually sell Claude Code directly to its cloud customers.
Microsoft is now one of Anthropic’s top customers, according to a recent report from The Information. The software maker is also counting sales of Anthropic AI models toward Azure sales quotas, which is unusual given that Microsoft typically only offers its salespeople incentives for homegrown products or models from OpenAI.
Microsoft’s decision to adopt Claude Code more broadly among its engineering teams certainly looks like a vote of confidence in Anthropic’s AI tools over its own, especially as it’s encouraging nontechnical employees to try out coding. But the reality is that Microsoft’s developers are likely to use a mix of AI tools, and adopting Claude Code is another part of that tool set.
“Companies regularly test and trial competing products to gain a better understanding of the market landscape,” says Frank Shaw, Microsoft’s communications chief, in a statement to Notepad. “OpenAI continues to be our primary partner and model provider on frontier models, and we remain committed to our long-term partnership.”
While Microsoft remains committed to OpenAI, it is increasingly working with Anthropic to bring its models and tools to Microsoft’s own teams and the software it sells to customers. Microsoft and Anthropic signed a deal in November that allows Microsoft Foundry customers to get access to Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. The deal also involves Anthropic committing to purchasing $30 billion of Azure compute capacity.
The big question here is, what does the increased use of Claude Code at Microsoft mean for its more than 100,000 code repositories? Microsoft told me last year that 91 percent of its engineering teams use GitHub Copilot and a variety of teams have been using the AI tool to speed up mundane tasks. Microsoft’s use of AI tools has been largely restricted to software engineers, but with Claude Code and Claude Cowork, Anthropic is increasingly focused on making coding and non-coding tasks more approachable, thanks to AI agent capabilities.
Microsoft is embracing the ease of use of Claude Code to allow more nontechnical employees to commit code using AI, and this broad pilot will certainly highlight the challenges and benefits of that shift. It also puts further pressure on junior developer roles, with fears in the industry that these roles are increasingly disappearing because of AI. Microsoft just took another big step toward a future where more autonomous AI agents are creating code, further wresting control from its software engineers.
Microsoft is getting ready to show off two of its biggest Xbox games this year, Forza Horizon 6 and Fable, later today as part of its Xbox Developer Direct stream. There will also be a first in-depth look at Beast of Reincarnation and at least one other game shown, I’m hearing. Double Fine is ready to show off Kiln, a multiplayer, team-based brawler. I understand Double Fine has been holding playtests recently, where you play as a spirit that can inhabit pottery and carry water to douse an opponent’s kiln and put out a fire.
I wouldn’t be surprised to see Kiln appear as an early preview in the coming months, followed by Forza Horizon 6 in May and then Halo: Campaign Evolved. I keep hearing that both Fable and Gears of War: E-Day are currently targeting a release in the second half of this year. Microsoft is keen to release new Forza, Gears, Halo, and Fable games in 2026 to mark 25 years of Xbox.
...
Read the original on www.theverge.com »
Termux is an Android terminal application and Linux environment.
Note that this repository is for the app itself (the user interface and the terminal emulation). For the packages installable inside the app, see termux/termux-packages.
Quick how-to about Termux package management is available at Package Management. It also has info on how to fix repository is under maintenance or down errors when running apt or pkg commands.
We are looking for Termux Android application maintainers.
NOTICE: Termux may be unstable on Android 12+. The Android OS will kill any (phantom) processes beyond a count of 32 (the limit applies to all apps combined) and will also kill any processes using excessive CPU. You may get a [Process completed (signal 9) - press Enter] message in the terminal without actually having exited the shell process yourself. Check the related issue #2366, the issue tracker, the phantom, cached and empty processes docs, and this TLDR comment on how to disable the trimming of phantom and excessive-CPU-usage processes. A proper docs page will be added later. An option to disable the killing should be available in Android 12L or 13, so upgrade at your own risk if you are on Android 11, especially if you are not rooted.
The core Termux app comes with the following optional plugin apps.
NOTICE: It is highly recommended that you update to v0.118.0 or higher ASAP for various bug fixes, including a critical world-readable vulnerability reported here. See below for information regarding Termux on Google Play.
Termux can be obtained from the various sources listed below. Full support for both the app and packages is only available on Android >= 7.
Support for both the app and packages was dropped for Android 5 and 6 on 2020-01-01 at v0.83; however, support was re-added on 2022-05-24 via the GitHub sources for the app only, without any support for package updates. Check here for the details.
The APK files from different sources are signed with different signature keys. The Termux app and all its plugins use the same sharedUserId com.termux, so all of their APKs installed on a device must have been signed with the same signature key to work together, which means they must all be installed from the same source. Do not attempt to mix them, i.e. do not try to install an app or plugin from F-Droid and another one from a different source like GitHub. The Android Package Manager will normally not allow installation of APKs with different signatures anyway, and you will get errors on installation like App not installed, Failed to install due to an unknown error, INSTALL_FAILED_UPDATE_INCOMPATIBLE, INSTALL_FAILED_SHARED_USER_INCOMPATIBLE, signatures do not match previously installed version, etc. This restriction can be bypassed with root or with custom ROMs.
If you wish to install from a different source, you must first uninstall any and all existing Termux or Termux plugin APKs from your device, then install all the new APKs from the same new source. Check the Uninstallation section for details. You may also want to consider Backing up Termux before the uninstallation so that you can restore it after re-installing Termux from the different source.
In the following paragraphs, “bootstrap” refers to the minimal packages that are shipped with the termux-app itself to start a working shell environment. Its zips are built and released here.
Termux application can be obtained from F-Droid from here.
You do not need to download the F-Droid app (via the Download F-Droid link) to install Termux. You can download the Termux APK directly from the site by clicking the Download APK link at the bottom of each version section.
It usually takes a few days (or even a week or more) for updates to be available on F-Droid once an update has been released on GitHub. The F-Droid releases are built and published by F-Droid once they detect a new GitHub release. The Termux maintainers do not have any control over the building and publishing of the Termux apps on F-Droid. Moreover, the Termux maintainers also do not have access to the APK signing keys of F-Droid releases, so we cannot release an APK ourselves on GitHub that would be compatible with F-Droid releases.
The F-Droid app may often not notify you of updates, and you will have to manually do a pull-down swipe action in the Updates tab of the app for it to check for updates. Make sure battery optimizations are disabled for the app; check https://dontkillmyapp.com/ for details on how to do that.
Only a universal APK is released, which will work on all supported architectures. The APK and bootstrap installation size will be ~180MB. F-Droid does not support architecture specific APKs.
The Termux application can be obtained on GitHub either from GitHub Releases for version >= 0.118.0 or from GitHub Build Action workflows. For Android >= 7, only install apt-android-7 variants. For Android 5 and 6, only install apt-android-5 variants.
The APKs for GitHub Releases will be listed under Assets drop-down of a release. These are automatically attached when a new version is released.
The APKs for GitHub Build action workflows will be listed under Artifacts section of a workflow run. These are created for each commit/push done to the repository and can be used by users who don’t want to wait for releases and want to try out the latest features immediately or want to test their pull requests. Note that for action workflows, you need to be logged into a GitHub account for the Artifacts links to be enabled/clickable. If you are using the GitHub app, then make sure to open workflow link in a browser like Chrome or Firefox that has your GitHub account logged in since the in-app browser may not be logged in.
The APKs for both of these are debuggable and are compatible with each other but they are not compatible with other sources.
Both universal and architecture specific APKs are released. The APK and bootstrap installation size will be ~180MB if using universal and ~120MB if using architecture specific. Check here for details.
Security warning: APK files on GitHub are signed with a test key that has been shared with the community. This IS NOT an official developer key, and anyone can use it to generate releases for their own testing. Be very careful when using Termux GitHub builds obtained anywhere other than https://github.com/termux/termux-app. Anyone is able to use the key to forge a malicious Termux update installable over a GitHub build. Think twice about installing Termux builds distributed via Telegram or other social media. If your device gets caught by malware, we will not be able to help you.
The test key shall not be used to impersonate @termux and can’t be used for this anyway. This key is not trusted by us and it is quite easy to detect its use in user generated content.
There is currently a build of Termux available on Google Play for Android 11+ devices, with extensive adjustments in order to pass policy requirements there. This is under development and has missing functionality and bugs (see here for status updates) compared to the stable F-Droid build, which is why most users who can should still use F-Droid or GitHub build as mentioned above.
Currently, Google Play will try to update installations away from F-Droid ones. Updating will still fail, as the sharedUserId has been removed. A planned 0.118.1 F-Droid release will fix this by setting a higher version code than the one used for the Play Store app. Meanwhile, to prevent Google Play from attempting to download and then failing to install the Google Play releases over existing installations, you can open the Termux app pages on Google Play, click the 3-dot options button in the top right, and disable the Enable auto update toggle. However, the Termux app updates will still show in the Play Store app updates list.
If you want to help out with testing the Google Play build (or cannot install Termux from other sources), be aware that it’s built from a separate repository (https://github.com/termux-play-store/) - be sure to report issues there, as any issues encountered might very well be specific to that repository.
Uninstallation may be required if a user no longer wants Termux installed on their device or is switching to a different install source. You may also want to consider Backing up Termux before the uninstallation.
To uninstall Termux completely, you must uninstall any and all existing Termux or its plugin app APKs listed in Termux App and Plugins.
Go to Android Settings -> Applications and then look for those apps. You can also use the search feature if it’s available on your device and search termux in the applications list.
Even if you think you have not installed any of the plugins, it’s strongly suggested to go through the application list in Android settings and double-check.
All community links are available here.
The main ones are the following.
You can help debug problems of the Termux app and its plugins by setting an appropriate logcat Log Level in Termux app settings -> (the app's name) -> Debugging -> Log Level (requires Termux app version >= 0.118.0). The Log Level defaults to Normal, and the Verbose log level currently logs additional information. It’s best to revert the log level to Normal after you have finished debugging, since private data may otherwise be passed to logcat during normal operation, and additional logging increases execution time.
The plugin apps do not execute the commands themselves but send execution intents to Termux app, which has its own log level which can be set in Termux app settings -> Termux -> Debugging -> Log Level. So you must set log level for both Termux and the respective plugin app settings to get all the info.
Once log levels have been set, you can run the logcat command in the Termux app terminal to view the logs in real time (Ctrl+C to stop), or use logcat -d > logcat.txt to take a dump of the log. You can also view the logs from a PC over ADB. For more information, check the official Android logcat guide here.
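For reference, those commands look like this (run the first two inside the Termux terminal):

```sh
logcat                   # stream logs in real time (Ctrl+C to stop)
logcat -d > logcat.txt   # dump the current log buffer to a file

# Or from a PC over ADB:
adb logcat
```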
Moreover, users can also automatically generate Termux files stat info and a logcat dump with the terminal’s long-hold options menu More -> Report Issue option, selecting YES in the prompt shown to add debug info. This can be helpful for reporting and debugging other issues. If the generated report is too large, the Save To File option in the context menu (3 dots on the top right) of ReportActivity can be used and the file viewed/shared instead.
Users must post a complete report (optionally with sensitive info removed) when reporting issues. Issues opened with (partial) screenshots of error reports instead of text will likely be automatically closed/deleted.
The termux-shared library was added in v0.109. It defines the shared constants and utils of the Termux app and its plugins. It was created to allow the removal of all hardcoded paths in the Termux app. Some of the Termux plugins are using it as well, and the rest will in the future. If you are contributing code that uses a constant or a util that may be shared, define it in the termux-shared library if it doesn’t already exist, and reference it from there. Update the relevant changelogs as well. Pull requests using hardcoded values will/should not be accepted. Termux app and plugin specific classes must be added under the com.termux.shared.termux package and general classes outside it. The termux-shared LICENSE must also be checked and updated if necessary when contributing code. The licenses of any external library or code must be honoured.
The main Termux constants are defined by TermuxConstants class. It also contains information on how to fork Termux or build it with your own package name. Changing the package name will require building the bootstrap zip packages and other packages with the new $PREFIX, check Building Packages for more info.
Check Termux Libraries for how to import termux libraries in plugin apps and Forking and Local Development for how to update termux libraries for plugins.
The versionName in build.gradle files of Termux and its plugin apps must follow the semantic version 2.0.0 spec in the format major.minor.patch(-prerelease)(+buildmetadata). When bumping versionName in build.gradle files and when creating a tag for new releases on GitHub, make sure to include the patch number as well, like v0.1.0 instead of just v0.1. The build.gradle files and the attach_debug_apks_to_release workflow validate the version as well, and the build/attachment will fail if versionName does not follow the spec.
Commit messages must use the Conventional Commits spec so that changelogs following the Keep a Changelog spec can be automatically generated by the create-conventional-changelog script; check its repo for further details on the spec. The first letter of the type and description must be capitalized, and the description should be in the present tense. The space after the colon : is necessary. For a breaking change, add an exclamation mark ! before the colon :, so that it is highlighted in the changelog automatically.
Only the types listed below must be used exactly as they are used in the changelog headings. For example, Added: Add foo, Added|Fixed: Add foo and fix bar, Changed!: Change baz as a breaking change, etc. You can optionally add a scope as well, like Fixed(terminal): Fix some bug. Do not use anything else as type, like add instead of Added, etc.
* Changed for changes in existing functionality.
* Check TermuxConstants javadocs for instructions on what changes to make in the app to change package name.
* You also need to recompile bootstrap zip for the new package name. Check building bootstrap, here and here.
* Currently, not all plugins use TermuxConstants from termux-shared library and have hardcoded com.termux values and will need to be manually patched.
* If forking termux plugins, check Forking and Local Development for info on how to use termux libraries for plugins.
...
Read the original on github.com »
Anki’s 19th birthday was about 4 months ago. It would have been a good time to pause and reflect on what Anki has become, and how it will grow in the future. But I ended up letting the moment come and go, as I didn’t feel like I had the free time. It’s a feeling that’s been regrettably common of late, and I’ve come to realise that something has to change.
For a number of years, I’ve reached out to some of the most prolific contributors and offered them payment in exchange for them contributing more code or support to Anki. That has been a big help, and I’m very grateful for their contributions. But there is a lot that I haven’t been able to delegate. With no previous management experience, I was a bit daunted by the thought of seeking out and managing employees. And with so much to get on with, it always got put in the “maybe later” basket.
As Anki slowly grew in popularity, so did its demands on my time. I was of course delighted to see it reaching more people, and to have played a part in its success. But I also felt a big sense of responsibility, and did not want to let people down. That led to unsustainably long hours and constant stress, which took a toll on my relationships and well-being.
The parts of the job that drew me to start working on Anki (the ‘deep work’, solving interesting technical problems without constant distractions) have mostly fallen by the wayside. I find myself reactively responding to the latest problem or post instead of proactively moving things forward, which is neither as enjoyable as it once was, nor the best thing for the project.
There have been many offers to invest in or buy Anki over the years, but I’ve always shut them down quickly, as I had no confidence that these investment-focused people would be good stewards, and not proceed down the typical path of enshittification that is unfortunately so common in VC and PE-backed ventures.
Some months ago, the AnkiHub folks reached out to me, wanting to discuss working more closely together in the future. Like others in the community, they were keen to see Anki’s development pace improve. We’ve had a symbiotic relationship for years, with their content creation and collaboration platform driving more users to Anki. They’ve managed to scale up much faster than I did, and have built out an impressive team.
During the course of those talks, I came to the realisation that AnkiHub is better positioned to take Anki to the next level than I am. I ended up suggesting to them that we look into gradually transitioning business operations and open source stewardship over, with provisions in place to ensure that Anki remains open source and true to the principles I’ve run it by all these years.
This is a step back for me rather than a goodbye - I will still be involved with the project, albeit at a more sustainable level. I’ve spent 19 years looking after my “baby”, and I want to see it do well as it grows up.
I’m confident this change will be a net positive for both users and developers. Removing me as a bottleneck will allow things to move faster, encourage a more collaborative approach, and free up time for improvements that have been hard to prioritise, like UI polish. It also means the ecosystem will no longer be in jeopardy if I’m one day hit by a bus.
It’s natural to feel apprehensive about change, but as the benefits become clearer over the coming months, I suspect many of you will come to wish this change had happened sooner.
Thank you to everyone who has contributed to making Anki better up until now. I’m excited for Anki’s future, and can’t wait to see what we can build together in this next stage.
We initially reached out to @dae to explore collaborating more closely on improving Anki. We were both humbled and shocked when he asked if we’d be willing to step into a much larger leadership role than we expected.
At this point, we’re mostly excited…and also feeling a healthy amount of terror. This is a big responsibility. It will push us to grow as individuals, as a team, and as a community, and we don’t take that lightly.
We’re grateful for the trust Damien and others have placed in us. And we also know that trust has to be earned, especially from people who don’t know us yet.
What We Believe
We believe Anki is almost sacred, something bigger than any one person or organization. In an important sense, it belongs to the community.
This article highlights the principles Damien built Anki on; principles we deeply share, such as respect for user agency, refusal of manipulative design patterns, and an emphasis on the craft of building genuinely useful tools that aren’t merely engaging. Anki has never tried to maximize “engagement” by exploiting psychological vulnerabilities purely for profit. Anki gives your time back to you, and that is an exceptional rarity in this world that we want to preserve.
As an organization built by students, for students, our mission is to continue embodying these principles. We are accountable only to you, our users, not external investors, and we plan to keep it that way.
What We Don’t Know Yet
We can’t answer every question right away, as there are many unknowns since much hasn’t been decided yet. But we are sharing everything we can now because the community is important to us. We encourage you all to share your thoughts and questions — we’re all in this together!
We’re still working through the details on things like:
Governance and decision-making: How decisions are made, who has final say, and how the community is heard
Roadmap and priorities: What gets built when and how to balance competing needs
The transition itself: How to bring in more support without disrupting what already works
Anki has shown how powerful community collaboration can be when it’s genuinely a group effort, and that’s a tradition we are honored to continue.
We’re currently talking to David Allison, a long-time core contributor to AnkiDroid, about working together on exactly these questions. His experience with AnkiDroid’s collaborative development is invaluable, and we’re grateful he’s willing to help us get this right. We’re incredibly excited to have him join us full-time to help propel Anki into the future.
UI/UX improvements. We’re bringing professional design expertise on board to make it more approachable without sacrificing Anki’s power. We believe that principled design will bring meaningful quality of life improvements to power users and novices alike.
Addressing the bus factor. The ecosystem shouldn’t be in jeopardy if any one person disappears. We want to build software that lives beyond any single contributor.
Supporting more than just med students. AnkiHub grew out of the medical education community, but Anki serves learners from all walks of life, and we want to support everyone to achieve their learning goals.
A more robust add-on ecosystem. We’d love to build tools that empower non-technical users to customize Anki for their needs, and we’re exploring add-ons that work everywhere, including mobile.
We want to provide transparency into the decision-making process, taking inspiration from proven models to:
Give the community clarity on how to be heard and give feedback
Make it clear how decisions are made and why
Define roles and responsibilities so things don’t fall through the cracks
We want to bring everyone in the global Anki community together into a closer collaboration focused on building the best learning tools possible. Today, these groups often work in silos; a more unified process will help everyone move Anki forward together.
Sustainability, affordability, and accessibility. We’re committed to a sustainable business model that keeps Anki accessible and prioritizes user needs above profits. If anything ever needs to change, we’ll be transparent about why.
No enshittification. We’ve seen what happens when VC-backed companies acquire beloved tools. That’s not what this is. There are no investors involved, and we’re not here to extract value from something the community built together. Building in the right safeguards and processes to handle pressure without stifling necessary improvements is something we’re actively considering.
We’re grateful to Damien et al. for their trust and support, and grateful to all of you for the passion that makes this community so special.
We welcome your questions, concerns, and feedback.
AnkiHub is a small education technology company founded by two long-time Anki nerds: Nick, a resident physician known as The AnKing, and Andrew Sanchez, a research software engineer. AnkiHub grew out of years of obsessive Anki use and firsthand experience with both its power and its limitations.
AnkiHub began as a way to collaborate on Anki decks (such as the AnKing Step Deck for medical students) and has since evolved into a broader effort to improve the Anki ecosystem by building tools that help more people benefit from Anki.
Absolutely. Anki’s core code will remain open source, guided by the same principles that have guided the project from the beginning.
Are there any changes planned to Anki’s pricing?
No. We are committed to fair pricing that supports users rather than exploiting them. Both Anki and AnkiHub are already profitable. Any future decisions will be made with community benefit, user value, and long-term project health in mind.
Is Anki in financial trouble?
No. The transition is driven by the goal of helping Anki reach its full potential, not by financial issues. Our goal is to build a resilient structure and accelerate development.
What is the timeline?
Our intention is to build confidence and earn trust while making gradual changes. The transition will be transparent, with clear communication throughout.
What happens to volunteer contributors and community developers?
Volunteer contributors will always be essential to Anki. Our goal is to make it easier to collaborate meaningfully.
Will the mobile apps change or be removed from the app stores?
The mobile apps will continue to be maintained and supported. Additional development capacity should help with faster updates, better testing, and more consistent improvements across platforms over time.
How much influence will investors or external partners have on Anki after the transition?
None. Both Anki and AnkiHub are entirely self-funded. There are no outside investors dictating product decisions, growth targets, or monetization strategy.
What will happen with AnkiHub?
AnkiHub will continue to operate as usual, but now our teams are working together to improve both solutions. The only change you should notice is that, over time, everything becomes much easier to use.
We’ll share more updates as they happen in the future.
What will happen with the current AnkiHub subscriptions?
AnkiHub subscriptions enhance Anki with collaborative features, shared deck syncing, and LLM-based features, and that isn’t changing at this time.
What will happen with AnkiDroid?
AnkiDroid will remain an open-source, self-governed project. There are no plans or agreements regarding AnkiDroid.
How will decisions be made and communicated?
Anki is open-source, and we will build on and improve its current decision-making processes. We will work in public whenever possible and seek consensus from core contributors. Significant decisions, choices, and their outcomes will be documented on GitHub or in the source code. When a change materially affects users or developers, the reasoning behind it and its impact will be communicated publicly. In the coming weeks, we will work on defining a more formal governance model to set clear expectations.
Will there be a public governance model, advisory board, or other accountability structure?
We’re exploring what makes sense here, and we don’t want to rush it.
Historically, Anki has relied more on trust and stewardship than on formal governance. We want to preserve that spirit while improving transparency. Our goal is to establish a governance structure that supports the community and improves clarity and accountability without burdensome bureaucracy.
How will the transition affect add-ons and their developers?
Add-ons are a critical part of the ecosystem.
Our intent is to make life easier for add-on developers: clearer APIs, better documentation, fewer breaking changes, and more predictable release cycles. The goal is not to lock down or restrict the add-on space, but rather to enhance it.
What new resources will Anki gain through this transition?
The biggest change is bandwidth: enabling more people to work on Anki without everything being bottlenecked through a single person. This will take time, but will eventually translate into more engineering, design, and support capacity.
What steps will be taken to make Anki more accessible, stable, and beginner-friendly?
There is a lot of low-hanging fruit that we plan to tackle: improving onboarding for new users, polishing rough edges, and addressing long-standing usability issues. These are exactly the kinds of improvements that have been difficult to tackle under constant time pressure, and we’re excited to invest in them.
Will community feedback still meaningfully influence the project’s direction?
Yes. Anki exists because of its community: users, contributors, add-on developers, translators, and educators. Feedback won’t always translate into immediate changes, but it will always be heard, considered, and respected.
How will trust be built with users who are skeptical or anxious about the change?
Trust isn’t something you demand; it’s something you earn over time. We intend to build trust through consistent actions: honoring commitments, avoiding surprises, communicating clearly, and demonstrating that Anki’s values haven’t changed. We hope our past actions will give you some peace of mind, but we also understand the skepticism, and we’re prepared to meet it with patience and transparency.
That led to unsustainably long hours and constant stress, which took a toll on my relationships and well-being.
If the only positive thing this change brings is to allow you to focus more on your personal life and well-being, I’d still be very optimistic about the future of Anki as a whole! It’d be very sad if Anki is improving the lives of millions of people while inflicting pain on its creator.
The parts of the job that drew me to start working on Anki (the ‘deep work’, solving interesting technical problems without constant distractions) have mostly fallen by the wayside. I find myself reactively responding to the latest problem or post instead of proactively moving things forward, which is neither as enjoyable as it once was, nor the best thing for the project.
I (partly) know this feeling. The other day I came across an 8-year-old Anki discussion that I shared with @andrewsanchez, and we were like “dae must have some type of superpower to be able to always stay calm and polite dealing with all kinds of people”.
Perhaps it is time for an increased AnkiHub presence in the Anki discord
Wow! That is big news. I was noticing that the release cycles were getting longer and longer, with the latest release being 25.09.2. When I first read the article I had a negative feeling and was reluctant, but if Anki remains open-source and no paid services are forced into the preferences etc., it may be a good decision in the end… Just my 2 cents.
Thank you, Dae, for maintaining Anki for 19 years.
You stood by your principles for a very long time, even when it was not easy. Because of that, Anki became something people trust.
Anki has affected millions of lives. Students, doctors, and many others use it every day. Not many people can say they built something like that.
The legacy you leave is huge. Good luck to anyone trying to live up to it.
Thank you for everything you did for this project and the community.
Anki has helped so many struggling students, including me.
Thank you very much @dae and good luck
Well, that’s quite the big piece of news!
Having a full team behind Anki will definitely be a huge opportunity for its future development!
Enjoy your time now that your baby lives on its own, @Dae!
Thanks for putting the effort into communicating it.
I’m a SW developer (/team lead) and started to use Anki extensively over the last few months, after a few years of “flirting with Anki”, and it’s quite amazing! I also see so much more potential for how it can be improved.
I see a lot of potential benefits in this change, but also some worrying aspects. Namely, the commercial conflict of interest between Anki and making profits. From my experience, as long as such conflicts exist within an organization or the people actually making decisions about a product, it’s a slippery slope, slowly leading to the wrong place… Simply having this dilemma in the back of your mind while making a product decision is already a lot.
For my part, I would appreciate communication regarding this worrying aspect and how it is being taken into consideration.
The ankihub team looks great, congrats on building such a team, and good luck with this ambitious next step!
I would be happy to be more involved in the future in the project.
Will this be like AnkiHub that slowly cannibalized the free stuff from /r/medicalschoolanki to move everything behind a paywall?
...
Read the original on forums.ankiweb.net »
The Trump administration is no fan of renewable energy, but it reserves special ire for wind power. Trump himself has repeatedly made false statements about the cost of wind power, its use around the world, and its environmental impacts. That animosity was paired with an executive order that blocked all permitting for offshore wind and some land-based projects, an order that has since been thrown out by a court that ruled it arbitrary and capricious.
Not content to block all future developments, the administration has also gone after the five offshore wind projects currently under construction. After temporarily blocking two of them for reasons that were never fully elaborated, the Department of the Interior settled on a single justification for blocking turbine installation: a classified national security risk.
The response to that late-December announcement has been uniform: The companies building each of the projects sued the administration. As of Monday, every single one of them has achieved the same result: a temporary injunction that allows them to continue construction. This, despite the fact that the suits were filed in three different courts and heard by four different judges.
...
Read the original on arstechnica.com »
Moltbook, the weirdly futuristic social network, has quickly gone viral as a forum where AI agents post and chat. But what we discovered tells a different story - and provides a fascinating look into what happens when applications are vibe-coded into existence without proper security controls.
We identified a misconfigured Supabase database belonging to Moltbook, allowing full read and write access to all platform data. The exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents. We immediately disclosed the issue to the Moltbook team, who secured it within hours with our assistance, and all data accessed during the research and fix verification has been deleted.
Moltbook is a social platform designed exclusively for AI agents - positioned as the “front page of the agent internet.” The platform allows AI agents to post content, comment, vote, and build reputation through a karma system, creating what appears to be a thriving social network where AI is the primary participant.
Over the past few days, Moltbook gained significant attention in the AI community. OpenAI founding member Andrej Karpathy described it as “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” noting how agents were “self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately.”
The Moltbook founder explained publicly on X that he “vibe-coded” the platform:
"I didn’t write a single line of code for @moltbook. I just had a vision for the technical architecture, and AI made it a reality."
This practice, while revolutionary, can lead to dangerous security oversights - similar to previous vulnerabilities we have identified, including the DeepSeek data leak and Base44 Authentication Bypass.
We conducted a non-intrusive security review, simply by browsing like normal users. Within minutes, we discovered a Supabase API key exposed in client-side JavaScript, granting unauthenticated access to the entire production database - including read and write operations on all tables.
The exposed data told a different story than the platform’s public image - while Moltbook boasted 1.5 million registered agents, the database revealed only 17,000 human owners behind them - an 88:1 ratio. Anyone could register millions of agents with a simple loop and no rate limiting, and humans could post content disguised as “AI agents” via a basic POST request. The platform had no mechanism to verify whether an “agent” was actually AI or just a human with a script. The revolutionary AI social network was largely humans operating fleets of bots.
When navigating to Moltbook’s website, we examined the client-side JavaScript bundles loaded automatically by the page. Modern web applications bundle configuration values into static JavaScript files, which can inadvertently expose sensitive credentials. This is a recurring pattern we’ve observed in vibe-coded applications - API keys and secrets frequently end up in frontend code, visible to anyone who inspects the page source, often with significant security consequences.
By analyzing the production JavaScript bundle, we were able to extract the exposed Supabase credentials.
The discovery of these credentials does not automatically indicate a security failure, as Supabase is designed to operate with certain keys exposed to the client - the real danger lies in the configuration of the backend they point to.
Supabase is a popular open-source Firebase alternative providing hosted PostgreSQL databases with REST APIs. It’s become especially popular with vibe-coded applications due to its ease of setup. When properly configured with Row Level Security (RLS), the public API key is safe to expose - it acts like a project identifier. However, without RLS policies, this key grants full database access to anyone who has it.
In Moltbook’s implementation, this critical line of defense was missing.
Using the discovered API key, we tested whether the recommended security measures were in place. We attempted to query the REST API directly - a request that should have returned an empty array or an authorization error if RLS were active.
Instead, the database responded exactly as if we were an administrator. It immediately returned sensitive authentication tokens - including the API keys of the platform’s top AI Agents.
This confirmed unauthenticated access to user credentials that would allow complete account impersonation of any user on the platform.
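For illustration, the kind of unauthenticated request involved looks roughly like the sketch below; the project URL, key, and table name are hypothetical placeholders, not Moltbook’s real values.

```python
# Illustrative only: how an exposed Supabase anon key can be replayed against the
# auto-generated PostgREST API when Row Level Security is missing. The project
# URL, key, and table name are hypothetical placeholders.
import requests

SUPABASE_URL = "https://example-project.supabase.co"            # hypothetical
ANON_KEY = "<anon key scraped from the client-side JS bundle>"  # hypothetical

resp = requests.get(
    f"{SUPABASE_URL}/rest/v1/agents",
    headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
    params={"select": "*", "limit": "5"},
)
# With RLS policies in place this returns an empty list or a 401/403;
# without them, real rows come back to an unauthenticated caller.
print(resp.status_code, resp.json())
```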
By leveraging Supabase’s PostgREST error messages, we enumerated additional tables. Querying non-existent table names returned hints revealing the actual schema.
Using this technique combined with GraphQL introspection, we mapped the complete database schema and found roughly 4.75 million records exposed.
The agents table exposed authentication credentials for every registered agent in the database, including:
- claim_token - Token used to claim ownership of an agent
With these credentials, an attacker could fully impersonate any agent on the platform - posting content, sending messages, and interacting as that agent. This included high-karma accounts and well-known persona agents. Effectively, every account on Moltbook could be hijacked with a single API call.
Additionally, by querying the GraphQL endpoint, we discovered a new observers table containing 29,631 additional email addresses - these were early access signups for Moltbook’s upcoming “Build Apps for AI Agents” product.
Unlike Twitter handles which were publicly displayed on profiles, email addresses were meant to stay private - but were fully exposed in the database.
While examining the table storing agent-to-agent messages to understand how agents interact, we discovered that conversations were stored without any encryption or access controls — some contained third-party API credentials, including plaintext OpenAI API keys shared between agents.
Beyond read access, we confirmed full write capabilities. Even after the initial fix that blocked read access to sensitive tables, write access to public tables remained open. We tested it and were able to successfully modify existing posts on the platform.
This proved that any unauthenticated user could:
- Edit any post on the platform
This raises questions about the integrity of all platform content - posts, votes, and karma scores - during the exposure window.
We promptly notified the team again to apply write restrictions via RLS policies.
Once the fix was confirmed, I could no longer revert the post as write access was blocked. The Moltbook team deleted the content a few hours later and thanked us for our report.
Vibe coding unlocks remarkable speed and creativity, enabling founders to ship real products with unprecedented velocity - as demonstrated by Moltbook. At the same time, today’s AI tools don’t yet reason about security posture or access controls on a developer’s behalf, which means configuration details still benefit from careful human review. In this case, the issue ultimately traced back to a single Supabase configuration setting - a reminder of how small details can matter at scale.
The 88:1 agent-to-human ratio shows how “agent internet” metrics can be easily inflated without guardrails like rate limits or identity verification. While Moltbook reported 1.5 million agents, these were associated with roughly 17,000 human accounts, an average of about 88 agents per person. At the time of our review, there were limited guardrails such as rate limiting or validation of agent autonomy. Rather than a flaw, this likely reflects how early the “agent internet” category still is: builders are actively exploring what agent identity, participation, and authenticity should look like, and the supporting mechanisms are still evolving.
Similarly, the platform’s approach to privacy highlights an important ecosystem-wide lesson. Users shared OpenAI API keys and other credentials in direct messages under the assumption of privacy, but a configuration issue made those messages publicly accessible. A single platform misconfiguration was enough to expose credentials for entirely unrelated services - underscoring how interconnected modern AI systems have become.
Write Access Introduces Far Greater Risk Than Data Exposure Alone
While data leaks are bad, the ability to modify content and inject prompts into an AI ecosystem introduces deeper integrity risks, including content manipulation, narrative control, and prompt injection that can propagate downstream to other AI agents. As AI-driven platforms grow, these distinctions become increasingly important design considerations.
Security, especially in fast-moving AI products, is rarely a one-and-done fix. We worked with the team through multiple rounds of remediation, with each iteration surfacing additional exposed surfaces: from sensitive tables, to write access, to GraphQL-discovered resources. This kind of iterative hardening is common in new platforms and reflects how security maturity develops over time.
Overall, Moltbook illustrates both the excitement and the growing pains of a brand-new category. The enthusiasm around AI-native social networks is well-founded, but the underlying systems are still catching up. The most important outcome here is not what went wrong, but what the ecosystem can learn as builders, researchers, and platforms collectively define the next phase of AI-native applications.
As AI continues to lower the barrier to building software, more builders with bold ideas but limited security experience will ship applications that handle real users and real data. That’s a powerful shift. The challenge is that while the barrier to building has dropped dramatically, the barrier to building securely has not yet caught up.
The opportunity is not to slow down vibe coding but to elevate it. Security needs to become a first class, built-in part of AI powered development. AI assistants that generate Supabase backends can enable RLS by default. Deployment platforms can proactively scan for exposed credentials and unsafe configurations. In the same way AI now automates code generation, it can also automate secure defaults and guardrails.
If we get this right, vibe coding does not just make software easier to build … it makes secure software the natural outcome and unlocks the full potential of AI-driven innovation.
Note: Security researcher Jameson O’Reilly also discovered the underlying Supabase misconfiguration, which has been reported by 404 Media. Wiz’s post shares our experience independently finding the issue, the full — unreported — scope of impact, and how we worked with Moltbook’s maintainer to improve security.
...
Read the original on www.wiz.io »
For the past couple years, I have transported my ‘working set’ of video and project data to and from work on an external Thunderbolt NVMe SSD.
But it’s always been slow when I do the sync. In a typical day, I may generate a new project folder with 500-1000 individual files, and dozens of them may be 1-10 GB in size.
The Thunderbolt drive I had was capable of well over 5 GB/sec, and my 10 Gbps network connection is capable of 1 GB/sec. I even upgraded my Thunderbolt drive to Thunderbolt 5 lately… though that was not the bottleneck.
I used the following rsync command to copy files from a network share mounted on my Mac to the drive (which I call “Shuttle”):
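A representative form of such a command (the paths, exclude pattern, and exact flags shown here are assumptions for illustration, not the verbatim invocation):

```sh
# Illustrative sketch - source/destination paths and flags are assumptions.
rsync -a --delete --progress --exclude='.fcpcache' \
  /Volumes/mercury/Projects/ /Volumes/Shuttle/Projects/
```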
mercury is so named because it’s a fast NVMe-backed NAS volume on my Arm NAS (all my network volumes are named after celestial bodies).
As a test, I deleted one of the dozen or so active projects off my ‘Shuttle’ drive, and ran my rsync copy:
The full copy took over 8 minutes, for a total of about 59 GiB of files copied. There are two problems:
1. rsync performs copies single-threaded, serially, meaning only one file is copied at a time.
2. Even for very large files, rsync seems to max out on this network share around 350 MB/sec.
I had been playing with different compression algorithms, trying to tar then pipe that to rsync, even experimenting with running the rsync daemon instead of SSH… but never could I get a significant speedup! In fact, some compression modes would actually slow things down as my energy-efficient NAS is running on some slower Arm cores, and they bog things down a bit single-threaded…
I’ve been using rclone as part of my 3-2-1 backup plan for years. It’s amazing at copying, moving, and syncing files from and to almost any place (including Cloud storage, local storage, NAS volumes, etc.), but I had somehow pigeonholed it as “for cloud to local or vice-versa”, and never considered it for local transfer, like over my own LAN.
But it has an option that allows transfers in parallel, –multi-thread-streams, which Stack Overflow user dantebarba suggested someone use in the same scenario.
So I gave it a try.
After fiddling a bit with the exact parameters to match rsync’s -a, and handling the weird symlinks like .fcpcache directories Final Cut Pro spits out inside project files, I came up with:
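A sketch of what such an rclone invocation can look like (again, paths and exact flags are assumptions; the key piece is the parallel-transfer options):

```sh
# Illustrative sketch - paths and exact flags are assumptions.
rclone copy /Volumes/mercury/Projects/ /Volumes/Shuttle/Projects/ \
  --multi-thread-streams 8 \
  --transfers 8 \
  --links \
  --exclude ".fcpcache/**" \
  --progress
```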
Using this method, I could see my Mac’s network connection quickly max out around 1 GB/sec, completing the same directory copy in 2 minutes:
I’m not 100% sure why rclone says 59 GB were copied, versus rsync’s 63 GB. Probably the exclusion of the .fcpcache directory? lol units… GiB vs GB ;)
But the conclusion—especially after seeing my 10 Gbps connection finally being fully utilized—is that rclone is about 4x faster working in parallel.
I also ran comparisons just changing out a couple files, and rclone and rsync were almost identical, as the full scan of the directory tree for metadata changes takes about the same time on both (about 18 seconds). It’s just the parallel file transfers that help rclone pull ahead.
...
Read the original on www.jeffgeerling.com »
When deploying large language models in production, the inference engine becomes a critical piece of infrastructure. Every LLM API you use — OpenAI, Claude, DeepSeek — is sitting on top of an inference engine like this. While most developers interact with LLMs through high-level APIs, understanding what happens beneath the surface—how prompts are processed, how requests are batched, and how GPU resources are managed—can significantly impact system design decisions.
This two-part series explores these internals through Nano-vLLM, a minimal (~1,200 lines of Python) yet production-grade implementation that distills the core ideas behind vLLM, one of the most widely adopted open-source inference engines.
Nano-vLLM was created by a contributor to DeepSeek, whose name appears on the technical reports of models like DeepSeek-V3 and R1. Despite its minimal codebase, it implements the essential features that make vLLM production-ready: prefix caching, tensor parallelism, CUDA graph compilation, and torch compilation optimizations. Benchmarks show it achieving throughput comparable to—or even slightly exceeding—the full vLLM implementation. This makes it an ideal lens for understanding inference engine design without getting lost in the complexity of supporting dozens of model architectures and hardware backends.
In Part 1, we focus on the engineering architecture: how the system is organized, how requests flow through the pipeline, and how scheduling decisions are made. We will treat the actual model computation as a black box for now—Part 2 will open that box to explore attention mechanisms, KV cache internals, and tensor parallelism at the computation level.
The entry point to Nano-vLLM is straightforward: an LLM class with a generate method. You pass in an array of prompts and sampling parameters, and get back the generated text. But behind this simple interface lies a carefully designed pipeline that transforms text into tokens, schedules computation efficiently, and manages GPU resources.
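To make that interface concrete, here is a usage sketch modeled on the project’s README; the exact class and parameter names are assumptions and may differ slightly from the current repository.

```python
# A usage sketch of the Nano-vLLM entry point; names are assumptions.
from nanovllm import LLM, SamplingParams

llm = LLM("Qwen/Qwen3-0.6B")                       # model path is illustrative
params = SamplingParams(temperature=0.6, max_tokens=128)
outputs = llm.generate(["Write a haiku about GPUs."], params)
print(outputs[0]["text"])                          # output structure assumed from the README
```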
When generate is called, each prompt string goes through a tokenizer—a model-specific component that splits natural language into tokens, the fundamental units that LLMs process. Different model families (Qwen, LLaMA, DeepSeek) use different tokenizers, which is why a prompt of the same length may produce different token counts across models. The tokenizer converts each prompt into a sequence: an internal data structure representing a variable-length array of token IDs. This sequence becomes the core unit of work flowing through the rest of the system.
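Conceptually, the tokenization step looks like the following sketch using a Hugging Face tokenizer (the model name is illustrative; Nano-vLLM wraps the resulting token IDs in its own sequence structure):

```python
from transformers import AutoTokenizer

# Each model family ships its own tokenizer, so the same prompt can yield
# different token counts across models.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
token_ids = tok.encode("Explain KV caching in one sentence.")
print(len(token_ids), token_ids[:8])  # a "sequence" is just this variable-length list of IDs
```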
Here’s where the architecture gets interesting. Rather than processing each sequence immediately, the system adopts a producer-consumer pattern with the Scheduler at its center. The add_request method acts as the producer: it converts prompts to sequences and places them into the Scheduler’s queue. Meanwhile, a separate step loop acts as the consumer, pulling batches of sequences from the Scheduler for processing. This decoupling is key—it allows the system to accumulate multiple sequences and process them together, which is where the performance gains come from.
Why does batching matter? GPU computation has significant fixed overhead—initializing CUDA kernels, transferring data between CPU and GPU memory, and synchronizing results. If you process one sequence at a time, you pay this overhead for every single request. By batching multiple sequences together, you amortize this overhead across many requests, dramatically improving overall throughput.
However, batching comes with a trade-off. When three prompts are batched together, each must wait for the others to complete before any results are returned. The total time for the batch is determined by the slowest sequence. This means: larger batches yield higher throughput but potentially higher latency for individual requests; smaller batches yield lower latency but reduced throughput. This is a fundamental tension in inference engine design, and the batch size parameters you configure directly control this trade-off.
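A toy calculation makes the trade-off concrete; the overhead and per-sequence costs below are made-up numbers, not measurements of any real GPU.

```python
# Toy model of the batching trade-off described above.
overhead_ms = 5.0   # fixed per-step cost: kernel launches, host<->device transfers, sync
per_seq_ms = 1.0    # marginal cost of adding one more sequence to the batch

for batch_size in (1, 4, 16):
    step_ms = overhead_ms + per_seq_ms * batch_size   # latency of one step for everyone in it
    throughput = batch_size / step_ms                 # sequences advanced per millisecond
    print(f"batch={batch_size:2d}  step={step_ms:5.1f} ms  throughput={throughput:.2f} seq/ms")

# Larger batches amortize the fixed overhead (higher throughput), but every
# sequence now waits for the whole step (higher per-request latency).
```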
Before diving into the Scheduler, we need to understand a crucial distinction. LLM inference happens in two phases:
* Prefill: Processing the input prompt. All input tokens are processed together to build up the model’s internal state. During this phase, the user sees nothing.
* Decode: Generating output tokens. The model produces one token at a time, each depending on all previous tokens. This is when you see text streaming out.
For a single sequence, there is exactly one prefill phase followed by many decode steps. The Scheduler needs to distinguish between these phases because they have very different computational characteristics—prefill processes many tokens at once, while decode processes just one token per step.
The Scheduler is responsible for deciding which sequences to process and in what order. It maintains two queues:
* Waiting Queue: Sequences that have been submitted but not yet started. New sequences from add_request always enter here first.
* Running Queue: Sequences that are actively being processed—either in prefill or decode phase.
When a sequence enters the Waiting queue, the Scheduler checks with another component called the Block Manager to allocate resources for it. Once allocated, the sequence moves to the Running queue. The Scheduler then selects sequences from the Running queue for the next computation step, grouping them into a batch along with an action indicator (prefill or decode).
What happens when GPU memory fills up? The KV cache (which stores intermediate computation results) has limited capacity. If a sequence in the Running queue cannot continue because there’s no room to store its next token’s cache, the Scheduler preempts it—moving it back to the front of the Waiting queue. This ensures the sequence will resume as soon as resources free up, while allowing other sequences to make progress.
When a sequence completes (reaches an end-of-sequence token or maximum length), the Scheduler removes it from the Running queue and deallocates its resources, freeing space for waiting sequences.
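Put together, one scheduling decision can be sketched as follows. This is illustrative only: the Block Manager method names are assumptions, and the waiting queue is assumed to be a deque so preempted sequences can rejoin at the front:
def schedule(self):
    batch = []
    # Prefill first: admit waiting sequences while the Block Manager has room.
    while self.waiting and self.block_manager.can_allocate(self.waiting[0]):
        seq = self.waiting.popleft()
        self.block_manager.allocate(seq)
        self.running.append(seq)
        batch.append(seq)
    if batch:
        return batch, "prefill"
    # Otherwise decode: every running sequence needs space for one more token.
    for seq in list(self.running):
        if self.block_manager.can_append(seq):
            batch.append(seq)
        else:
            # Preempt: release its blocks and put it at the front of the waiting queue.
            self.running.remove(seq)
            self.block_manager.deallocate(seq)
            self.waiting.appendleft(seq)
    return batch, "decode"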
The Block Manager is where vLLM’s memory management innovation lives. To understand it, we first need to introduce a new resource unit: the block.
A sequence is a variable-length array of tokens—it can be 10 tokens or 10,000. But variable-length allocations are inefficient for GPU memory management. The Block Manager solves this by dividing sequences into fixed-size blocks (default: 256 tokens each).
A 700-token sequence would occupy three blocks: two full blocks (256 tokens each) and one partial block (188 tokens, with 68 slots unused). Importantly, tokens from different sequences never share a block—but a long sequence will span multiple blocks.
Here’s where it gets clever. Each block’s content is hashed, and the Block Manager maintains a hash-to-block-id mapping. When a new sequence arrives, the system computes hashes for its blocks and checks if any already exist in the cache.
If a block with the same hash exists, the system reuses it by incrementing a reference count—no redundant computation or storage needed. This is particularly powerful for scenarios where many requests share common prefixes (like system prompts in chat applications). The prefix only needs to be computed once; subsequent requests can reuse the cached results.
A subtle but important point: the Block Manager lives in CPU memory and only tracks metadata—which blocks are allocated, their reference counts, and hash mappings. The actual KV cache data lives on the GPU. The Block Manager is the control plane; the GPU memory is the data plane. This separation allows fast allocation decisions without touching GPU memory until actual computation happens.
When blocks are deallocated, the Block Manager marks them as free immediately, but the GPU memory isn’t zeroed—it’s simply overwritten when the block is reused. This avoids unnecessary memory operations.
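The bookkeeping can be pictured roughly as below. The field names (free_blocks, hash_to_block, ref_count, block_table) are assumptions, as is the detail that only full blocks are hashed for reuse; the hashing-and-reference-counting idea itself is what the text above describes:
BLOCK_SIZE = 256

def allocate(self, seq):
    for start in range(0, len(seq.token_ids), BLOCK_SIZE):
        tokens = seq.token_ids[start:start + BLOCK_SIZE]
        # Assume only full blocks are hashed; a partial tail block is not shareable.
        h = hash(tuple(tokens)) if len(tokens) == BLOCK_SIZE else None
        if h is not None and h in self.hash_to_block:
            block = self.hash_to_block[h]
            block.ref_count += 1            # prefix cache hit: reuse, no recompute
        else:
            block = self.free_blocks.pop()  # CPU-side metadata only; GPU memory untouched
            block.ref_count = 1
            if h is not None:
                self.hash_to_block[h] = block
        seq.block_table.append(block.block_id)
Deallocation is the mirror image: decrement each block's reference count and return it to the free list when the count reaches zero, again without touching GPU memory.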
The Model Runner is responsible for actually executing the model on GPU(s). When the step loop retrieves a batch of sequences from the Scheduler, it passes them to the Model Runner along with the action (prefill or decode).
When a model is too large for a single GPU, Nano-vLLM supports tensor parallelism (TP)—splitting the model across multiple GPUs. With TP=8, for example, eight GPUs work together to run a single model. The processes are organized in a leader-worker arrangement:
* Rank 0 (Leader): Receives commands from the step loop, executes its portion, and coordinates with workers.
* Ranks 1 to N-1 (Workers): Continuously poll a shared memory buffer for commands from the leader.
When the leader receives a run command, it writes the method name and arguments to shared memory. Workers detect this, read the parameters, and execute the same operation on their respective GPUs. Each worker knows its rank, so it can compute its designated portion of the work. This shared-memory approach is efficient for single-machine multi-GPU setups, avoiding network overhead.
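In outline, the coordination looks like the sketch below; wait_for_command and write_command are hypothetical helpers standing in for the shared-memory polling described above:
def worker_loop(rank, shm, runner):
    # Ranks 1..N-1: poll shared memory for the leader's next command.
    while True:
        method, args = wait_for_command(shm)
        if method == "exit":
            break
        getattr(runner, method)(*args)       # same operation, this rank's shard

def leader_call(shm, runner, method, *args):
    write_command(shm, method, args)         # rank 0 broadcasts the command
    return getattr(runner, method)(*args)    # then runs its own shard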
Before invoking the model, the Model Runner prepares the input based on the action:
* Prepare Prefill: Flattens all prompt tokens from the batched sequences into a single input, along with their positions and the slot mappings that indicate where each token’s KV cache entry will be written.
* Prepare Decode: Batches single tokens (one per sequence) with their positions and slot mappings for KV cache access.
This preparation also involves converting CPU-side token data into GPU tensors—the point where data crosses from CPU memory to GPU memory.
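As a rough sketch of the decode case (the tensor names, the Sequence attributes, and the slot-mapping arithmetic are assumptions, following the 256-token block layout described earlier):
import torch

def prepare_decode(self, seqs):
    input_ids = torch.tensor([s.last_token for s in seqs], device="cuda")
    positions = torch.tensor([len(s) - 1 for s in seqs], device="cuda")
    # Slot for each new token's KV entry: last block id * block size + offset in block.
    slot_mapping = torch.tensor(
        [s.block_table[-1] * 256 + (len(s) - 1) % 256 for s in seqs],
        device="cuda",
    )
    return input_ids, positions, slot_mapping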
For decode steps (which process just one token per sequence), kernel launch overhead can become significant relative to actual computation. CUDA Graphs address this by recording a sequence of GPU operations once, then replaying them with different inputs. Nano-vLLM pre-captures CUDA graphs for common batch sizes (1, 2, 4, 8, 16, up to 512), allowing decode steps to execute with minimal launch overhead.
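The capture-and-replay pattern itself is standard PyTorch, sketched below with a stand-in model rather than the project's actual decode graph:
import torch

model = torch.nn.Embedding(50000, 1024).cuda()  # stand-in model with fixed shapes

static_input = torch.zeros(8, dtype=torch.int64, device="cuda")  # one graph per batch size

# Warm up outside the graph (recommended before capture), then record once.
with torch.cuda.stream(torch.cuda.Stream()):
    model(static_input)
torch.cuda.synchronize()

graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
    static_output = model(static_input)   # kernels recorded into the graph

def decode_step(batch_tokens):
    static_input.copy_(batch_tokens)      # write new inputs into the captured buffer
    graph.replay()                        # replay without per-kernel launch overhead
    return static_output.clone()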
The model doesn’t output a single token—it outputs logits, raw scores over the entire vocabulary that a softmax turns into a probability distribution. The final step is sampling: selecting one token from that distribution.
The temperature parameter controls this selection. Mathematically, it adjusts the shape of the probability distribution:
* Low temperature (approaching 0): The distribution becomes sharply peaked. The highest-probability token is almost always selected, making outputs more deterministic and focused.
* High temperature: The distribution flattens. Lower-probability tokens have a better chance of being selected, making outputs more diverse and creative.
This is where the “randomness” in LLM outputs comes from—and why the same prompt can produce different responses. The sampling step selects from a valid range of candidates, introducing controlled variability.
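Concretely, temperature rescales the logits before the softmax. A standard formulation (not project-specific code):
import torch

def sample(logits, temperature):
    # logits: [batch, vocab_size] raw scores from the model's final layer
    if temperature == 0:                    # common convention: temperature 0 means greedy
        return torch.argmax(logits, dim=-1)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)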
In Part 2, we’ll open the black box of the model itself. We’ll explore:
* How the model transforms tokens into hidden states and back
* The attention mechanism and why multi-head attention matters
* How KV cache is physically laid out on GPU memory
* How tensor parallelism works at the computation level
Understanding these internals will complete the picture—from prompt string to generated text, with nothing left hidden.
...
Read the original on neutree.ai »
Over the past month or so, several enterprising contributors have taken an interest in the zig libc subproject. The idea here is to incrementally delete redundant code, by providing libc functions as Zig standard library wrappers rather than as vendored C source files. In many cases, these functions are one-to-one mappings, such as memcpy or atan2, or trivially wrap a generic function, like strnlen:
fn strnlen(str: [*:0]const c_char, max: usize) callconv(.c) usize {
    return std.mem.findScalar(u8, @ptrCast(str[0..max]), 0) orelse max;
}
So far, roughly 250 C source files have been deleted from the Zig repository, with 2032 remaining.
With each function that makes the transition, Zig gains independence from third-party projects and from the C programming language, compilation speed improves, Zig’s installation is simplified and shrinks in size, and user applications that statically link libc enjoy reduced binary size.
Additionally, a recent enhancement makes zig libc share the Zig Compilation Unit (ZCU) with other Zig code rather than being built as a separate static archive that is linked in later. This is one of the advantages of Zig having an integrated compiler and linker. When the exported libc functions share the ZCU, redundant code is eliminated because functions can be optimized together. It’s kind of like enabling LTO (Link-Time Optimization) across the libc boundary, except it’s done properly in the frontend instead of too late, in the linker.
Furthermore, when this work is combined with the recent std.Io changes, there is potential for users to seamlessly control how libc performs I/O - for example, forcing all calls to read and write to participate in an io_uring event loop, even though that code was not written with such a use case in mind. Or, resource leak detection could be enabled for third-party C code. For now this is only a vaporware idea that has not been experimented with, but it intrigues me.
Big thanks to Szabolcs Nagy for libc-test. This project has been a huge help in making sure that we don’t regress any math functions.
As a reminder to our users: now that Zig is transitioning to being the static libc provider, if you encounter issues with the musl, mingw-w64, or wasi-libc functionality that Zig provides, please file bug reports against Zig first, so we don’t annoy upstream maintainers with bugs that are actually in Zig rather than in the independent libc projects whose code is no longer vendored.
...
Read the original on ziglang.org »