Ghostty is now fiscally sponsored by Hack Club, a registered 501(c)(3) non-profit.
Fiscal sponsorship is a legal and financial arrangement in which a recognized non-profit extends its tax-exempt status to a project that aligns with its mission. This allows Ghostty to operate as a charitable initiative while Hack Club manages compliance, donations, accounting, and governance oversight.
Being a non-profit clearly demonstrates our commitment to keeping Ghostty free and open source for everyone. It paves the way for a model of sustainable development beyond my personal involvement. It also provides important legal protections and assurances to the people and communities that adopt and use Ghostty.
Since the beginning of the project in 2023 and the private beta days of Ghostty, I’ve repeatedly expressed my intention that Ghostty legally become a non-profit. This intention stems from several core beliefs I have.
First, I want to lay bricks for a sustainable future for Ghostty that doesn’t depend on my personal involvement technically or financially. Financially, I am still the largest donor to the project, and I intend to remain so, but a non-profit structure allows others to contribute financially without fear of misappropriation or misuse of funds (as protected by legal requirements and oversight from the fiscal sponsor).
Second, I want to squelch any possible concerns about a “rug pull”. A non-profit structure provides enforceable assurances: the mission cannot be quietly changed, funds cannot be diverted to private benefit, and the project cannot be sold off or repurposed for commercial gain. The structure legally binds Ghostty to the public-benefit purpose it was created to serve.
Finally, despite being decades-old technology, terminals and terminal-related technologies remain foundational to modern computing and software infrastructure. They’re often out of the limelight, but they’re ever present on developer machines, embedded in IDEs, visible as read-only consoles for continuous integration and cloud services, and still one of the primary ways remote access is done on servers around the world.
I believe infrastructure of this kind should be stewarded by a mission-driven, non-commercial entity that prioritizes public benefit over private profit. That structure increases trust, encourages adoption, and creates the conditions for Ghostty to grow into a widely used and impactful piece of open-source infrastructure.
From a technical perspective, nothing changes for Ghostty. Our technical goals for the project remain the same, the license (MIT) remains the same, and we continue our work towards better Ghostty GUI releases and libghostty.
Financially, Ghostty can now accept tax-deductible donations in the United States. This opens up new avenues for funding the project and sustaining development over the long term. Most immediately, I’m excited to begin compensating contributors, but I also intend to support upstream dependencies, fund community events, and pay for boring operational costs.
All our financial transactions will be transparent down to individual transactions for both inflows and outflows. You can view our public ledger at Ghostty’s page on Hack Club Bank. At the time of writing, this is empty, but you’ll soon see some initial funding from me and the beginning of paying for some of our operational costs.
All applicable names, marks, and intellectual property associated with Ghostty have been transferred to Hack Club and are now owned under the non-profit umbrella. Copyright continues to be held by individual contributors under the continued and existing license structure.
From a leadership perspective, I remain the project lead and final authority on all decisions, but as stated earlier, the creation of a non-profit structure lays the groundwork for an eventual future beyond this model.
As our fiscal sponsor, Hack Club provides essential services to Ghostty, including accounting, legal compliance, and governance oversight. To support this, 7% of all donations to Ghostty go to Hack Club to cover these costs in addition to supporting their broader mission of empowering young people around the world interested in technology and coding.
In the words of Zach Latta, Hack Club’s founder and executive director, this is a “good-for-good” trade. Instead of donor fees going to a for-profit management company or covering the pure overhead of a single project, the fees go to another non-profit doing important work in the tech community, and the overhead is amortized across many projects.
In addition to the 7% fees, my family is personally donating $150,000 directly to the Hack Club project (not to Ghostty within it). Hack Club does amazing work and I would’ve supported them regardless of their fiscal sponsorship of Ghostty, but I wanted to pair these two things together to amplify the impact of both.
Please consider donating to support Ghostty’s continued development.
I recognize that Ghostty is already in an abnormally fortunate position to have me as a backer, but I do envision a future where Ghostty is more equally supported by a broader community. And with our new structure, you can be assured that your funds will go toward public-benefit goals.
This post isn’t meant to be a direct fundraising pitch, so it is purposely lacking critical details about our funding goals, budget, project goals, project metrics, etc. I’ll work on those in the future. In the meantime, if you’re interested in talking more about supporting Ghostty, please email me at m@mitchellh.com.
I’m thankful for Hack Club and their team for working with us to make this happen. I’m also thankful for the Ghostty community who has supported this project and has trusted me and continues to trust me to steward it responsibly.
For more information about Ghostty’s non-profit structure, see the dedicated page on Ghostty’s website.
...
Read the original on mitchellh.com »
Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also make lots of other things. I’m on Bluesky, Mastodon, and GitHub.
Advent of Code is an Advent calendar of small programming puzzles for a variety of skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other.
You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.
If you’d like to support Advent of Code, you can do so indirectly by helping to [Share] it with others or directly via AoC++.
If you get stuck, try your solution against the examples given in the puzzle; you should get the same answers. If not, re-read the description. Did you misunderstand something? Is your program doing something you don’t expect? After the examples work, if your answer still isn’t correct, build some test cases for which you can verify the answer by hand and see if those work with your program. Make sure you have the entire puzzle input. If you’re still stuck, maybe ask a friend for help, or come back to the puzzle later. You can also ask for hints in the subreddit.
Is there an easy way to select entire code blocks? You should be able to triple-click code blocks to select them. You’ll need JavaScript enabled.
#!/usr/bin/env perl
use warnings;
use strict;
print "You can test it out by ";
print "triple-clicking this code.\n";
How does authentication work? Advent of Code uses OAuth to confirm your identity through other services. When you log in, you only ever give your credentials to that service - never to Advent of Code. Then, the service you use tells the Advent of Code servers that you’re really you. In general, this reveals no information about you beyond what is already public; here are examples from Reddit and GitHub. Advent of Code will remember your unique ID, names, URL, and image from the service you use to authenticate.
Why was this puzzle so easy / hard? The difficulty and subject matter varies throughout each event. Very generally, the puzzles get more difficult over time, but your specific skillset will make each puzzle significantly easier or harder for you than someone else. Making puzzles is tricky.
Why do the puzzles unlock at midnight EST/UTC-5? Because that’s when I can consistently be available to make sure everything is working. I also have a family, a day job, and even need sleep occasionally. If you can’t participate at midnight, that’s not a problem; if you want to race, many people use private leaderboards to compete with people in their area.
I find the text on the site hard to read. Is there a high contrast mode? There is a high contrast alternate stylesheet. Firefox supports these by default (View -> Page Style -> High Contrast).
I have a puzzle idea! Can I send it to you? Please don’t. Because of legal issues like copyright and attribution, I don’t accept puzzle ideas, and I won’t even read your email if it looks like one just in case I use parts of it by accident.
Did I find a bug with a puzzle? Once a puzzle has been out for even an hour, many people have already solved it; after that point, bugs are very unlikely. Start by asking on the subreddit.
Should I try to get a fast solution time? Maybe. Solving puzzles is hard enough on its own, but trying for a fast time also requires many additional skills and a lot of practice; speed-solves often look nothing like code that would pass a code review. If that sounds interesting, go for it! However, you should do Advent of Code in a way that is useful to you, and so it is completely fine to choose an approach that meets your goals and ignore speed entirely.
Why did the number of days per event change? It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).
What happened to the global leaderboard? The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)
While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc? If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.
Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
Can I copy/redistribute part of Advent of Code? Please don’t. Advent of Code is free to use, not free to copy. If you’re posting a code repository somewhere, please don’t include parts of Advent of Code like the puzzle text or your inputs. If you’re making a website, please don’t make it look like Advent of Code or name it something similar.
...
Read the original on adventofcode.com »
The Foundation that promotes the Zig programming language has quit GitHub due to what its leadership perceives as the code sharing site’s decline.
The drama began in April 2025 when GitHub user AlekseiNikiforovIBM started a thread titled “safe_sleep.sh rarely hangs indefinitely.” GitHub addressed the problem in August, but didn’t reveal that in the thread, which remained open until Monday.
That timing appears notable. Last week, Andrew Kelley, president and lead developer of the Zig Software Foundation, announced that the Zig project is moving to Codeberg, a non-profit git hosting service, because GitHub no longer demonstrates commitment to engineering excellence.
One piece of evidence he offered for that assessment was the “safe_sleep.sh rarely hangs indefinitely” thread.
“Most importantly, Actions has inexcusable bugs while being completely neglected,” Kelley wrote. “After the CEO of GitHub said to ‘embrace AI or get out’, it seems the lackeys at Microsoft took the hint, because GitHub Actions started ‘vibe-scheduling’ — choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked.”
Kelley’s gripe seems justified, as the bug discussed in the thread appears to have popped up following a code change in February 2022 that users flagged in prior bug reports.
The code change replaced instances of the posix “sleep” command with a “safe_sleep” script that failed to work as advertised. It was supposed to allow the GitHub Actions runner — the application that runs a job from a GitHub Actions workflow — to pause execution safely.
“The bug in this ‘safe sleep’ script is obvious from looking at it: if the process is not scheduled for the one-second interval in which the loop would return (due to $SECONDS having the correct value), then it simply spins forever,” wrote Zig core developer Matthew Lugg in a comment appended to the April bug thread.
“That can easily happen on a CI machine under extreme load. When this happens, it’s pretty bad: it completely breaks a runner until manual intervention. On Zig’s CI runner machines, we observed multiple of these processes which had been running for hundreds of hours, silently taking down two runner services for weeks.”
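The failure mode is easy to reproduce. Here is the described logic translated into a minimal Python sketch (the real script is shell; this rendering is mine, not the original code):

import time

def safe_sleep(duration_secs):
    # Exit only when the elapsed time is *exactly* the target, as in the
    # pattern Lugg describes. There is no actual sleep call anywhere.
    start = int(time.monotonic())
    while int(time.monotonic()) - start != duration_secs:
        pass  # spins at 100% CPU; if this process isn't scheduled during
              # the one matching second, the loop never terminates

safe_sleep(2)  # normally returns after ~2s, but can hang forever under load

Replacing the equality test with >= (or simply calling the platform sleep) removes both the spin and the hang.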
The fix was merged on August 20, 2025, from a separate issue opened back in February 2024. The related bug report from April 2025 remained open until Monday, December 1, 2025. A separate CPU usage bug remains unresolved.
Jeremy Howard, co-founder of Answer.AI and Fast.AI, said in a series of social media posts that users’ claims about GitHub Actions being in a poor state of repair appear to be justified.
“The bug,” he wrote, “was implemented in a way that, very obviously to nearly anyone at first glance, uses 100 percent CPU all the time, and will run forever unless the task happens to check the time during the correct second.”
He added that the platform-independent fix for the CPU issue proposed last February lingered for a year without review and was closed by the GitHub bot in March 2025 before being revived and merged.
“Whilst one could say that this is just one isolated incident, I can’t see how such an extraordinary collection of outright face-palming events could be made in any reasonably functioning organization,” Howard concluded.
GitHub did not immediately respond to a request for comment.
While Kelley has gone on to apologize for the incendiary nature of his post, Zig is not the only software project publicly parting ways with GitHub.
Over the weekend, Rodrigo Arias Mallo, creator of the Dillo browser project, said he’s planning to move away from GitHub owing to concerns about over-reliance on JavaScript, GitHub’s ability to deny service, declining usability, inadequate moderation tools, and “over-focusing on LLMs and generative AI, which are destroying the open web (or what remains of it) among other problems.”
Codeberg, for its part, has doubled its supporting membership since January, going from more than 600 members to over 1,200 as of last week.
GitHub has not disclosed how many of its users pay for its services presently. The code hosting biz had “over 1.3 million paid GitHub Copilot subscribers, up 30 percent quarter-over-quarter,” Microsoft CEO Satya Nadella said on the company’s Q2 2024 earnings call.
In Q4 2024, when GitHub reported an annual revenue run rate of $2 billion, GitHub Copilot subscriptions accounted for about 40 percent of the company’s annual revenue growth.
Nadella offered a different figure during Microsoft’s Q3 2025 earnings call: “we now have over 15 million GitHub Copilot users, up over 4X year-over-year.” It’s not clear how many GitHub users pay for Copilot, or for runner scripts that burned CPU cycles when they should have been sleeping. ®
...
Read the original on www.theregister.com »
The game itself is a Windows executable, right? At a core level, the Linux operating system does not even know how to load the program, and so, instead of invoking it through the OS, you invoke it through Proton, which is going to do the first step of setting up the address space, loading the segments of code into memory. The code coming from the app is all x86, and so Proton is a facilitator. It puts the existing code of the app in a format and a layout that the Linux OS can understand and then starts executing that code.
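For readers who want to try this, here is roughly what “invoking it through Proton” looks like from outside Steam’s UI. This is a hedged sketch: the Proton path, prefix directory, and game.exe are assumptions that vary by installation.

import os
import subprocess

home = os.path.expanduser("~")
# Assumed install locations; Proton ships inside Steam's library folder.
proton = f"{home}/.steam/steam/steamapps/common/Proton - Experimental/proton"
env = dict(
    os.environ,
    STEAM_COMPAT_CLIENT_INSTALL_PATH=f"{home}/.steam/steam",
    STEAM_COMPAT_DATA_PATH=f"{home}/proton-prefix",  # writable prefix dir
)
# Proton, not the Linux loader, maps the x86 Windows code into memory
# and starts executing it, as described above.
subprocess.run([proton, "run", "game.exe"], env=env)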
...
Read the original on www.theverge.com »
I grabbed lunch with a former Microsoft coworker I’ve always admired—one of those engineers who can take any idea, even a mediocre one, and immediately find the gold in it. I wanted her take on Wanderfugl 🐦, the AI-powered map I’ve been building full-time. I expected encouragement. At worst, overly generous feedback because she knows what I’ve sacrificed.
Instead, she reacted to it with a level of negativity I’d never seen her direct at me before.
When I finally got her to explain what was wrong, none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she’s forced to use at work. My product barely featured. Her reaction wasn’t about me at all. It was about her entire environment.
Her PM had been laid off months earlier. The team asked why. Their director told them it was because the PM org “wasn’t effective enough at using Copilot 365.”
I nervously laughed. This director got up in a group meeting and said that someone lost their job over this?
After a pause I tried to share how much better I’ve been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn’t fully grok how tone deaf I was being though. She’s drowning in resentment.
I left the lunch deflated and weirdly guilty, like building an AI product made me part of the problem.
But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn’t true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard “AI.”
The people at big tech in Seattle are not ok
When I joined Microsoft, there was still a sense of possibility. Satya was pushing “growth mindset” everywhere. Leaders talked about empowerment and breaking down silos. And even though there was always a gap between the slogans and reality, there was room to try things.
I leaned into it. I pushed into areas nobody wanted to touch, like Windows update compression, because it lived awkwardly across three teams. Somehow, a 40% improvement made it out alive. Leadership backed it. The people trying to kill it shrank back into their fiefdoms. It felt like the culture wanted change.
That world is gone.
When the layoff directive hit, every org braced for impact. Anything not strictly inside the org’s charter was axed. I went from shipping a major improvement in Windows 11 to having zero projects overnight. I quit shortly after. In hindsight, getting laid off with severance might’ve been better than watching the culture collapse in slow motion.
Then came the AI panic.
If you could classify your project as “AI,” you were safe and prestigious. If you couldn’t, you were nobody. Overnight, most engineers got rebranded as “not AI talent.” And then came the final insult: everyone was forced to use Microsoft’s AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors’ tools. Sometimes worse than doing the work manually.
But you weren’t allowed to fix them—that was the AI org’s turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren’t “embracing AI.”
Bring up AI in a Seattle coffee shop now and people react like you’re advocating asbestos.
Amazon folks are slightly more insulated, but not by much. The old Seattle deal—Amazon treats you poorly but pays you more—only masks the rot.
This belief system—that AI is useless and that you’re not good enough to work on it anyway—hurts three groups:
1. The companies.
They’ve taught their best engineers that innovation isn’t their job.
2. The engineers.
They’re stuck in resentment and self-doubt while their careers stall.
3. Anyone trying to build anything new in Seattle.
Say “AI” and people treat you like a threat or an idiot.
And the loop feeds itself:
Engineers don’t try because they think they can’t.
Companies don’t empower them because they assume they shouldn’t.
Bad products reinforce the belief that AI is doomed.
The spiral locks in.
My former coworker—the composite of three people for anonymity—now believes she’s both unqualified for AI work and that AI isn’t worth doing anyway. She’s wrong on both counts, but the culture made sure she’d land there.
Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.
...
Read the original on jonready.com »
A browser extension for avoiding AI slop.
Download it for Chrome or Firefox.
This is a search tool that will only return content created before ChatGPT’s first public release on November 30, 2022.
Since the public release of ChatGPT and other large language models, the internet has been increasingly polluted by AI-generated text, images, and video. This browser extension uses the Google search API to only return content published before Nov 30th, 2022, so you can be sure that it was written or produced by human hands.
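The underlying approach can be sketched against the Google Custom Search JSON API, which accepts a date-range restriction. The key, engine ID, and query below are placeholders, and the extension’s actual implementation may differ:

import requests

params = {
    "key": "YOUR_API_KEY",   # placeholder credentials
    "cx": "YOUR_ENGINE_ID",  # placeholder search engine ID
    "q": "terminal emulators",
    # Only return pages dated before ChatGPT's release on Nov 30, 2022.
    "sort": "date:r:19900101:20221129",
}
resp = requests.get("https://www.googleapis.com/customsearch/v1", params=params)
for item in resp.json().get("items", []):
    print(item["title"], "->", item["link"])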
...
Read the original on tegabrain.com »
Accepting US car standards would risk European lives, warn cities and civil society
EU officials must revisit the hastily agreed trade deal with the US, where the EU stated that it “intends to accept” lower US vehicle standards, say cities — including Paris, Brussels and Amsterdam, and more than 75 civil society organisations. In a letter to European lawmakers, the signatories warn that aligning European standards with laxer rules in the US would undermine the EU’s global leadership in road safety, public health, climate policy and competitiveness.
The deal agreed over summer states that “with respect to automobiles, the United States and the European Union intend to accept and provide mutual recognition to each other’s standards.” Yet, EU vehicle safety regulations have supported a 36% reduction in European road deaths since 2010. By contrast, road deaths in the US over the same period increased 30%, with pedestrian deaths up 80% and cyclist deaths up 50%.
Europe currently has mandatory requirements for life-saving technologies, such as pedestrian protection, automated emergency braking and lane-keeping assistance. Some of the most basic pedestrian protection requirements which have long been in place in the EU, such as deformation zones in the front of vehicles to reduce crash severity and the prohibition of sharp edges have made cars like the Tesla Cybertruck illegal to sell in Europe.
“Europe built its reputation on pioneering robust vehicle standards. To accept lower US standards would undo decades of EU progress,” say the signatories. According to the letter, “the consequences of such a move for European road safety would be profound.”
The EU is set to apply limits to harmful pollution from brake and tyre wear from 2026 onwards, while at the same time the US is moving to weaken air pollution rules for vehicles. Accepting weaker US standards would increase European exposure to pollutants linked to asthma, cancer and numerous cardiovascular and neurological conditions, warn the signatories.
Major EU brands such as BMW, Mercedes and Stellantis already build large numbers of vehicles in US automotive plants to EU standards — particularly larger SUVs. However, if the lower US vehicle standards are accepted in Europe, these production lines could switch to the lower US standards before shipping vehicles to the EU. Overall, vehicle production would shift from the EU to the US. Accepting lower US car standards would risk large-scale job losses in EU car plants and across Europe’s automotive supply chain.
The European Commission is already working to tighten Individual Vehicle Approval (IVA), which is being abused to put thousands of oversized US pick-up trucks on EU streets without complying with core EU safety, air pollution and climate standards. To now accept lower US vehicle standards across the board would open the floodgates to US pick-ups and large SUVs.
The signatories urge EU lawmakers to oppose the intention to accept lower US vehicle standards in the EU–US Joint Statement and affirm publicly that EU vehicle standards are non-negotiable.
...
Read the original on etsc.eu »
We thank Sumit Agarwal, Ron Kaniel, Roni Michaely, Lyndon Moore, Antoinette Schoar, and seminar/conference participants at the Chinese University of Hong Kong, Columbia Business School, Deakin University, Macquarie University, Peking University (HSBC and Guanghua), Shanghai Lixin University of Accounting and Finance, Tsinghua University, University of Sydney, University of Technology Sydney, the 2023 Australasian Finance and Banking Conference, 2023 Finance Down Under, and the 2023 Five Star Workshop in Finance for their helpful comments. We thank Lei Chen, Jingru Pan, Yiyun Yan, Zitong Zeng, and Tianyue Zheng for their excellent research assistance. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.
...
Read the original on www.nber.org »
Today, we announce Mistral 3, the next generation of Mistral models. Mistral 3 includes three state-of-the-art small, dense models (14B, 8B, and 3B) and Mistral Large 3 — our most capable model to date — a sparse mixture-of-experts trained with 41B active and 675B total parameters. All models are released under the Apache 2.0 license. Open-sourcing our models in a variety of compressed formats empowers the developer community and puts AI in people’s hands through distributed intelligence.
The Ministral models represent the best performance-to-cost ratio in their category. At the same time, Mistral Large 3 joins the ranks of frontier instruction-fine-tuned open-source models.
Mistral Large 3 is one of the best permissive open-weight models in the world, trained from scratch on 3,000 NVIDIA H200 GPUs. Mistral Large 3 is Mistral’s first mixture-of-experts model since the seminal Mixtral series, and represents a substantial step forward in pretraining at Mistral. After post-training, the model achieves parity with the best instruction-tuned open-weight models on the market on general prompts, while also demonstrating image understanding and best-in-class performance on multilingual conversations (i.e., non-English/Chinese).
Mistral Large 3 debuts at #2 in the OSS non-reasoning models category (#6 amongst OSS models overall) on the LMArena leaderboard.
We release both the base and instruction fine-tuned versions of Mistral Large 3 under the Apache 2.0 license, providing a strong foundation for further customization across the enterprise and developer communities. A reasoning version is coming soon!
Working in conjunction with vLLM and Red Hat, we have made Mistral Large 3 very accessible to the open-source community. We’re releasing a checkpoint in NVFP4 format, built with llm-compressor. This optimized checkpoint lets you run Mistral Large 3 efficiently on Blackwell NVL72 systems and on a single 8×A100 or 8×H100 node using vLLM.
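As an illustration, serving the checkpoint with vLLM’s offline API on an 8-GPU node could look like the sketch below; the Hugging Face model ID is an assumption and may not match the published name:

from vllm import LLM, SamplingParams

# Assumed model ID; tensor parallelism spreads the MoE weights over 8 GPUs.
llm = LLM(
    model="mistralai/Mistral-Large-3-Instruct-NVFP4",
    tensor_parallel_size=8,
)
outputs = llm.generate(
    ["Summarize the Mistral 3 release in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)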
Delivering advanced open-source AI models requires broad optimization, achieved through a partnership with NVIDIA. All our new Mistral 3 models, from Large 3 to Ministral 3, were trained on NVIDIA Hopper GPUs to tap high-bandwidth HBM3e memory for frontier-scale workloads. NVIDIA’s extreme co-design approach brings hardware, software, and models together. NVIDIA engineers enabled efficient inference support for TensorRT-LLM and SGLang for the complete Mistral 3 family, for efficient low-precision execution.
For Large 3’s sparse MoE architecture, NVIDIA integrated state-of-the-art Blackwell attention and MoE kernels, added support for prefill/decode disaggregated serving, and collaborated with Mistral on speculative decoding, enabling developers to efficiently serve long-context, high-throughput workloads on GB200 NVL72 and beyond. On the edge, NVIDIA delivers optimized deployments of the Ministral models on DGX Spark, RTX PCs and laptops, and Jetson devices, giving developers a consistent, high-performance path to run these open models from data center to robot.
We are very thankful for the collaboration and want to thank vLLM, Red Hat, and NVIDIA in particular.
For edge and local use cases, we release the Ministral 3 series, available in three model sizes: 3B, 8B, and 14B parameters. Furthermore, for each model size, we release base, instruct, and reasoning variants to the community, each with image understanding capabilities, all under the Apache 2.0 license. When married with the models’ native multimodal and multilingual capabilities, the Ministral 3 family offers a model for all enterprise or developer needs.
Furthermore, Ministral 3 achieves the best cost-to-performance ratio of any OSS model. In real-world use cases, both the number of generated tokens and model size matter equally. The Ministral instruct models match or exceed the performance of comparable models while often producing an order of magnitude fewer tokens.
For settings where accuracy is the only concern, the Ministral reasoning variants can think longer to produce state-of-the-art accuracy amongst their weight class - for instance 85% on AIME ’25 with our 14B variant.
Mistral 3 is available today on Mistral AI Studio, Amazon Bedrock, Azure Foundry, Hugging Face (Large 3 & Ministral), Modal, IBM WatsonX, OpenRouter, Fireworks, Unsloth AI, and Together AI. In addition, it is coming soon to NVIDIA NIM and AWS SageMaker.
For organizations seeking tailored AI solutions, Mistral AI offers custom model training services to fine-tune or fully adapt our models to your specific needs. Whether optimizing for domain-specific tasks, enhancing performance on proprietary datasets, or deploying models in unique environments, our team collaborates with you to build AI systems that align with your goals. For enterprise-grade deployments, custom training ensures your AI solution delivers maximum impact securely, efficiently, and at scale.
The future of AI is open. Mistral 3 redefines what’s possible with a family of models built for frontier intelligence, multimodal flexibility, and unmatched customization. Whether you’re deploying edge-optimized solutions with Ministral 3 or pushing the boundaries of reasoning with Mistral Large 3, this release puts state-of-the-art AI directly into your hands.
Frontier performance, open access: Achieve closed-source-level results with the transparency and control of open-source models.
Multimodal and multilingual: Build applications that understand text, images, and complex logic across 40+ native languages.
Scalable efficiency: From 3B to 675B parameters, choose the model that fits your needs, from edge devices to enterprise workflows.
Agentic and adaptable: Deploy for coding, creative collaboration, document analysis, or tool-use workflows with precision.
We believe that the future of AI should be built on transparency, accessibility, and collective progress. With this release, we invite the world to explore, build, and innovate with us, unlocking new possibilities in reasoning, efficiency, and real-world applications.
...
Read the original on mistral.ai »
Update: This post received a large amount of attention on Hacker News — see the discussion thread.
Initial Contact: Upon discovering this vulnerability on October 27, 2025, I immediately reached out to Filevine’s security team via email.
November 4, 2025: Filevine’s security team thanked me for the writeup and confirmed they would review the vulnerability and fix it quickly.
November 20, 2025: I followed up to confirm the patch was in place from my end, and informed them of my intention to write a technical blog post.
November 21, 2025: Filevine confirmed the issue was resolved and thanked me for responsibly reporting it.
The Filevine team was responsive, professional, and took the findings seriously throughout the disclosure process. They acknowledged the severity, worked to remediate the issues, allowed responsible disclosure, and maintained clear communication. Following conversations I’ve had with the Filevine team, it is clear that this incident was limited to a single law firm; no other Filevine clients were impacted, this was a non-production instance, and no internal Filevine data was exposed. Filevine was appreciative of my efforts to find and alert them to this issue. This is another great example of how organizations should handle security disclosures.
AI legal-tech companies are exploding in value, and Filevine, now valued at over a billion dollars, is one of the fastest-growing platforms in the space. Law firms feed tools like this enormous amounts of highly confidential information.
Because I’d recently been working with Yale Law School on a related project, I decided to take a closer look at how Filevine handles data security. What I discovered should concern every legal professional using AI systems today.
When I first navigated to the site to see how it worked, it seemed that I needed to be part of a law firm to actually play around with the tooling, or request an official demo. However, I know that companies often have a demo environment that is open, so I used a technique called subdomain enumeration (which I had first heard about in Gal Nagli’s article last year) to see if there was a demo environment. I found something much more interesting instead.
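For readers unfamiliar with the technique, the core of it is just resolving candidate names; a toy sketch with an illustrative wordlist (real enumeration uses large wordlists or certificate-transparency logs):

import socket

WORDLIST = ["demo", "staging", "api", "docs"]  # illustrative only

for sub in WORDLIST:
    host = f"{sub}.filevine.com"
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror:
        pass  # name does not resolve; try the next candidate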
I saw a subdomain called margolis.filevine.com. When I navigated to that site, I was greeted with a loading page that never resolved:
I wanted to see what was actually loading, so I opened Chrome’s developer tools, but saw no Fetch/XHR requests (the requests you often expect to see if a page is loading data). Then, I decided to dig through some of the JavaScript files to see if I could figure out what was supposed to be happening. I saw a snippet in a JS file like POST await fetch(${BOX_SERVICE}/recommend). This piqued my interest — recommend what? And what is the BOX_SERVICE? That variable was not defined in the JS file the fetch would be called from, but (after looking through minified code, which SUCKS to do) I found it in another one: “dxxxxxx9.execute-api.us-west-2.amazonaws.com/prod”. Now I had a new endpoint to test; I just had to figure out the correct payload structure for it. After looking at more minified JS to determine the correct structure for this endpoint, I was able to construct a working payload to /prod/recommend:
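In shape, the request was something like the sketch below; the JSON field name is a stand-in rather than the real one, and the API Gateway ID stays redacted:

import requests

url = "https://dxxxxxx9.execute-api.us-west-2.amazonaws.com/prod/recommend"
# Hypothetical field name; the real structure came from the minified JS.
resp = requests.post(url, json={"projectName": "anything"})
print(resp.status_code)
print(resp.json())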
(the name could be anything of course). No authorization tokens needed, and I was greeted with the response:
At first I didn’t entirely understand the impact of what I saw. No matter the name of the project I passed in, I was recommended the same boxFolders and couldn’t seem to access any files. Then, not realizing I stumbled upon something massive, I turned my attention to the boxToken in the response.
After reading some documentation on the Box API, I realized this was a live, fully scoped admin token with maximum access to the law firm’s current, entire Box filesystem (like an internal shared Google Drive). This includes all confidential files, logs, user information, etc. I was able to prove the impact by searching for “confidential” and getting nearly 100k results back.
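That check needs nothing more than the standard Box search endpoint; a minimal sketch (with the token value, of course, not reproduced):

import requests

token = "<boxToken from the /recommend response>"
resp = requests.get(
    "https://api.box.com/2.0/search",
    params={"query": "confidential"},
    headers={"Authorization": f"Bearer {token}"},
)
# A total_count near 100k meant the token could see the firm's entire drive.
print(resp.json().get("total_count"))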
I immediately stopped testing and responsibly disclosed this to Filevine. They responded quickly and professionally and remediated this issue.
If someone had malicious intent, they would have been able to extract every single file used by Margolis lawyers — countless records protected by HIPAA and other legal standards, internal memos and payrolls, literally millions of the most sensitive documents this law firm has in its possession. Documents protected by court orders! This could have been a real nightmare for both the law firm and the clients whose data would have been exposed.
To companies who feel pressure to rush into the AI craze in their industry — be careful! Always ensure the companies you are giving your most sensitive information to secure that data.
Note: After publishing this article, I was contacted by someone from the law firm Margolis PLLC asking me to confirm that the affected law firm was not theirs. I can confirm it was not.
...
Read the original on alexschapiro.com »