
Scaling Open Source Security with AI

Open source software underpins much of today’s digital infrastructure, but securing it remains a difficult, constantly evolving challenge. As announced today, Alpha-Omega and the OpenSSF will manage a new $12.5 million collective investment from Anthropic, Amazon Web Services (AWS), Google, Google DeepMind, GitHub, Microsoft, and OpenAI to help strengthen the security, resilience, and long-term sustainability of the open source ecosystem.

This investment builds on a core idea that has guided Alpha-Omega from the beginning: open source security should be normal, practical, and achievable. Some might even say "boring." Over time, we've seen that targeted financial support can catalyze sustained improvement. Last year we disbursed over $7M, including over $3.5M to fund security engineers in major open source foundations, $1.25M for over 60 focused security audits, and $1.75M to fortify open source package registries. We also held dozens of meetings and workshops to align with our peers on a shared vision for the future.

The landscape is changing quickly. As AI accelerates both software development and the discovery of vulnerabilities, the scale and pace of security work are changing with it. That creates new pressure for maintainers large and small, but it also creates new opportunities to improve how security work gets done.

This new funding will help us respond with strategies tailored to the needs of open source maintainers and their communities. Our first objective is straightforward: help critical open source projects and ecosystems use AI capabilities to identify and fix exploitable flaws fast, especially the kinds of vulnerabilities that could have broad downstream impact. This means working directly with foundations and maintainers to bring advanced, AI-driven security capabilities into their unique development workflows, not bolting them on from the outside. It also means focusing on practical adoption: ensuring the tools are effective and ergonomic, minimizing false positives and AI slop, and keeping the operational burden low for the people maintaining critical software.

Our second objective is about access. We want to help at least 10,000 critical open source projects apply advanced AI security capabilities before those same capabilities become widely available to threat actors. In practice, this will mean building trusted paths for access, partnering closely with maintainers and communities, and creating mechanisms that can scale beyond one-off engagements.

Our third objective is broader ecosystem awareness and readiness. We want at least 100,000 open source maintainers to be aware of, and able to effectively adopt, AI solutions that help them proactively identify and fix serious vulnerabilities in their projects. This will require more than simply shipping tools. It will require broad community engagement, workshops, conference talks, and free, high-quality guidance that helps maintainers understand where AI can genuinely improve security work without undue burden.

The real test, as always, will be execution. Success here will not be measured by how much AI we introduce into open source, but by whether maintainers can use it to reduce risk, remediate serious vulnerabilities faster, and strengthen the software supply chain over the long term.

We’re grateful to our funding partners for their commitment to this work, and we look forward to continuing it alongside the maintainers and communities that power the world’s digital systems.

To learn more about how this funding will support our ongoing initiatives and grant programs, visit alpha-omega.dev.