
    The Future of Open Source in the AI Era

    Alex van der Meer • May 6, 2026 • 14 min read

    Open source AI is not dying, and it is not magically winning either. It is being forced to grow up.

    Most debates about open source AI are still trapped in a shallow frame: whether the model is open or closed, who is ahead on a leaderboard this quarter, and whether one more model release "proves" openness is winning.

    That frame misses what is actually changing.

    In software history, open source wins were rarely about one heroic artifact. Linux was not just a kernel story. Kubernetes was not just a container story. Git was not just a version-control binary story. The winning pattern was ecosystem depth: standards, tooling, governance, portability, and a community that could improve the stack faster than any single vendor could fully own it.

    AI is moving into that same pattern right now.

    The future of open source in the AI era will not be decided by whether an individual frontier model is "better" this month. It will be decided by whether open ecosystems can deliver five things at once: reproducibility, interoperability, security, regulatory legibility, and economic leverage for builders who do not want to rent their future from one API provider.

    That is a much harder bar.

    It is also a much more interesting one.


    Open source AI has a definition now, and it raises the bar

    For years, "open source AI" was used as marketing shorthand for "you can download weights." That was always an incomplete definition, and everyone knew it.

    The Open Source Initiative addressed that ambiguity in 2024 by releasing version 1.0 of the Open Source AI Definition (OSAID). It explicitly ties open source AI to the same core freedoms as open source software: use, study, modify, and share. Crucially, it also says those freedoms depend on access to the preferred form for modification, including data information, code, and parameters.[1][2]

    That shift matters because it changes the center of gravity from distribution to accountability.

    A model release that includes only final weights can still be useful. But if teams cannot inspect how data was sourced, how training was executed, how filtering happened, and what constraints were introduced, then the system is hard to audit, hard to replicate, and hard to trust in high-stakes settings.

    This is why the term "open weights" exists as a separate category now. OSI's own framing is blunt: open weights are a step toward transparency, not the end state.[3]

    So one major part of the future is already visible: language is getting stricter. Open-source claims will increasingly be tested against a higher technical and governance standard, not just access to a download page.


    The license wars are a preview of where pressure will land

    If you want to see that pressure in the real world, look at licensing fights around popular model families.

    The Llama ecosystem has had enormous impact on open development, but the Llama 3 Community License still contains meaningful restrictions, including additional terms tied to very large-scale usage and limitations around improving competing large language models.[4] OSI has publicly argued this does not meet open-source criteria.[5]

    You can like Meta's strategy and still acknowledge the structural point: "widely available" and "open source" are not the same legal category.

    That gap is not a semantic nerd fight. It has strategic consequences. When terms are ambiguous, ecosystems fragment around incompatible assumptions. When terms are clear, ecosystems can align around stronger interoperability and governance expectations.

    In the next few years, we should expect more "open-enough" products that are commercially generous but not fully open-source by strict criteria. Those products will remain important. But they will coexist with projects that optimize for stronger community guarantees and neutral governance.

    Both camps will ship value. Only one camp will produce infrastructure people can build on without a single-vendor permission model.

    [Image: Developers reviewing AI licensing and governance documents at a shared table]

    The real battleground is shifting from model releases to open infrastructure

    Most teams are not training frontier models from scratch. They are integrating, serving, routing, evaluating, and governing AI behavior in production.

    That means the leverage layer is moving down and out: inference engines, model-serving stacks, eval pipelines, orchestration frameworks, agent protocols, deployment primitives, and governance rails.

    This is where open source has a serious path to dominance.

    Look at the governance signals:

    • PyTorch moved under Linux Foundation governance and expanded into an umbrella foundation model that explicitly emphasizes vendor-neutral project stewardship.[6][7]
    • vLLM began incubation under LF AI & Data with an explicit statement that no one party should hold exclusive control over its future.[8]
    • In the agent layer, the Agentic AI Foundation launched with cross-vendor contributions around MCP, goose, and AGENTS.md, and OpenAI reported broad adoption of AGENTS.md across open-source repositories and coding-agent frameworks.[9][10]

    You do not need to believe every growth number to see the strategic pattern. Competitors at the model layer are increasingly collaborating at the protocol and infrastructure layer, because they all benefit from shared plumbing that reduces integration friction.
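
    To make that concrete, this is roughly what building on the open serving layer looks like today: a minimal sketch using vLLM's offline Python API, assuming vLLM is installed; the model id and sampling settings are illustrative, not recommendations.

    ```python
    # Minimal sketch: running an open-weights model through vLLM's offline
    # API. Assumes `pip install vllm`; the model id below is illustrative.
    from vllm import LLM, SamplingParams

    prompts = [
        "Summarize the trade-offs between open-weights and open-source models.",
        "List three properties of a trustworthy model supply chain.",
    ]

    # Sampling settings are illustrative defaults, not recommendations.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

    # Any vLLM-compatible open model works here; swapping it is a one-line change.
    llm = LLM(model="facebook/opt-125m")

    for output in llm.generate(prompts, sampling_params):
        print(output.prompt)
        print(output.outputs[0].text)
    ```

    Nothing in this snippet is vendor-specific: the model, the engine, and the caller can each be swapped independently, which is exactly the leverage the governance moves above are protecting.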

    That is exactly how open systems compound.

    In practice, this means the future is less likely to be "one open model wins" and more likely to be "open infrastructure defines the default interface for everyone, including closed-model vendors."

    If that happens, power shifts from whoever owns one model endpoint to whoever can orchestrate quality, reliability, and trust across many interchangeable components.


    Regulation will reward transparent builders, not vague openness claims

    A second force is tightening the market from outside: regulation.

    The EU AI Act created a nuanced reality many teams still misunderstand. The regulation text itself includes a carve-out for some AI systems released under free and open-source licenses, with explicit exceptions.[11] That is not a blanket pass, and the Commission's implementation guidance reinforces this point by explaining that obligations vary by model role and risk profile, including extra duties for systemic-risk models.[12]

    The enforcement clock is not theoretical either. The Commission states that obligations for providers of general-purpose AI models started applying on 2 August 2025, and says full compliance enforcement, including fines, begins from 2 August 2026.[12]

    The signal here is bigger than one region. Governments are moving from "AI is new" to "show your work."

    For open-source projects, this is both risk and opportunity. Projects with sloppy provenance and weak governance will struggle. Projects that can document data origins, model lineage, safety policies, and operational controls will become easier for enterprises and public-sector buyers to adopt.

    In other words, policy pressure may end up filtering open-source AI toward higher quality, not lower relevance.

    [Image: A compliance and engineering team reviewing model documentation on laptops]

    Open ecosystems now have to win the trust battle, not just the access battle

    Access used to be the argument for open source AI.

    Now trust is the argument.

    Recent supply-chain incidents in the broader AI tooling ecosystem show why. Datadog Security Labs documented a March 2026 campaign involving compromises of legitimate package releases, including LiteLLM and Telnyx on PyPI, with credential theft and persistence behavior that forced defenders to treat affected environments as full compromise events.[13]

    This is not a "gotcha" against open source. Closed ecosystems have severe incidents too.

    But open ecosystems carry a specific scaling challenge: when adoption is easy, the blast radius of a compromise grows just as easily.

    That pushes the next phase of open-source AI toward security maturity: signed artifacts, better provenance, dependency hardening, reproducible builds, safer defaults, and shared incident-response playbooks.
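
    At the smallest scale, "boring and verifiable" looks something like the check below: refusing to use an artifact whose digest does not match a pinned value. This is a minimal Python sketch; the file name and digest are placeholders, and real pipelines would add signature verification (for example via Sigstore) and hash-pinned dependency resolution on top.

    ```python
    # Minimal sketch: refuse to use a downloaded artifact unless its SHA-256
    # digest matches a pinned value. Names and digests are placeholders.
    import hashlib
    import sys
    from pathlib import Path

    PINNED_DIGESTS = {
        # artifact file name -> expected hex digest (hypothetical value)
        "model-weights.safetensors": "0" * 64,
    }

    def verify(path: Path) -> bool:
        """Stream the file through SHA-256 and compare against the pin."""
        expected = PINNED_DIGESTS.get(path.name)
        if expected is None:
            return False  # unknown artifacts are rejected, not trusted by default
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected

    if __name__ == "__main__":
        artifact = Path(sys.argv[1])
        if not verify(artifact):
            sys.exit(f"refusing unverified artifact: {artifact}")
        print(f"verified: {artifact}")
    ```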

    The projects that make security boring and verifiable will earn enterprise trust. The projects that rely on goodwill and velocity alone will get filtered out.

    That is healthy.

    The future of open source AI should not be "open at any cost." It should be "open and dependable enough to run the systems people actually depend on."


    Productivity data is telling us something deeper than "AI fast" or "AI slow"

    There is another reason shallow narratives are collapsing: real-world productivity data is more complex than vendor demos.

    METR's 2025 randomized study on experienced open-source developers found a 19% slowdown in that specific setting with early-2025 tools.[14] METR's 2026 update then reported early signs of improved speedups with newer tools, while explicitly warning that selection effects now make measurement harder because many developers no longer want to work without AI assistance at all.[15]

    Two things can be true at once: early tools slowed experts in some high-context repositories, and later tools are improving quickly while usage norms shift.

    The deeper insight is not about one number. It is about where performance actually comes from.

    AI uplift is increasingly system-dependent. Context quality, task clarity, repo hygiene, evaluation loops, and integration boundaries often matter more than raw model capability. That is exactly where open-source ecosystems can excel, because they let teams inspect and optimize the full pipeline rather than accept opaque defaults.
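
    To see what an evaluation loop looks like at its smallest, here is a toy regression harness. It is entirely hypothetical scaffolding rather than any framework's API: it pins expected behavior so that model or prompt changes fail loudly instead of silently.

    ```python
    # Toy evaluation loop: pin expected behaviors and score any generate()
    # function against them. All names here are hypothetical scaffolding.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class EvalCase:
        prompt: str
        must_contain: str  # crude pass criterion; real evals use richer checks

    CASES = [
        EvalCase("What license is the Linux kernel under?", "GPL"),
        EvalCase("Expand the acronym MCP in the agent context.", "Model Context Protocol"),
    ]

    def run_evals(generate: Callable[[str], str]) -> float:
        """Return the pass rate of `generate` over the pinned cases."""
        passed = 0
        for case in CASES:
            ok = case.must_contain.lower() in generate(case.prompt).lower()
            passed += ok
            print(f"{'PASS' if ok else 'FAIL'}: {case.prompt!r}")
        return passed / len(CASES)

    if __name__ == "__main__":
        # Stub stands in for a real model client; swap in any backend.
        stub = lambda p: "The kernel is GPL; MCP is the Model Context Protocol."
        print(f"pass rate: {run_evals(stub):.0%}")
    ```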

    The winners in this decade will not be the teams with the most impressive demo prompts. They will be the teams with the most reliable open workflows from planning to deployment to feedback.

    [Image: An engineering team comparing delivery metrics and AI-assisted workflow traces on a shared screen]

    The business model of open source AI is becoming clearer

    A lot of people still ask the wrong question: "How can open source compete with closed labs that have bigger training budgets?"

    Sometimes it cannot, at least at the very frontier.

    That is not the whole market.

    Most value creation sits in application-level outcomes, integration depth, operational reliability, and domain adaptation. Open-source AI can win those layers by lowering switching costs and increasing control over stack economics.

    The model that is emerging looks familiar if you have watched infrastructure history. Open standards and shared primitives keep the base layer portable, while commercial differentiation shifts into deployment quality, reliability, enterprise support, and vertical execution. In that world, the moat moves from "exclusive model access" to "execution quality on top of open capabilities."

    That is a harder moat to fake and a better moat for customers.

    For founders and engineering leaders, this translates into a practical strategy: treat open source AI as strategic leverage, not ideology. Use it to avoid single-vendor lock-in, to preserve negotiating power, and to build product intelligence where your users actually feel it.


    The three hard problems open source still has to solve

    None of this means open source AI is on autopilot. There are at least three hard constraints that will decide whether this ecosystem compounds or stalls.

    The first is compute concentration. Training and serving costs are still brutal at the frontier, and capital access remains uneven. Open communities can reduce this pressure through shared optimization work, model distillation, better serving stacks, and narrower domain models, but they cannot pretend physics and infrastructure economics do not exist. A realistic open strategy is not "beat every closed frontier run on day one." It is "build a stack where useful capability diffuses quickly and can be deployed efficiently by many actors."

    The second is legal and data provenance discipline. The era of vague dataset stories is ending. If open projects want durable enterprise adoption, they need clearer lineage around training inputs, licensing assumptions, and downstream usage constraints. That does not mean every project can publish every raw sample. It does mean maintainers need credible transparency artifacts that survive legal review and regulatory scrutiny; a minimal sketch of such an artifact follows below.

    The third is maintenance economics. Open source has always depended on invisible labor, but AI infrastructure magnifies that burden because models, evals, serving runtimes, and security concerns evolve fast. If maintainers burn out, ecosystems fragment and trust collapses. The projects that last will be the ones that pair technical openness with sustainable operating models: foundation support, commercial backing that does not capture governance, and contributor pathways that reward long-term stewardship.

    These are not side issues. They are the core execution challenges that determine whether "open" becomes foundational infrastructure or just another launch strategy.
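
    On the provenance point specifically, here is a minimal sketch of what a credible, machine-readable transparency artifact could contain. The field names and values are illustrative assumptions, not a standard schema; real projects would align with whatever their foundation, customers, or regulators expect.

    ```python
    # Minimal sketch of a machine-readable provenance record shipped with a
    # model release. Field names and values are illustrative, not a schema.
    from dataclasses import asdict, dataclass, field
    import json

    @dataclass
    class ProvenanceRecord:
        model_name: str
        version: str
        license: str
        base_model: str | None                      # lineage: derived from what?
        data_sources: list[str] = field(default_factory=list)
        known_exclusions: list[str] = field(default_factory=list)
        training_code_url: str | None = None
        usage_constraints: list[str] = field(default_factory=list)

    record = ProvenanceRecord(
        model_name="example-model",                 # hypothetical release
        version="1.0.0",
        license="Apache-2.0",
        base_model=None,
        data_sources=["curated public corpus, documented per release"],
        known_exclusions=["opted-out domains", "detected PII spans"],
        training_code_url="https://example.com/training-pipeline",
        usage_constraints=["no high-risk deployment without human review"],
    )

    # Publish alongside the weights so auditors can diff it across releases.
    print(json.dumps(asdict(record), indent=2))
    ```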


    What open source AI will look like by 2030

    If current trajectories hold, by 2030 we are likely to see open-source model families continuing to narrow practical gaps for many real workloads, even if the absolute frontier remains expensive. We are also likely to treat agent interoperability standards as table stakes, the same way API compatibility became table stakes in earlier software waves.

    At the same time, regulatory reporting, provenance metadata, and lifecycle documentation will likely become native parts of model distribution instead of afterthought PDFs. Security posture will become a product feature in its own right for open-source projects, with stronger supply-chain verification and faster coordinated response built into project expectations.

    Most importantly, the center of innovation should keep shifting from "who has the biggest model" to "who can compose trustworthy AI systems fastest."

    That future is not guaranteed. It depends on whether open communities keep doing the unglamorous work: governance, maintenance, documentation, security, and standards.

    But if they do, open source will shape the AI era the same way it shaped the cloud era: not by owning every endpoint, but by defining the substrate everyone else builds on.


    The practical move right now

    If you are building with AI today, the smartest near-term move is not to pledge loyalty to one camp.

    It is to design for optionality.

    Use open components where they give you leverage. Keep architecture portable. Track provenance. Treat security and evaluation as first-class engineering work. Standardize how humans and agents share task context. Make your system legible enough that you can swap tools without rewriting your company.
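
    In code, "design for optionality" mostly means keeping a thin seam between your product and any one provider, so swapping backends is a config change rather than a rewrite. A hypothetical sketch, with class and function names that are ours rather than any library's:

    ```python
    # Minimal sketch of a provider-agnostic completion seam. The point is
    # the interface, not the stubs; every name here is hypothetical.
    from typing import Protocol

    class CompletionBackend(Protocol):
        def complete(self, prompt: str) -> str: ...

    class OpenWeightsBackend:
        """Would wrap a self-hosted engine, e.g. a vLLM server."""
        def complete(self, prompt: str) -> str:
            return f"[open-weights completion for: {prompt}]"

    class HostedAPIBackend:
        """Would wrap a closed vendor API behind the same seam."""
        def complete(self, prompt: str) -> str:
            return f"[hosted-API completion for: {prompt}]"

    BACKENDS: dict[str, CompletionBackend] = {
        "open": OpenWeightsBackend(),
        "hosted": HostedAPIBackend(),
    }

    def answer(prompt: str, backend: str = "open") -> str:
        # Swapping providers is a config change, not a refactor.
        return BACKENDS[backend].complete(prompt)

    if __name__ == "__main__":
        print(answer("Draft a migration checklist.", backend="open"))
        print(answer("Draft a migration checklist.", backend="hosted"))
    ```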

    That is how you stay fast without becoming fragile.

    At One Horizon, this is exactly the problem we care about: giving teams a clearer chain from intent to execution in AI-native workflows, so humans and agents can collaborate in systems that remain auditable, improvable, and grounded in real delivery.

    The future of open source in AI will not be decided by slogans.

    It will be decided by whether we can make open systems the most reliable way to build software that survives production.

    That is a future worth building.


    Footnotes

    1. Open Source Initiative. "The Open Source AI Definition - 1.0." https://opensource.org/ai/open-source-ai-definition

    2. Open Source Initiative. "The Open Source Initiative Announces the Release of the Industry's First Open Source AI Definition." October 28, 2024. https://opensource.org/blog/the-open-source-initiative-announces-the-release-of-the-industrys-first-open-source-ai-definition

    3. Open Source Initiative. "Open Weights: not quite what you've been told." https://opensource.org/ai/open-weights

    4. Meta. "Meta Llama 3 Community License Agreement." https://github.com/meta-llama/llama3/blob/main/LICENSE

    5. Open Source Initiative. "Meta's LLaMa license is still not Open Source." https://opensource.org/blog/metas-llama-license-is-still-not-open-source

    6. Linux Foundation. "Meta Transitions PyTorch to the Linux Foundation, Further Accelerating AI/ML Open Source Collaboration." September 12, 2022. https://www.linuxfoundation.org/press/press-release/meta-transitions-pytorch-to-the-linux-foundation

    7. PyTorch Foundation. "PyTorch Foundation Expands to an Umbrella Foundation to Accelerate AI Innovation." April 29, 2025. https://pytorch.org/blog/pt-foundation-expands/

    8. vLLM. "vLLM's Open Governance and Performance Roadmap." July 25, 2024. https://vllm.ai/blog/lfai-perf

    9. OpenAI. "OpenAI co-founds the Agentic AI Foundation under the Linux Foundation." December 2025. https://openai.com/index/agentic-ai-foundation/

    10. Agentic AI Foundation. "Linux Foundation Announces the Formation of the Agentic AI Foundation (AAIF), Anchored by New Project Contributions Including Model Context Protocol (MCP), goose and AGENTS.md." December 9, 2025. https://aaif.io/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation-aaif-anchored-by-new-project-contributions-including-model-context-protocol-mcp-goose-and-agents-md/

    11. European Union. "Regulation (EU) 2024/1689 (AI Act)." See Article 2(12) carve-out language for some free and open-source licensed AI systems and its exceptions. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/fre

    12. European Commission. "Guidelines on obligations for General-Purpose AI providers." https://digital-strategy.ec.europa.eu/en/faqs/guidelines-obligations-general-purpose-ai-providers

    13. Datadog Security Labs. "LiteLLM and Telnyx compromised on PyPI: Tracing the TeamPCP supply chain campaign." March 24, 2026. https://securitylabs.datadoghq.com/articles/litellm-compromised-pypi-teampcp-supply-chain-campaign/

    14. METR. "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." July 10, 2025. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

    15. METR. "We are Changing our Developer Productivity Experiment Design." February 24, 2026. https://metr.org/blog/2026-02-24-uplift-update/



