
    AI Code Generation Is Not the Product

Alex van der Meer • April 16, 2026 • 8 min read

    AI is making implementation cheaper. Good. The scarce part is still product judgment. We do not need more prompt-generated software. We need better apps and websites.

    Everyone keeps turning AI and software into the same shallow argument: can the model write the app, should every product become "agentic," will prompts replace teams?

    That is the cheapest layer of the problem.

    Code matters. Of course it does. But the reason people love a product is almost never that the code arrived quickly. They love it because it solves the right problem, removes friction, feels trustworthy, and gets a pile of small decisions right.

    That is product work.

    Not code generation.


    We already had builders before LLMs

    The current hype makes it sound as if software only became buildable once chat interfaces showed up.

    Come on.

Wix started in 2006 and launched Wix ADI in 2016.[1] Bubble started in 2012 because its founders wanted startup creation to cost less engineering effort.[2] Webflow was founded in 2013.[3]

    We have had website builders, app builders, low-code tools, no-code tools, scaffolding tools, templates, themes, plugins, and code generators for a long time.

    Generative AI did not invent the dream of "describe what you want and watch software appear."

    It just lowered the friction again.

    That matters. Lowering friction is useful. It means more people can get to a prototype, a workflow, or a first version without drowning in setup.

    But if cheaper implementation was the whole game, the internet would already be full of nothing but excellent products.

    It is not.

    Because the hard part was never only the code.

[Image: A person sketching a website or app wireframe on paper beside a keyboard]

    "Agentic" is a mode, not a product strategy

    There is another version of the same mistake floating around now: the idea that every app should be rebuilt for agents because agents will be using software on our behalf.

    Some products will absolutely need agentic flows. Internal tooling, repetitive back-office actions, research loops, support operations, and system-to-system work are obvious examples.

    But turning that into a universal product law is lazy.

    People still live in mobile apps and websites.

    They still compare interfaces. They still hesitate at unclear copy. They still abandon slow onboarding. They still remember whether a product felt calm, obvious, and trustworthy or whether it felt like one more messy admin panel pretending to be software.

    An agent may be able to click buttons for me.

    That does not make the buttons irrelevant.

    If anything, it raises the bar. The moment an agent takes action on my behalf, I need an even clearer place to inspect what happened, override a bad call, understand the state of the system, and trust that the product is not making a mess quietly in the background.

    If your onboarding is confusing, your settings are a maze, your notification model is obnoxious, or your mobile experience feels like a web dashboard squeezed into a phone, you do not have an "agentic future" problem.

    You have a product problem.

    The future is not a binary choice between human UI and agent UX.

    Good products will likely have both.

    Maybe an agent handles setup, triage, or repetitive actions in the background. The human still needs an experience they can understand, trust, override, and maybe even enjoy using.

    Especially on the web and on mobile, where the experience still has to earn its place in somebody's day.


    Design is where products actually win

    Design is not "make it pretty after the code works."

    Design is deciding what deserves to exist in the first place.

    It is choosing the right abstraction. Cutting the extra screen. Naming the action clearly. Setting the default. Handling the failure state. Respecting the platform. Making the thing feel obvious five seconds sooner.

    It is understanding the problem well enough that the product feels inevitable once somebody touches it.

    That is why "not everyone can build a great product with prompts" is not gatekeeping.

    It is just reality.

    Prompts can generate a UI, a CRUD app, and five variations of the same landing page. They do not automatically know which workflow matters, which step creates anxiety, which detail creates trust, or which feature should be killed before it ships.

    Prompts can get you to a first screen. They cannot tell you that the first screen should not exist.

    They do not give you taste.

    They do not give you restraint.

    They do not give you product judgment.

    And product judgment is still where the difference between another app and a great product lives.

[Image: A laptop and tablet side by side on a table, representing web and mobile product surfaces]

    AI changes the floor, not the finish line

    This is the useful framing.

    AI lowers the cost of first drafts. It helps with scaffolding, implementation, copy variations, code cleanup, and broader exploration.

    That is real leverage.

    But the finish line did not move.

    The best products still need sharp specs, good research, real feedback loops, and people who know when something is technically complete but experientially wrong.

DORA's 2025 AI Capabilities work makes this point clearly: AI behaves like an amplifier, and the return depends more on the system underneath than on the tool itself.[4] METR found the same pattern from a different angle: in its study, experienced open-source developers working with early-2025 AI tools were slower overall, not faster.[5]

    Different context, same lesson.

    If you already know what you are building and why, AI can help you move faster.

    If you do not, AI helps you produce confident-looking waste at scale.

    That is why I care a lot more about problem clarity, spec quality, and feedback loops than about how many lines the model wrote for you.


    We do not need more software. We need better software

    The market is not starving for more dashboards, more cloned SaaS, more generic websites, or more apps that look finished in screenshots and fall apart in daily use.

    We already have plenty of surface area.

    What is scarce is software with taste.

    Software with clarity.

    Software that solves an actual problem and does it in a way that feels thoughtful, fast, and a little delightful.

    That is why I am not very interested in the question "can AI write the app?"

    Mostly, yes.

    The more important question is whether anyone involved understands the problem well enough to make the app worth opening tomorrow.

    That sits upstream of code generation.

    It sits upstream of agentic workflows.

    It sits upstream of whatever the next model can autocomplete.

    So no, I do not think every product now needs to become "agentic."

    And no, I do not think prompting alone suddenly turns everyone into a product designer.

    AI is making software easier to produce.

    Good.

    If implementation keeps getting cheaper, the saved time should go where products actually win: understanding the user, cutting scope, fixing the rough edges, sharpening the flow, and making the experience feel considered instead of merely generated.

    Now the bar moves back to where it always should have been: better problems, better decisions, better apps, better websites.

    That is the signal I care about.


    Care has to be part of the product

    This is part of why I care about One Horizon.

    Too much software right now is optimized to look smart before it earns trust. The screenshot looks good. The demo lands. Then somebody has to live with the mess.

    I want the opposite. Software built with care. Care for the human being asked to trust it. Care for the agent being allowed to act inside it. Care for the ordinary Tuesday after launch, when the gloss is gone and the work is real.

    That means structure where agents need structure, and clarity where humans need clarity. Less noise. More truth about what is happening.

    That is what we are trying to build with One Horizon.

    See One Horizon


    Footnotes

1. Wix. "About us." Wix says the company was founded in 2006 and lists Wix ADI as a 2016 milestone. https://www.wix.com/about/us

2. Bubble. "Bubble raises $6M to become the platform for building startups." Bubble says, "We started Bubble in 2012" because starting companies was too costly in engineering resources. https://bubble.io/blog/bubble-raises-seed-round/

3. Webflow. "About Webflow." Webflow lists 2013 as its founding year. https://webflow.com/company/about

4. Google Cloud / DORA. "Unlocking AI's full potential: 2025 DORA AI Capabilities Model report." The page says AI is an amplifier and that success depends more on culture and capabilities than on the tools themselves. https://cloud.google.com/resources/content/2025-dora-ai-capabilities-model-report

5. METR. "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." Published July 10, 2025. METR reports experienced developers in its study were 19% slower overall with early-2025 AI tools in that setting. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/



