
    AI Is Rewiring Every Stage of the SDLC

    Alex van der Meer•April 14, 2026•9 min read

    Most teams still talk about AI as if it lives inside the editor.

    That was true for a moment. It is not true anymore.

    The software development lifecycle is starting to compress from end to end. AI can help turn rough ideas into scoped issues, working code into reviewable pull requests, noisy production failures into likely root causes, and scattered activity into readable recaps. The code generation part gets most of the attention, but that is only one link in the chain.

    That does not mean the SDLC disappears. It means every handoff gets faster, and every weak handoff gets more expensive.

    Google Cloud's 2025 DORA AI Capabilities report says almost 90% of surveyed organizations are already using AI in the software development lifecycle, but it also makes a more important point: the best returns come from the underlying system, not from the model alone.[1] That matches what engineering teams are running into in practice. Early-2025 AI tools even slowed experienced open-source developers in one METR study, while METR's February 24, 2026 update says later agentic tools likely sped some of that work up, with huge variation by task and workflow.[2][3]

    That is the frame worth using.

    AI is not replacing the lifecycle. It is exposing which parts of your lifecycle were vague, manual, or disconnected all along.


    Planning is becoming executable

    The first meaningful change shows up before a single line of code gets written.

    Planning used to mean somebody translated a conversation into a ticket, then somebody else translated that ticket into a spec, and then a third person translated that spec into code. Every translation step leaked intent. The result was predictable: fuzzy requirements, shallow acceptance criteria, and a pull request that technically matched the ticket while missing the actual problem.

    That translation tax is getting smaller.

    GitHub now documents workflows where Copilot turns a product idea into an issue tree with epics, features, and tasks, or drafts structured issues directly from natural language and screenshots.[4][5] That is useful, not because AI suddenly understands your product better than your team, but because it is very good at converting raw intent into a first draft that humans can sharpen quickly.

    The win at this stage is speed with structure. You can move from "we should probably build this" to a scoped backlog faster than before. You can pressure-test acceptance criteria earlier. You can ask for missing edge cases before implementation starts.

    But this is also where bad teams get themselves in trouble.

    If the brief is weak, AI just gives you a faster weak brief. If nobody owns the outcome, you end up with beautifully formatted ambiguity. Planning improves when AI helps teams make intent explicit. It gets worse when AI becomes a machine for producing official-looking sludge.
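    One way to keep AI-drafted issues from becoming beautifully formatted ambiguity is to lint them before they enter the backlog. A minimal sketch of that idea; the field names and rules here are invented for illustration, not any tracker's real schema:

    ```python
    # Lint an AI-drafted issue before it enters the backlog.
    # Field names and rules are illustrative, not a real tracker schema.

    REQUIRED_FIELDS = ("title", "problem", "acceptance_criteria")

    def lint_issue(draft: dict) -> list:
        """Return a list of reasons this draft is not ready for the backlog."""
        errors = []
        for field in REQUIRED_FIELDS:
            if not draft.get(field):
                errors.append(f"missing {field}")
        criteria = draft.get("acceptance_criteria") or []
        if criteria and len(criteria) < 2:
            errors.append("fewer than two acceptance criteria")
        # Titles built around filler verbs usually hide an unscoped outcome.
        vague = {"improve", "optimize", "better", "etc"}
        if any(word in draft.get("title", "").lower().split() for word in vague):
            errors.append("vague title")
        return errors

    draft = {"title": "Improve onboarding", "problem": "", "acceptance_criteria": []}
    print(lint_issue(draft))
    # → ['missing problem', 'missing acceptance_criteria', 'vague title']
    ```

    The point is not the specific rules. It is that a fast, mechanical gate forces the human sharpening step to happen before the draft looks official.
    
    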

    [Image: AI-assisted engineering workflow moving from planning into implementation]

    Design is getting cheaper to iterate

    The next shift happens in the part of the lifecycle that usually burns the most senior attention.

    Architecture used to be expensive to explore. If you wanted to compare approaches, think through edge cases, or challenge your own assumptions, you either burned a lot of whiteboard time or you committed code just to see where it broke.

    AI makes early design loops cheaper.

    You can ask for multiple implementation strategies, compare tradeoffs, surface obvious failure modes, and turn a hand-wavy idea into something technical enough to argue about. That does not remove the need for senior judgment. It just means the first draft of the thinking no longer has to start from a blank page.

    This is where design starts behaving less like a phase gate and more like a live conversation with implementation. The faster you can make tradeoffs concrete, the faster you can decide what is actually worth building.


    Implementation finally looks agentic

    This is the stage everyone notices first, and for good reason.

    Modern coding agents are no longer limited to autocomplete. OpenAI describes Codex as a software engineering agent that can work on many tasks in parallel, write features, answer questions about a codebase, fix bugs, propose pull requests, and iteratively run tests until they pass.[6] That changes the shape of implementation work.

    The boring parts go away sooner. Boilerplate, scaffolding, test setup, repetitive refactors, migration scripts, and obvious edge-case coverage all get cheaper.

    The definition of "ready to start" changes too. When a task has strong context, clear constraints, and a real done state, the gap between idea and first implementation has almost collapsed.
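    The "iterate until tests pass" loop that OpenAI describes is conceptually simple. A sketch, with every dependency injected as a stand-in: in a real agent, run_tests would shell out to a test runner, propose_patch would be a model call, and apply_patch would edit the worktree:

    ```python
    def agent_loop(run_tests, propose_patch, apply_patch, max_iterations=5):
        """Run the propose -> apply -> test loop until green or out of budget.

        run_tests() -> (passed: bool, output: str)   # e.g. shells out to pytest
        propose_patch(failure_output) -> patch       # e.g. a model call
        apply_patch(patch) -> None                   # e.g. edits the worktree
        """
        for _ in range(max_iterations):
            passed, output = run_tests()
            if passed:
                return True
            # Feed the failing output back to the model for the next attempt.
            apply_patch(propose_patch(output))
        # Budget exhausted: report the final state honestly.
        return run_tests()[0]
    ```

    The budget cap matters as much as the loop. An agent that retries forever on an underspecified task just burns compute on a brief that was never ready.
    
    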

    That does not mean every task is faster. METR's July 10, 2025 study is still a useful warning sign here. In a realistic open-source setting with style, testing, and documentation expectations, experienced developers took 19% longer when they used early-2025 AI tools.[2] The February 24, 2026 follow-up suggests later agentic tooling likely accelerated work more often, but even that update is explicit that the true speedup depends heavily on what kinds of tasks get selected and how people actually use the tools.[3]

    That nuance matters.

    AI is strongest when the task is well-bounded and the local context is rich. It is weakest when the work is politically messy, underspecified, or full of hidden rules that live in somebody's head.

    So yes, implementation is faster now. But mostly for teams that learned how to make work legible.


    Review and testing are moving left

    The most underrated shift in the SDLC is not code generation. It is review compression.

    GitHub now lets teams configure Copilot to review pull requests automatically, including draft pull requests and new pushes, specifically to catch problems before a human reviewer steps in.[7] That changes the economics of review.

    Instead of waiting for a senior engineer to spot naming issues, shallow test coverage, or obvious regression risks, the first pass can happen immediately. PR summaries get easier to generate. Diff context gets clearer. Smaller fixes get suggested earlier. That is not a replacement for human judgment, but it is a meaningful reduction in reviewer drag.

    The same thing is happening in testing.

    AI is useful at generating test cases, filling in edge conditions, and turning a spec or a bug report into a candidate regression suite. It is especially good at turning "we should probably never break this again" into an actual test file while the context is still fresh. That moves quality work closer to the moment the code is written.
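    Concretely, turning a bug report into a regression test while the context is fresh might look like this. The function, the bug, and the test names are all invented for illustration:

    ```python
    # Hypothetical fix drafted from a bug report: "Discount codes with
    # trailing whitespace were rejected at checkout." All names invented.
    def normalize_code(code: str) -> str:
        """Normalize a discount code before lookup: trim and uppercase."""
        return code.strip().upper()

    # The regression suite pins the reported case and its near neighbors,
    # so "we should never break this again" becomes an executable check.
    def test_trailing_whitespace_is_accepted():
        assert normalize_code("save10 ") == "SAVE10"

    def test_leading_whitespace_and_mixed_case():
        assert normalize_code(" Save10") == "SAVE10"
    ```
    
    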

    The failure mode is obvious.

    If AI increases output volume without shrinking batch size, review gets worse. You do not want an AI assistant that helps engineers produce 2,000-line pull requests twice as fast. You want an AI-assisted workflow that helps them ship smaller changes with tighter context.

    That is the real bar.

    Review is better when AI reduces ambiguity before human attention gets involved. It is worse when AI just floods the queue.


    Release and operations are becoming part of the same conversation

    The old SDLC diagram treated operations like the last box.

    Ship the code. Monitor it. Fix problems later.

    That boundary is breaking down.

    Sentry's Seer docs describe an AI debugging flow that uses issue context, tracing data, logs, profiles, and repository access to root-cause an issue, identify a solution, generate a code patch, and even open a pull request.[8] Whether you use Seer or something else, the pattern matters: production telemetry is starting to feed directly back into the development loop instead of stopping at an alert.

    That changes release work too.

    As releases get smaller and more frequent, AI becomes useful for summarizing what changed, connecting an incident to the exact task or pull request that introduced it, and turning raw operational noise into a usable narrative. Postmortems become easier to draft. Change logs become easier to generate. The time between "something is wrong" and "we know where to look" keeps shrinking.
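    Connecting an incident to the change that introduced it is, at bottom, a join over release metadata. A toy version of that join; the data shapes are assumptions, not any real platform's API:

    ```python
    def prs_in_window(releases, last_good, first_bad):
        """Return the suspect set for an incident: every PR that shipped
        after the last known-good release, up to and including the first
        bad one. `releases` is an ordered list of (tag, pr_numbers)."""
        tags = [tag for tag, _ in releases]
        lo, hi = tags.index(last_good), tags.index(first_bad)
        suspects = []
        for _, prs in releases[lo + 1 : hi + 1]:
            suspects.extend(prs)
        return suspects

    releases = [("v1.2", [101, 102]), ("v1.3", [103]), ("v1.4", [104, 105])]
    print(prs_in_window(releases, last_good="v1.2", first_bad="v1.4"))
    # → [103, 104, 105]
    ```

    Smaller, more frequent releases shrink that suspect set, which is exactly why release size and incident triage speed are connected.
    
    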

    [Image: Engineering teams using AI support across release coordination and incident follow-up]

    The risk is the same as everywhere else in the lifecycle: speed without continuity.

    A root-cause summary is only useful if it connects back to the work that caused it. A generated fix is only useful if someone can see which product decision, task, or release context it belongs to. Otherwise operations gets faster, but the organization stays blind.


    The real bottleneck now is lifecycle context

    This is the part most AI discussions still miss.

    Once planning gets faster, coding gets faster, review gets earlier, and operations gets more automated, the constraint stops being any single stage. The constraint becomes continuity across stages.

    Can the team see how a product decision became a task? Can they see how that task became a branch, a pull request, a release, and maybe an incident? Can they see what the AI actually changed, why it changed it, and whether the outcome matched the original intent?

    If the answer is no, then you do not have an AI-native SDLC.

    You have faster fragments.
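    Those questions are really one traceability query. A sketch of the minimal record that makes the query answerable; the field names are assumptions for illustration, not One Horizon's schema:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class WorkItem:
        """One unit of work with its lifecycle trail attached.
        Field names are illustrative, not a real product schema."""
        task_id: str
        decision: str                       # product decision that justified it
        branch: str = ""
        pull_requests: list = field(default_factory=list)
        release: str = ""

    def trace(item: WorkItem) -> str:
        """Answer 'how did this decision become shipped work?' in one line."""
        hops = [item.decision, item.task_id, item.branch or "?",
                *item.pull_requests, item.release or "unreleased"]
        return " -> ".join(hops)

    item = WorkItem("TASK-42", "faster checkout", "feat/checkout",
                    ["PR#310"], "v2.1.0")
    print(trace(item))
    # → faster checkout -> TASK-42 -> feat/checkout -> PR#310 -> v2.1.0
    ```

    Nothing in that record is exotic. What makes it rare is that the links survive across tools instead of dying at each handoff.
    
    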

    That is why the next layer matters more than another code assistant. Teams need the connective tissue between roadmap, tasks, issues, commits, pull requests, releases, and the operational signals that come after release. Not because process is fashionable. Because AI increases throughput, and throughput without context turns into confusion very quickly.

    That is the hook to One Horizon.

    We are building for the part of the SDLC that most tooling still treats as somebody else's problem: the chain of intent from plan to shipped work. The more your team uses AI to plan, build, review, and respond, the more valuable that shared record becomes. Not another status-update ritual. Not another disconnected tracker. A system where humans and agents work from the same task context and where shipped work stays tied to the roadmap that justified it.

    The SDLC is not getting replaced.

    It is getting tighter, faster, and less forgiving of broken handoffs.

    AI can improve every step of that loop. But the teams that really benefit will be the ones that stop thinking in isolated tools and start thinking in connected lifecycle context.

    If that is the problem you are running into, take a look at One Horizon. That is the layer we think the AI era actually needs.


    Footnotes

    1. Google Cloud. "Unlocking AI's full potential: 2025 DORA AI Capabilities Model report." The landing page says the report draws on nearly 5,000 technology professionals, that almost 90% are using AI, and that the greatest returns come from foundational systems and capabilities. https://cloud.google.com/resources/content/2025-dora-ai-capabilities-model-report

    2. METR. "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." Published July 10, 2025. METR reports that experienced developers in its randomized trial took 19% longer when using early-2025 AI tools in that setting. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

    3. METR. "We Are Changing Our Developer Productivity Experiment Design." Published February 24, 2026. METR says late-2025 AI likely accelerated open-source developers, while noting strong selection effects and wider agentic-tool adoption. https://metr.org/blog/2026-02-24-uplift-update/

    4. GitHub Docs. "Planning a project with GitHub Copilot." GitHub describes using Copilot to turn a product idea into an issue tree with epics, features, and tasks. https://docs.github.com/copilot/tutorials/plan-a-project

    5. GitHub Docs. "Using GitHub Copilot to create or update issues." GitHub says Copilot can draft structured issues from natural language or images and fill in title, body, labels, assignees, and templates. https://docs.github.com/en/copilot/how-tos/copilot-on-github/copilot-for-github-tasks/use-copilot-to-create-or-update-issues

    6. OpenAI. "Introducing Codex." OpenAI says Codex can work on many tasks in parallel, write features, answer questions about a codebase, fix bugs, propose pull requests, and iteratively run tests. https://openai.com/index/introducing-codex/

    7. GitHub Docs. "Configuring automatic code review by GitHub Copilot." GitHub documents automatic pull request reviews, including draft pull requests and new pushes, to catch errors earlier in the review flow. https://docs.github.com/en/copilot/how-tos/copilot-on-github/set-up-copilot/configure-automatic-review

    8. Sentry Docs. "Seer." Sentry describes an AI debugging flow that uses issue context and telemetry to root-cause problems, propose solutions, generate code patches, and open pull requests. https://docs.sentry.io/product/ai-in-sentry/seer/

