In the age of AI, security has to be built into the entire path from code to release
Until recently, AI in development meant one thing: speed. Write a function faster. Ship a prototype faster. Close a ticket faster.
That’s no longer the whole story.
The question that actually matters now is how safe the code is — and how well protected the entire path is from the moment something is written to the moment it ships.
The problem is no longer theoretical
Start with something simple: the risk is real.
Veracode tested code samples generated with AI help and found that 45% failed security checks and contained dangerous vulnerabilities. More troubling: newer and larger models didn’t improve that number. (veracode.com)
That reframes the question.
Code generated with AI can’t be evaluated on just two axes — does it work, and how fast was it written. There’s a third:
Does it introduce risk that nobody will catch until it’s too late.
Why “better AI” doesn’t solve this
It’s tempting to treat this as a temporary problem — the next model will be smarter and produce safer code by default. That’s not how it works.
AI doesn’t evaluate systems the way an experienced engineer does. It doesn’t own the product, doesn’t feel the cost of a mistake in production, and carries no professional responsibility for what breaks. It produces the most statistically likely version of the code, not the most careful one.
That’s why model capability doesn’t translate into higher security. Veracode’s data says so directly. (veracode.com)
AI can speed up writing code. But it doesn’t remove the need for skepticism, review, and security discipline.
Why the security question goes beyond the code itself
In a real product, a change almost never arrives as just “new code.” It brings libraries, external dependencies, scripts, build tools, automated checks, containers, and third-party services — everything the code passes through to reach production.
So the question isn’t only whether the code is safe. It’s whether the entire delivery path is.
OWASP makes this explicit: software passes through a whole chain of creation, build, testing, and delivery, and failures can appear anywhere along it, not just in the code itself. (owasp.org)
For AI, three things make this especially relevant.
1. AI speeds up how fast changes arrive
Faster-written code means faster-added libraries, faster-connected third-party components, faster-built technical links. If a team is moving quickly but isn’t tracking what exactly it’s pulling into the product, speed starts working against it.
2. AI can produce a weak solution with confidence
A quick read tells you almost nothing. The data handling might be sloppy. Permissions could be wider than anyone meant to set. Input validation might only work when nothing unusual comes in. None of that shows up in the diff, and it gets approved anyway.
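Here's a hypothetical sketch of what that failure mode looks like (all names are invented for illustration; the point is the shape of the bug, not any specific framework):

```python
class User:
    """Minimal stand-in for an ORM model (illustrative only)."""

    def __init__(self) -> None:
        self.display_name = ""
        self.age = 0
        self.is_admin = False


def update_profile(user: User, payload: dict) -> None:
    """What a confident but weak AI suggestion often looks like."""
    for key, value in payload.items():
        # No allowlist: a payload containing "is_admin" is written straight
        # through, permissions wider than anyone meant to set.
        setattr(user, key, value)
    # Validation that only holds on the happy path: "42" converts fine,
    # but None or "forty-two" raises deep inside the request handler.
    user.age = int(payload.get("age", 0))


ALLOWED_FIELDS = {"display_name", "age"}


def update_profile_reviewed(user: User, payload: dict) -> None:
    """The same change with the skepticism the diff deserves."""
    unknown = payload.keys() - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    if "age" in payload:
        age = payload["age"]
        if not isinstance(age, int) or not 0 <= age <= 150:
            raise ValueError("age must be an integer in [0, 150]")
        user.age = age
    if "display_name" in payload:
        user.display_name = str(payload["display_name"])[:100]
```

Both versions pass a smoke test. Only one of them survives unusual input, and the difference is invisible unless someone, or something, is looking for it.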
3. The AI layer itself is part of the risk surface
Once a team builds AI into development, the risk zone expands beyond the code. It includes models, plugins, agents, automated checks, external integrations, and the whole pipeline through which AI influences what ships.
The risk isn’t only in what AI wrote. It’s in how AI is wired into the path from idea to release.
A “smart comment” isn’t enough
In 2025, GitHub said explicitly that Copilot code review combines LLM analysis with external tool calls and rule-based checks via ESLint and CodeQL. (github.blog)
That’s worth noting. One of the major players in this space isn’t betting that AI reading the code is sufficient.
In practice, the review still needs static analysis, dependency scanning, and a human who reads the diff before approving it.
One AI comment isn’t enough, even if it sounds convincing.
What this means for websites and web applications
For the web, the conclusion is blunt:
Any output AI produces has to go through the same full security path as code written by a human.
Not a simplified version. Not “it’s just a draft.” Not “we’ll check it later.” The same path.
At minimum, that means the following.
Security has to be built in from the start
Security added at the end isn’t security — it’s a check that routinely gets skipped. OWASP is explicit that security must run through the entire development lifecycle, from design to release. (owasp.org)
For AI-generated code, this is direct: if security isn’t built into the process, AI speed just carries risk into production faster.
Automated checks are required
Static analysis, linters, dependency scanning, and technical filters aren’t optional layers — they’re the baseline. That’s why GitHub is building code review around tools like CodeQL and ESLint rather than relying on AI alone. (github.blog)
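As a minimal sketch of what "baseline" means in practice (assuming a Python stack; ruff, bandit, and pip-audit here stand in for whatever equivalents your ecosystem uses), the gate can be as small as a script that refuses to pass if any check fails:

```python
"""Minimal sketch of a merge gate, not a full CI setup.

Assumes three common Python-ecosystem tools are installed: ruff (linting),
bandit (static security analysis), and pip-audit (known-vulnerable
dependencies). Swap in whatever your stack already standardizes on.
"""
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "src"],  # linting
    ["bandit", "-r", "src"],   # static security analysis
    ["pip-audit"],             # scan installed dependencies for known CVEs
]


def main() -> int:
    failed = []
    for cmd in CHECKS:
        # Run every check even after a failure so one pass reports everything.
        if subprocess.run(cmd).returncode != 0:
            failed.append(" ".join(cmd))
    if failed:
        print(f"blocking merge; failed checks: {failed}", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The specific tools matter less than the property: the checks run on every change, AI-assisted or not, and a failure blocks the merge instead of becoming a comment someone can scroll past.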
External components need vetting
Dependencies are one of the main attack surfaces for web applications. OWASP recommends keeping an inventory of components, setting dependency rules, verifying artifact origin, and maintaining a controlled build process. (owasp.org)
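The inventory step doesn't require heavy tooling to start. A minimal sketch for a Python environment (real projects would generate a proper SBOM with dedicated tooling; CycloneDX and Syft are common choices):

```python
"""Minimal sketch of a dependency inventory for a Python environment.

Even this much makes "what exactly are we shipping?" a diffable,
answerable question between one build and the next.
"""
import json
from importlib.metadata import distributions


def build_inventory() -> list[dict]:
    # Record every installed distribution with its version so the list
    # can be diffed between builds and checked against dependency rules.
    inventory = [
        {"name": dist.metadata["Name"] or "unknown", "version": dist.version}
        for dist in distributions()
    ]
    return sorted(inventory, key=lambda d: d["name"].lower())


if __name__ == "__main__":
    print(json.dumps(build_inventory(), indent=2))
```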
Human review still matters
AI can flag a suspicious pattern. But the judgment call can’t be handed off to a machine — especially not for access control, input validation, external integrations, user data, or edge case behavior.
The most dangerous mistake: treating security as something to add later
In 2026, that’s not just naïve. It’s a bad bet.
DORA’s report on AI in development describes AI as an amplifier: it strengthens strong systems and exposes the weak spots in weak ones. The teams that actually benefit aren’t the ones that moved fastest to adopt AI — they’re the ones that embedded it into an engineering system that already had checks, tests, and accountability. (dora.dev)
AI doesn’t lower security requirements. It raises the cost of ignoring them.
What maturity looks like in 2026
- Code generated with AI help isn't treated as safe by default. It gets the same skeptical review as anything else — maybe more.
- No AI output skips the standard checks: security baked into the process, automated analysis, dependency scanning, tests, human review.
- Risk gets assessed broadly — not just the code, but the libraries, tools, automation, external integrations, and the AI pipeline itself.
- Speed and safety are kept separate. If AI accelerated a change, that doesn't mean the change is safer.
- Security is part of the delivery system, not an inspection at the end.
Conclusion
AI hasn't just accelerated productivity — it's accelerated the rate at which weak decisions, dangerous patterns, and risky dependencies enter a product. As a result, code security and supply chain security can't stay at the margins of the development process.
In 2026, treating this as someone else’s problem — the security team’s, the next model’s, the next sprint’s — is just a way of not dealing with it.
Security is a baseline condition of working at this level.
Sources
- October 2025 Update: GenAI Code Security Report (Veracode)
- Insights from 2025 GenAI Code Security Report (Veracode)
- New public preview features in Copilot code review: AI reviews that see the full picture (GitHub, 2025)
- State of AI-assisted Software Development 2025 (DORA)
- A03:2025 Software Supply Chain Failures (OWASP Top 10 2025)
- Software Supply Chain Security Cheat Sheet (OWASP)
- OWASP in SDLC
If this article was useful, there are more notes on architecture, AI workflows, delivery, and engineering practice in the journal.