yk.camelcase.work

Journal

Personal notes, thoughts, and articles on tech, work, and ideas.

Yevhen Kim
tech · ai · AI-Assisted Development · pipeline

Why your deployment pipeline is about to become your competitive advantage

The speed advantage from AI is real. 41% of code written in 2025 came from AI tools. Teams using them merge 98% more pull requests than they did a year ago. That's measurable, and it's happening now.

But here's what's going to hit you: faster code generation doesn't flow evenly through your system. It concentrates pressure at whatever's slowest. For most teams, that's code review.

Take a team tracking their own metrics: PR volume jumped 98%. The time per review stayed the same—a senior engineer reviewing AI output takes roughly as long as reviewing human code. The math breaks immediately. You have 2x the PRs, the same human attention, and review time becomes the thing that stops shipping.
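
The arithmetic is worth making concrete. A minimal sketch, with every number illustrative rather than measured: hold review capacity fixed, double the inflow, and the backlog grows every week.

```python
# Back-of-the-envelope model of the review bottleneck.
# All numbers are illustrative, not measurements.
reviewers = 4
review_hours_per_week = 10    # hours each reviewer actually spends reviewing
hours_per_review = 1.0        # roughly the same for AI and human code

capacity = reviewers * review_hours_per_week / hours_per_review  # 40 reviews/week

for prs_per_week in (40, 79):  # before and after a 98% jump
    growth = prs_per_week - capacity
    print(f"{prs_per_week} PRs/week -> backlog change: {growth:+.0f} PRs/week")
```

At 40 PRs a week the queue holds. At 79, it grows by 39 unreviewed PRs every week, and no amount of individual heroics closes that gap.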

The pressure doesn't stop there. More code flowing to production means more to test. More surface area for bugs. Veracode found that 45% of AI-generated code introduces OWASP vulnerabilities. CodeRabbit's data showed AI code fails at 1.7x the rate of human code. If your test suite catches those failures, great—you buy time to fix them. If it doesn't, production catches them.

This is why the 2025 DORA report found AI acts as a multiplier. It doesn't level the field. It amplifies what's already true about your engineering. Strong teams with real testing and fast deployments get stronger. Teams with weak infrastructure get visibly broken.

Why your competitors are fixing this now

Most tech leads I talk to frame AI tooling the same way: it's a productivity tool. You hand out licenses, engineers write faster, you ship more. Velocity problem solved.

That's the mistake. The velocity problem isn't solved. It's just moved.

The teams actually shipping faster aren't the ones with the best AI models. They're the ones whose CI runs in under ten minutes. Who deploy without ceremony. Whose monitoring catches problems. They're the teams that invested in the stuff everyone thought was solved: testing infrastructure, deployment safety, observability.

When code was the bottleneck, you could afford to have testing be optional. You could afford manual deployments. You could rely on code review to catch bugs because code was rare and valuable. Now code is abundant. Everything that used to be "nice to have" is now on the critical path.

The competitive edge isn't the AI tool. It's the infrastructure that can actually absorb the output without breaking.

What this means for you

Look at your deployment pipeline. Not theoretically. Actual data. How long does your test suite run? What's the false negative rate on your security scanning? How often do you actually catch bugs that would have hit production?
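
Those questions are answerable from data rather than memory. A minimal sketch of the audit, assuming you can export recent CI runs to a CSV; the column names here (duration_minutes, failed, caused_incident_later) are hypothetical placeholders for whatever your CI system exposes.

```python
# Minimal pipeline audit over an exported CSV of recent CI runs.
# Column names are assumptions; map them to your CI provider's export.
import csv
import statistics

with open("pipeline_runs.csv", newline="") as f:
    runs = list(csv.DictReader(f))

durations = sorted(float(r["duration_minutes"]) for r in runs)
p50 = statistics.median(durations)
p90 = durations[int(0.9 * (len(durations) - 1))]

# Escapes: runs where the suite passed but the change later caused an incident.
escapes = sum(
    1 for r in runs if r["failed"] == "0" and r["caused_incident_later"] == "1"
)

print(f"test suite: p50 {p50:.1f} min, p90 {p90:.1f} min")
print(f"escaped defects in sample: {escapes} of {len(runs)} runs")
```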

Now ask: what breaks if code volume doubles?

If it's code review, stop pretending senior engineers can review more. Parallelize, automate the mechanical checks, let humans focus on architecture and intent. If it's testing, your current test suite didn't cover enough cases before, and it definitely doesn't now. You need parallel test execution, better instrumentation, probably different testing strategies—more integration tests, fewer unit tests in some areas.

If it's deployment, you need to make it safer and faster so you can do it more often with less risk.
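
What "safer and faster" can look like in practice is a gate that makes rollback automatic instead of heroic. A minimal canary sketch; get_error_rate, promote, and rollback are hypothetical hooks standing in for whatever your deploy platform exposes.

```python
# Minimal canary gate: promote only if the canary's error rate stays close
# to baseline. All platform hooks are hypothetical stand-ins.
import time

MAX_RELATIVE_REGRESSION = 1.2   # canary may be at most 20% worse than baseline
OBSERVATION_WINDOWS = 5         # checks before promoting

def canary_deploy(get_error_rate, promote, rollback) -> bool:
    baseline = get_error_rate("stable")
    for _ in range(OBSERVATION_WINDOWS):
        time.sleep(60)          # let real traffic hit the canary
        if get_error_rate("canary") > baseline * MAX_RELATIVE_REGRESSION:
            rollback()          # automatic, no meeting required
            return False
    promote()
    return True
```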

You might hire differently. You might need infrastructure engineers instead of more developers. You might invest in tooling that doesn't ship features—the stuff that validates, tests, and deploys code.

And you might decide that not all of the velocity spike is worth taking. If your team can generate code but can't safely deploy it, the code sitting in a branch doesn't help anyone.

The teams I know who are actually winning with AI right now? They're not celebrating code generation speed. They're obsessing over deployment speed. They're the ones who realized that when input speed changes, everything downstream becomes visible. The ones who fixed it instead of just accepting it.

That's the competitive advantage. Not the fancier AI model. The infrastructure that actually works.

Yevhen Kim
tech · ai · AI-Assisted Development

AI in development is no longer an add-on — it is becoming a base layer

For a few years, AI in development meant one thing: code completion. Ready-made patterns. A minute saved here, a useful hint there.

That phase is over.

In 2026, the shift is not that AI writes code faster. It is that AI no longer stops at writing code. It now works through the full path of a change: collecting context, proposing a plan, editing code, reviewing proposed changes, flagging security and quality risks, suggesting fixes, and handing the work into whatever automated process prepares it for release.

Once AI touches multiple stages of a change, it stops being a convenient extra. It becomes part of how the team actually works.

GitHub made this visible in late 2025, describing AI, agents, and typed languages as forces

driving the biggest shifts in software development in more than a decade.

What matters there is not the phrasing. It is what the phrasing reflects: AI is no longer treated as a side tool. It is increasingly treated as part of the engineering foundation.

Around the same time, GitHub's Copilot code review system changed. It moved away from a model where AI only leaves comments, toward something that combines LLM analysis, tool calls, and rule-based checks that produce consistent results. GitHub describes it as combining

detections from large language models, calls to external tools, and consistent rule-based checks through tools such as ESLint and CodeQL.

AI is not just suggesting text anymore. It is working alongside the tools teams already use to inspect code.

The same shift shows up in GitHub's Copilot cloud agent, which can inspect a repository, build an implementation plan, make changes in a separate branch, and prepare them for review. GitHub calls it

an autonomous and asynchronous software development agent.

So AI is no longer waiting in the editor for the next prompt. It can take a scoped task, work through several steps, and return the result into the team's normal review flow. That is what "agentic workflows" actually means — not the label, but the shift: from reacting to a single instruction, to participating in several connected ones.

Once that happens, the questions teams ask have to change too.

"Should we use AI?" stopped being interesting. In 2026, even "which model do we prefer?" is already the wrong level. The more useful question is how well AI is woven into change review, testing, release prep, security, and quality control.

Because once AI works through the full path from draft to finished change, speed stops being what you optimize for. Control does.

Without control, AI accelerates noise.

A mediocre requirement becomes a neatly packaged change faster. A fragile architectural decision gets embedded earlier. A weak assumption survives longer because building a quick prototype to test an idea has become cheap. A review process that was already missing things misses them even faster.

This is why the strongest current research does not describe AI as something that automatically matures engineering. DORA's 2025 findings used one word:

an amplifier.

AI strengthens good systems and magnifies problems in weak ones. Teams with solid delivery discipline, real code review, and clear standards get more out of it. Teams with weak foundations do not get more maturity. They get faster instability.

The practical consequences are not abstract. They run through the actual tools and habits teams use every day.

Code review becomes a mixed system: AI surfaces issues, rule-based checks confirm or reject them, people decide what is genuinely risky. The job shifts from reading everything to managing signal and escalation.

CI/CD can no longer just run a build. If AI can create changes, patch them, and hand them forward, the pipeline has to check intent, test behavior, and catch the kinds of mistakes a model can make while sounding completely confident.

Repository quality starts to matter as an input, not just a byproduct. The better the documentation, architectural boundaries, and tests inside a project, the less chaos AI introduces when it operates autonomously.
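
The mixed review loop described above can be made concrete. A minimal triage sketch with hypothetical data shapes: rule-based checks confirm or reject what the model surfaced, and only risky or unconfirmable findings reach a human.

```python
# Sketch of mixed-system review triage. Data shapes are hypothetical;
# rule_confirmed would come from ESLint/CodeQL-style checks.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    ai_severity: str              # "low" | "medium" | "high", as reported by the model
    rule_confirmed: bool | None   # True/False from rule-based checks, None if no rule applies

def triage(findings: list[Finding]) -> dict[str, list[Finding]]:
    buckets = {"auto_reject": [], "auto_fix_queue": [], "human_review": []}
    for f in findings:
        if f.rule_confirmed is False:
            buckets["auto_reject"].append(f)      # rules contradict the model
        elif f.rule_confirmed and f.ai_severity == "low":
            buckets["auto_fix_queue"].append(f)   # confirmed and mechanical
        else:
            buckets["human_review"].append(f)     # risky or unconfirmable: a person decides
    return buckets
```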

And what makes a strong developer is shifting. The best people stand out not only because they ship faster, but because they are better at knowing what to hand to AI, what to constrain, what to verify by hand, and where their own attention still has to stay sharp.

GitHub described the direction plainly:

Copilot used to be an autocomplete tool. Now, it is a full AI coding assistant that can run multi-step workflows.

Once AI participates in the system rather than assisting inside the editor, teams need a different level of discipline: clear rules in the repository, strong automated tests, a reliable review process before changes merge, security tools inside the delivery path, documented architectural limits, and explicit policies for what AI is allowed to do autonomously.
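
The last item on that list is the one teams most often leave implicit. A minimal sketch of an explicit autonomy policy written down as data; every field name here is an assumption, not a standard.

```python
# Hypothetical AI-autonomy policy, written down and enforceable rather than implied.
AI_POLICY = {
    "may_open_draft_prs": True,
    "may_merge_without_human": False,
    "may_touch_paths": ["src/", "tests/", "docs/"],
    "must_never_touch": ["migrations/", "infra/", ".github/workflows/"],
    "required_checks_before_review": ["lint", "unit_tests", "security_scan"],
}

def change_allowed(path: str) -> bool:
    """Return True if an autonomous agent may modify this path."""
    if any(path.startswith(p) for p in AI_POLICY["must_never_touch"]):
        return False
    return any(path.startswith(p) for p in AI_POLICY["may_touch_paths"])
```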

None of this is about whether AI belongs in software development. That is already settled.

The question is whether teams treat it as a loose trick for speed, or as something that requires the same rigor as any other critical part of how changes are made and delivered. AI in development is no longer an add-on. It is becoming part of the foundation. The teams that do well will not be the ones that generate the most code — they will be the ones that build the strongest system around it.

Yevhen Kim
tech · creativity

Creativity in software development is not about inspiration, but about the quality of decisions

Creativity in software development is usually described in terms that don't say much. Inspiration. Unusual ideas. Thinking outside the box. None of that is a useful description for engineering work.

What matters is not whether a solution looks original, but whether it is stronger: more precise for the task, more stable as the system evolves, easier to maintain. In engineering, novelty alone proves nothing. It matters only when it improves the outcome.

That is why creativity starts not with the answer, but with how the problem is defined. A strong engineer does not simply search for a solution inside a given frame — first, they check whether the frame itself is correct. Research on problem framing shows that experienced practitioners are more likely to question and reframe the problem, while less experienced ones more often solve it as initially presented. In software development, this difference is often decisive. Many expensive solutions are born from the wrong question asked too early.

Weak creativity locks onto the first understandable version of the task and works hard inside accidental constraints. Strong creativity does something else first: it checks whether the team is operating at the right level. Maybe the problem should not be solved with more complexity. Maybe it should be reformulated. Maybe the best move is to remove the source of complexity altogether, not improve what is already there.

Real systems exist under pressure: time, cost, backward compatibility, legacy architecture, operational risk. Creativity is not an escape from these constraints. It is the ability to work with them better than others. Constraints are not the enemy of a strong solution — they are the material from which it is shaped.

And creativity in development is inseparable from trade-offs. There is no real-world system where you get maximum flexibility, minimum cost, perfect simplicity, and zero maintenance risk at the same time. A mature approach does not try to avoid trade-offs — it works with them more precisely. Research on design trade-offs shows that strong practitioners do not just choose the least bad option inside a fixed solution space. They often change the solution space itself — redefine the problem so that part of the original conflict disappears.

One of the strongest forms of creativity in development is simplification: remove a layer instead of adding one, choose a narrow and accurate solution now rather than “future-proof universality” by default, choose a form that survives real use rather than one that looks impressive on a diagram.

Weak creativity produces complexity. It likes structures that look clever. Strong creativity produces clarity — cuts unnecessary entities, dependencies, and exceptions. The goal is not to impress with the construction. It is to improve the quality of the system.

In a good engineering environment, creativity rarely looks romantic. It is not a flash of inspiration at a whiteboard. It is a sequence of exact moves: redefining the question, noticing where the team fixed on the first option too early, refusing an elegant but expensive architecture, choosing a trade-off that will age well in production.

It is also not just an individual trait. Research on IS development teams shows that creativity emerges from how people, structure, and task interact, not from one “brilliant individual” working alone. Teams either can question the problem framing and revise their assumptions, or they cannot. That depends on the environment. Even a strong engineer narrows in an environment that rewards only fast execution of the first accepted idea.

The spread of generative AI makes this more visible. As routine implementation gets cheaper, the value of human judgment rises. Recent work on AI in software engineering argues directly: programming is not the same as software engineering, and human judgment, creativity, and adaptability remain central. When a draft implementation can be produced quickly, choosing the right direction matters more, not less. And when code appears faster, the cost of a wrong idea scales faster too.

A weak idea is no longer implemented slowly. It can now be expanded across services, workflows, and interfaces at speed. A wrong architectural assumption multiplies at the same velocity that makes delivery feel productive. A bad problem framing can quickly become a large volume of convincing-looking code. AI does not reduce the importance of creativity. It raises the demands on it.

Creativity in software development is not a decorative trait and not a pleasant bonus on top of the “real” engineering work. It is part of mature engineering judgment. It shows up in how a person defines the problem, works under constraints, handles trade-offs, removes unnecessary complexity, and tells an interesting idea apart from a genuinely strong one.

In that sense, creativity is not about inspiration.

It is about the quality of decisions.

It is about seeing the stronger path before the system has time to grow around the weaker one.

Yevhen Kim
tech · ai · security

In the age of AI, security has to be built into the entire path from code to release

Until recently, AI in development meant one thing: speed. Write a function faster. Ship a prototype faster. Close a ticket faster.

That’s no longer the whole story.

The question that actually matters now is how safe the code is — and how well protected the entire path is from the moment something is written to the moment it ships.


The problem is no longer theoretical

Start with something simple: the risk is real.

Veracode tested code samples generated with AI help and found that 45% failed security checks and contained dangerous vulnerabilities. More troubling: newer and larger models didn’t improve that number. (veracode.com)

That reframes the question.

Code generated with AI can’t be evaluated on just two axes — does it work, and how fast was it written. There’s a third:

Does it introduce risk that nobody will catch until it's too late?


Why “better AI” doesn’t solve this

It’s tempting to treat this as a temporary problem — the next model will be smarter and produce safer code by default. That’s not how it works.

AI doesn’t evaluate systems the way an experienced engineer does. It doesn’t own the product, doesn’t feel the cost of a mistake in production, and carries no professional responsibility for what breaks. It produces the most statistically likely version of the code, not the most careful one.

That’s why model capability doesn’t translate into higher security. Veracode’s data says so directly. (veracode.com)

AI can speed up writing code. But it doesn’t remove the need for skepticism, review, and security discipline.


Why the security question goes beyond the code itself

In a real product, a change almost never arrives as just “new code.” It brings libraries, external dependencies, scripts, build tools, automated checks, containers, and third-party services — everything the code passes through to reach production.

So the question isn’t only whether the code is safe. It’s whether the entire delivery path is.

OWASP makes this explicit: software passes through a whole chain of creation, build, testing, and delivery, and failures can appear anywhere along it, not just in the code itself. (owasp.org)

For AI, three things make this especially relevant.

1. AI speeds up how fast changes arrive

Faster-written code means libraries added faster, third-party components connected faster, integrations wired together faster. If a team is moving quickly but isn't tracking exactly what it's pulling into the product, speed starts working against it.

2. AI can produce a weak solution with confidence

A quick read tells you almost nothing. The data handling might be sloppy. Permissions could be wider than anyone meant to set. Input validation might only work when nothing unusual comes in. None of that shows up in the diff, and the change gets approved anyway.

3. The AI layer itself is part of the risk surface

Once a team builds AI into development, the risk zone expands beyond the code. It includes models, plugins, agents, automated checks, external integrations, and the whole pipeline through which AI influences what ships.

The risk isn’t only in what AI wrote. It’s in how AI is wired into the path from idea to release.


A “smart comment” isn’t enough

In 2025, GitHub said explicitly that Copilot code review combines LLM analysis with external tool calls and rule-based checks via ESLint and CodeQL. (github.blog)

That’s worth noting. One of the major players in this space isn’t betting that AI reading the code is sufficient.

That means static analysis, dependency scanning, and a human who reads the diff before approving it.

One AI comment isn’t enough, even if it sounds convincing.


What this means for websites and web applications

For the web, the conclusion is blunt:

Any output AI produces has to go through the same full security path as code written by a human.

Not a simplified version. Not “it’s just a draft.” Not “we’ll check it later.” The same path.

At minimum, that means the following.

Security has to be built in from the start

Security added at the end isn’t security — it’s a check that routinely gets skipped. OWASP is explicit that security must run through the entire development lifecycle, from design to release. (owasp.org)

For AI-generated code, this is direct: if security isn’t built into the process, AI speed just carries risk into production faster.

Automated checks are required

Static analysis, linters, dependency scanning, and technical filters aren’t optional layers — they’re the baseline. That’s why GitHub is building code review around tools like CodeQL and ESLint rather than relying on AI alone. (github.blog)
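
A minimal sketch of what "required, not optional" means in the pipeline: a gate that fails the build when any check fails. The specific tools here (ESLint via npx, pip-audit for dependencies) are illustrative choices, not a recommendation.

```python
# Minimal required-checks gate. Tool choice is illustrative; substitute the
# analyzers and auditors your stack already uses.
import subprocess
import sys

CHECKS = [
    ["npx", "eslint", "."],   # static analysis / lint
    ["pip-audit"],            # known-vulnerable dependencies
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"gate failed: {' '.join(cmd)}", file=sys.stderr)
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```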

External components need vetting

Dependencies are one of the main attack surfaces for web applications. OWASP recommends keeping an inventory of components, setting dependency rules, verifying artifact origin, and maintaining a controlled build process. (owasp.org)
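
One hedged sketch of that inventory in practice, using a pinned Python requirements file as the example; the file names and formats are assumptions.

```python
# Component-inventory gate in the OWASP spirit: fail when the project declares
# a dependency that is not in the approved inventory. File formats are assumed.
import json

with open("approved_components.json") as f:   # hypothetical inventory file
    approved = {name.lower() for name in json.load(f)}

with open("requirements.txt") as f:           # pinned Python deps as the example
    declared = {
        line.split("==")[0].strip().lower()
        for line in f
        if line.strip() and not line.lstrip().startswith("#")
    }

unapproved = declared - approved
if unapproved:
    raise SystemExit(f"unapproved components: {sorted(unapproved)}")
print("all declared components are in the approved inventory")
```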

Human review still matters

AI can flag a suspicious pattern. But the judgment call can’t be handed off to a machine — especially not for access control, input validation, external integrations, user data, or edge case behavior.


The most dangerous mistake: treating security as something to add later

In 2026, that’s not just naïve. It’s a bad bet.

DORA’s report on AI in development describes AI as an amplifier: it strengthens strong systems and exposes the weak spots in weak ones. The teams that actually benefit aren’t the ones that moved fastest to adopt AI — they’re the ones that embedded it into an engineering system that already had checks, tests, and accountability. (dora.dev)

AI doesn’t lower security requirements. It raises the cost of ignoring them.


What maturity looks like in 2026

Code generated with AI help isn’t treated as safe by default. It gets the same skeptical review as anything else — maybe more.

No AI output skips the standard checks: security baked into the process, automated analysis, dependency scanning, tests, human review.

Risk gets assessed broadly — not just the code, but the libraries, tools, automation, external integrations, and the AI pipeline itself.

Speed and safety are kept separate. If AI accelerated a change, that doesn’t mean the change is safer.

Security is part of the delivery system, not an inspection at the end.


Conclusion

AI has accelerated more than productivity. It has accelerated the rate at which weak decisions, dangerous patterns, and risky dependencies enter a product. As a result, code security and supply chain security can't stay at the margins of the development process.

In 2026, treating this as someone else’s problem — the security team’s, the next model’s, the next sprint’s — is just a way of not dealing with it.

Security is a baseline condition of working at this level.

Yevhen Kim
tech · ai

AI does not reduce the developer's role. It raises the bar.

There's a lazy argument people keep repeating: if AI writes more code, developers must matter less. I get why that sounds convincing. It just falls apart the second you remember that software development was never only about typing.

Typing is the easy part. Or at least the easier part.

The real work is understanding what needs to be built, noticing what's missing, making tradeoffs, catching dumb decisions before they harden into system behavior, and figuring out whether a solution is actually good or just temporarily convenient. That part did not go away. If anything, AI makes it more obvious.

Because yes, the routine stuff is getting cheaper. Boilerplate. Test scaffolding. Refactors you don't want to do by hand. The sort of code that feels like moving furniture from one side of the room to the other. If AI helps with that, great. Nothing sacred was lost there.

What changes is where the weight sits.

If a tool can give you working code in minutes, then your value is less about raw production and more about direction. Can you frame the problem properly? Can you give enough context? Can you tell when the output is wrong, even when it looks polished? Can you stop a bad idea before it spreads through five services and a data model nobody wants to touch six months later?

That's the part that gets harder.

AI is fast, but fast has this nasty habit of impersonating competence. That's what makes it useful and risky at the same time. I've seen generated code that looked clean, read well, passed a few checks, and still missed the point completely. Not in some dramatic sci-fi way. Just normal, expensive wrongness.

So no, I don't think this lowers the bar for developers. It does the opposite.

Context matters more now, because weak input produces weak output faster.

Verification matters more, because plausible code is not trustworthy code.

Architecture matters more, because once implementation gets cheap, high-level mistakes get expensive.

And process matters more too. If tools help a team move faster, then review discipline has to get tighter. Otherwise you're not accelerating engineering, you're accelerating mess.

The old idea that a developer's value comes from how much code they can personally churn out was already shaky. AI just exposed how shaky it was. The useful developers were never valuable because they typed a lot. They were valuable because they judged well. They knew what to build, what not to build, and where the risk really was.

That still sounds true to me. Probably more true than before.

At this point, arguing about whether AI can write code feels beside the point. Obviously it can. The real question is who can use it without getting sloppy. Who can hand off the boring work without handing off judgment. Who can still protect the shape of the system while everything around them gets faster.

That's the real shift.

A strong developer is not becoming less important. They're being asked to operate at a higher level, with less room for fuzzy thinking and more consequences for bad calls. AI does not shrink the role. It makes the role stricter.

Yevhen Kim
tech · code

Why Writing Good Code Is No Longer Enough

Good code still matters. Clean implementation, engineering discipline, architectural judgment—these remain the foundation of strong engineering. That hasn't changed.

What changed is simpler: good code by itself no longer guarantees a good outcome. A team can ship a feature quickly and elegantly and still solve the wrong problem. It can overbuild, spend effort in the wrong place, or produce a system that looks impressive in the code but delivers little value in practice.

So the bar for engineers shifted. The market rewards not just the person who writes the cleanest code or ships the fastest. It rewards the engineer who helps a team make sharper decisions. Where to simplify. Where to challenge the brief. Where to protect the architecture. Where to cut scope. And crucially—where a technically beautiful path just isn't worth its long-term cost.

This is the real shift.

For years, broad context was treated as a nice-to-have. An engineer who understood the product, noticed trade-offs early, and asked sharp questions had an edge.

Now it's the baseline. That's partly because software doesn't exist in isolation. A decision in one place affects cost of change, team speed, system resilience, user experience, and the product's ability to evolve. In that environment, good code is necessary. It's no longer sufficient.

So evaluation changed too. Strong engineers are judged on implementation quality, yes. But also on judgment and decision-making under constraints. Can they see when the problem is framed badly? When a feature adds more complexity than value? When the team is optimizing activity instead of usefulness? Can they simplify without breaking the system? Can they push back before expensive momentum hardens into roadmap?

That's where engineering maturity shows.

Sometimes in architecture. Sometimes in knowing when to remove a layer instead of adding one. Sometimes in refusing an elegant solution because maintenance is too costly. And sometimes in seeing the real issue isn't the code at all—it's how the task was defined.

That kind of thinking used to distinguish an engineer. Now it's becoming standard.

AI makes this even clearer. When routine work is automated or at least accelerated, mechanical code writing loses its edge. It doesn't disappear. But as a competitive advantage, it becomes smaller.

What becomes valuable is harder to automate: understanding context, framing problems correctly, choosing the right constraints, separating signal from noise, and making mature trade-offs under uncertainty.

The research backs this. Microsoft's 2026 developer study found developers spend roughly 10% of their day actually writing code, even though most AI tools target exactly that fraction. A 2026 study on GenAI in development found the strongest gains in design, implementation, testing, and documentation. But the center of value shifted toward specification quality and architectural reasoning. The faster the mechanics get, the more visible judgment becomes.

That matters because AI doesn't just accelerate good work. It accelerates bad decisions too. A weak requirement becomes a polished implementation faster. A shallow feature idea survives longer because prototyping is cheap. Teams mistake motion for progress because output arrives quickly and looks convincing.

So AI doesn't reduce the role of the strong engineer. It exposes what that role was always supposed to contain.

Not just writing code well. Understanding what should be built. Seeing what should be cut. Recognizing where complexity is unjustified. Connecting technical depth to real outcomes.

Product thinking, creativity, and comfort with uncertainty aren't external to engineering anymore. They're part of modern engineering work. Not instead of technical depth. Alongside it.

Good code is still mandatory. But it no longer answers the full question of professional value. More depends on the ability to see beyond your immediate surface: to know what matters, where compromise is needed, where the brief should be challenged, and where removing complexity is the stronger move.

A truly strong engineer connects implementation quality with product understanding. Technical depth with practical impact. Professional confidence with mature decision-making.

What used to distinguish an engineer is steadily becoming the baseline.

Yevhen Kim
tech · ai

From Task Executor to Product Co-Author

For a long time, software development followed a simple model: product defines the task, engineering implements it.

That model was never wrong. Speed, technical depth, reliability, and execution quality still matter. None of that has become less important.

What changed is something else: as a full description of where engineering creates value, that model is no longer enough.

This shift is not universal. There are still organizations where engineers mainly receive requirements and execute them. But across startups, stronger product teams, and AI-heavy environments, the market is clearly moving in a different direction: engineers are increasingly expected to shape not only how something is built, but whether the chosen direction makes sense.

Today, too much depends not only on how well a solution is built, but on whether the direction itself is right. In many teams, the most expensive mistake no longer happens in code. It happens earlier — in weak product logic, shallow market understanding, inflated assumptions about user value, or a badly underestimated cost of complexity.

That is why a strong engineer is increasingly valuable not only at the point of implementation, but earlier — where a team still has room to question the path itself.

This was always easier to see in startups. There, the distance between idea, product, and implementation is short. Engineers often stand close to the actual foundation of the decision: the hypothesis, the user value, the market constraint. The point where a feature either helps in practice or only sounds convincing in a meeting.

That environment makes one thing obvious very quickly: high-quality execution does not rescue a weak premise.

A technically strong product can still miss its market. Not because the team is weak. Not because the engineering is poor. Sometimes the original assumptions about users, timing, demand, or product value were wrong. That becomes visible only after the team has already committed to a roadmap.

Once goals are locked, tickets flow, and delivery is measured by throughput, development can quietly turn into high-quality execution of a questionable direction.

That is where the engineer's role changes.

The difference between a task executor and a product co-author is not rhetorical. It is practical.

A task executor receives a formulation and focuses on implementation.

A product co-author looks one level earlier: What problem are we actually solving? Why this path and not another? For whom does it matter? What is the cost of this decision over time? Are we creating complexity where the real value is too small to justify it?

This is not a replacement for product management. It is part of mature engineering.

A strong engineer contributes not only technical choices, but a view of their consequences. They can see where a seemingly reasonable feature creates long-term drag. They can spot when a roadmap is building around assumptions that have not yet earned that certainty. They can notice when the team is optimizing delivery around a weak premise.

In other words, they do not just help build the thing right. They help the team avoid building the wrong thing with great discipline.

Modern product practice increasingly points in the same direction. Product development is becoming more cross-functional, more discovery-driven, and more dependent on fast learning loops between customer needs, business decisions, and technical constraints.

In that environment, engineering is not just a downstream recipient of requirements. It is one of the functions that helps shape which requirements deserve to exist in the first place.

This matters even more now because AI is compressing the cost of mechanical execution.

Prototypes can be assembled faster. Flows can be tested earlier. Draft implementations arrive quicker. Documentation, boilerplate, and routine transformations take less effort than before. That speed is useful — but it changes the economics of bad decisions.

When production becomes cheaper, weak product logic scales faster. When prototyping gets easier, shallow ideas survive longer than they should unless someone challenges them. When teams can move faster, the cost of moving in the wrong direction rises with the same speed.

So AI does not make the engineer less relevant to product thinking. It makes that contribution harder to ignore.

If routine implementation gets cheaper, value moves toward judgment. Understanding the problem correctly. Seeing constraints before they become failures. Distinguishing signal from motion. Choosing a strong trade-off. Recognizing when apparent progress is just accelerated waste.

This is one reason the old picture of the developer as a highly skilled recipient of tasks is becoming too small.

It is not that implementation matters less. It is that implementation alone no longer explains enough.

In many teams, the engineer is now expected to participate in shaping the solution — not as a political power grab and not as role inflation, but as a professional response to how product development actually works under speed, uncertainty, and constant iteration.

That shift is not a fashionable status upgrade. It is a structural consequence of modern software work.

The more complex products become, the faster delivery cycles get, and the cheaper mechanical implementation turns, the more valuable the engineer becomes who can strengthen not only the code, but the decision behind the code.

That is what role maturity looks like now:

not just building well, but helping the team build the right thing more accurately.
