This is the shift! So many engineers (#NotAllEngineers) used to spend all their time looking at code and none of their time looking at the actual product their users are using. When Claude is doing the coding for you, it leaves a lot more time for dogfooding.
"Human-written code died in 2025. Code reviews will die in 2026."
What won't go away: humans maintaining bad AI code. Cognitive debt.
LLMs generate security-vulnerable code thirty percent of the time. That may improve if models are specifically trained on proper DevSecOps. But it won't go away.
Bottom line: LLMs are not deterministic. They are not AGI. They are trained on bad human coding practices, which are endemic in the software industry, so they produce bad code.
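To make that concrete, here is the textbook pattern models keep reproducing from their training data, next to the safe version. A minimal illustrative sketch in Python (sqlite3, hypothetical table), not drawn from any specific model output:

```python
import sqlite3

conn = sqlite3.connect("app.db")  # illustrative database

def get_user_vulnerable(username: str):
    # The pattern LLMs frequently emit: a query assembled by string
    # interpolation, which is wide open to SQL injection.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def get_user_safe(username: str):
    # The fix: a parameterized query, so the driver handles escaping.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchone()
```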
If you look around, a lot of code written by humans also has vulnerabilities and tech debt. Even if models don't get significantly better, some of the quality problems can be solved by building better guardrails using both predictive and deterministic rules.
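A deterministic rule can be as simple as a merge gate that refuses code containing known-bad patterns. A toy sketch, assuming the gate runs in CI over changed files; a real setup would lean on an actual scanner such as Semgrep or Bandit:

```python
import re
import sys

# Toy deterministic guardrail: fail the build if a changed file
# contains patterns that are almost never acceptable in production.
BANNED_PATTERNS = [
    (r"subprocess\.(run|call|Popen)\(.*shell=True", "shell=True invites command injection"),
    (r"(password|api_key|secret)\s*=\s*[\"'][^\"']+[\"']", "hardcoded credential"),
    (r"verify\s*=\s*False", "TLS verification disabled"),
    (r"\beval\(", "eval on untrusted input"),
]

def check_file(path: str) -> list[str]:
    text = open(path, encoding="utf-8").read()
    return [
        f"{path}: {reason} -> {match.group(0)!r}"
        for pattern, reason in BANNED_PATTERNS
        for match in re.finditer(pattern, text)
    ]

if __name__ == "__main__":
    findings = [f for path in sys.argv[1:] for f in check_file(path)]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # a nonzero exit blocks the merge
```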
Code written by humans with vulnerabilities is precisely my point: that's where AI learned to do the same. Cognitive debt is the new term, and it's worse than tech debt.
I doubt the quality problems can be "solved". Improved, no doubt. But not "solved", at least not until we get much better AI than LLMs.
As for guardrails, they don't inherently exist. See here:
If You Bought Anthropic's Make Believe, It's Time To Grow Up
https://disesdi.substack.com/p/if-you-bought-anthropics-make-believe
Do I believe that AI can ever write code without bugs? No.
Do I believe that AI can ever write code better than most humans? Soon!
Guardrails will follow similar patterns. So if millions of lives depend on the code, you cannot rely on automated systems.
Anecdotally speaking, I think code reviews mostly died with human-written code. My teams still check each other's work to some extent, but the pressure to ship faster has forced us to sacrifice anything seen as a "bottleneck" to velocity.
I do not think this is good. But from a business perspective (e.g., from the perspective of the COO), ROI on AI costs is justified by empirical increases in velocity.
You'd think this effectively opens up bandwidth for code reviews. But if the code review takes longer than the AI took to write the feature, the math doesn't make sense to higher-ups.
I think a lot more people are shipping AI-authored code than will admit.
This is similar to going from manual deploys to automated deploys. We need better guardrails to make this work.
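In that spirit, the guardrail for AI-authored changes could look like the same hard gate we already put in front of automated deploys. A minimal sketch; the commands below are placeholders for whatever your pipeline actually runs:

```python
import subprocess
import sys

# Minimal pre-merge/pre-deploy gate: every check must pass
# deterministically before the pipeline is allowed to ship.
# The commands are placeholders, not a prescribed stack.
GATES = [
    ["pytest", "-q"],                # test suite must pass
    ["bandit", "-q", "-r", "src/"],  # static security scan
    ["pip-audit"],                   # known-vulnerable dependencies
]

def run_gates() -> bool:
    for cmd in GATES:
        print("gate:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("BLOCKED by:", " ".join(cmd))
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_gates() else 1)
```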
100%. Interestingly, a few days ago, a semi-formal study was published showing how Claude Code produces poor-performance code. This probably tracks with how it’s trained. And even though this could be partially mitigated by people telling it “write production code”, I doubt that’s common practice.
https://www.codeflash.ai/blog-posts/hidden-cost-of-coding-agents
It’s funny to see how this dynamic plays out over time:
1. Agents almost never getting it right.
2. Agents getting it right sometimes.
3. Agents getting it right often.
4. Agents getting it right most of the time.
5. Agents getting it right enough to teach other agents.
… which actually feels like how you onboard and upskill a new engineer.
Funny how that works.
Excellent that you quoted from StrongDM!
More awareness is needed for "Code must not be written by humans / Code must not be reviewed by humans".
I am not reviewing code anymore. But I am user testing my software 10x more.