Beyond Agentic Coding: Are We Just Asking for More Trouble?
The AI Agents Are Coming… Or Are They Just Bringing More AI Slop?
Alright, you lot. Gather ’round, because your resident ‘Wong Edan’ has something to say about this latest shiny bauble in the tech world: agentic coding. Everywhere I look, from my morning coffee dregs to the deepest corners of the internet, people are buzzing about AI agents that code themselves, manage workflows, and apparently, make us all instantly productive superheroes. Sounds like a dream, right? Or perhaps, as many of us cynical old dogs (and some surprisingly wise young pups over at “Haskell for all”) are starting to suspect, it’s just another elaborate way to generate more… well, slop.
My friends, for those of you who’ve been living under a rock – perhaps programming in assembly or something equally masochistic – agentic coding is the idea that AI, not just as a code completion buddy, but as an autonomous entity, can take a high-level prompt, break it down, write the necessary code, test it, and even deploy it. It’s the dream of a fully automated software development lifecycle, where you, the human, merely provide the grand vision, and the AI handles the greasy bits. We’re talking about systems like Kiro, which Amazon’s been touting as a “spec-driven development methodology that transforms ideas into production ready systems with unprecedented clarity and speed.” Fancy words, eh?
Now, don’t get me wrong. I’m generally pretty pro-AI. I mean, who wouldn’t want a sentient toaster that also fixes your CSS? But there’s a creeping dread, a persistent itch, a feeling that with agentic coding, we might just be trading one set of problems for an entirely new, potentially more infuriating, set. We’re going “Beyond agentic coding” today, not because it’s dead, but because if we don’t look critically past the hype, we might just end up digging our own very efficient, AI-generated graves.
What Exactly Is Agentic Coding, And Why Is Everyone Going Mad About It?
Let’s quickly define what we’re talking about before my caffeine wears off and I start ranting about JavaScript frameworks again. Agentic coding, at its core, refers to AI systems designed to operate with a degree of autonomy in the software development process. It’s not just your GitHub Copilot suggesting a line; it’s an AI agent that might:
- Interpret requirements: Take a natural language description (or a formal spec, if you’re lucky enough to have one that detailed) and understand the desired outcome.
- Break down tasks: Decompose the problem into smaller, manageable sub-tasks.
- Generate code: Write code snippets, functions, classes, and even entire modules.
- Test and debug: Create test cases, run them, identify errors, and attempt to fix them.
- Refactor and optimize: Improve the quality, performance, or maintainability of the generated code.
- Integrate and deploy: Potentially push changes to a repository, trigger CI/CD pipelines, and monitor deployment.
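To make the loop above concrete, here’s a toy sketch of it in JavaScript. Everything in it is an assumption for illustration: `callModel` and `runTests` are hypothetical stand-ins for an LLM API and a test runner, and no real agent framework is anywhere near this simple.

```javascript
// Toy sketch of an agentic coding loop. `callModel` and `runTests` are
// hypothetical stubs, not any real vendor's API.
function runAgent(spec, maxIterations = 3) {
  const callModel = (prompt) => `// code generated for: ${prompt}\n`; // stub LLM
  const runTests = (code) => ({ passed: true, failures: [] });       // stub runner

  // 1. Interpret + decompose: in reality the model plans sub-tasks itself.
  const tasks = [`scaffold: ${spec}`, `implement: ${spec}`];

  // 2. Generate code per sub-task.
  let code = tasks.map((t) => callModel(t)).join("");

  // 3. Test, then 4. self-correct until passing or out of budget.
  for (let i = 0; i < maxIterations; i++) {
    const result = runTests(code);
    if (result.passed) return { code, status: "ready-for-human-review" };
    code = callModel(`fix: ${result.failures.join(", ")}`);
  }
  return { code, status: "needs-human-intervention" };
}
```

The interesting failure modes all live in the stubs: real models plan badly, real test suites miss bugs, and the "needs-human-intervention" branch fires far more often than the marketing admits.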
The promise is alluring. Imagine telling an agent, “Build me a microservice that processes user uploads, scales automatically, and stores data securely in a geo-redundant database,” and then sipping your latte while it churns out production-ready code. This “form factor of agentic coding is pretty ideal,” as one Redditor wisely noted, and it’s “not going anywhere.” It’s pitched as the ultimate evolution of productivity, moving us “beyond TDD & Spec-Driven Development” into a new era of “Agentic Engineering.” Companies are even publishing “Agentic Engineering Guides” after allegedly churning out “990k LOC” in 18 months using these methods. The idea of “hands-off code review” by an agent that also creates the code? It’s a tantalizing glimpse into a future of seemingly effortless development.
But let me tell you, “effortless” usually means someone, somewhere, is going to have to put in triple the effort later to fix the “effortless” mess.
The Grand Promises vs. The Gritty Reality: Where Agentic Coding Falls Short
Okay, let’s peel back the layers of marketing hype and get to the core of why agentic coding, for all its dazzling potential, is also making many of us veterans clench our teeth.
1. The Deskilling Dilemma: Are We Training Engineers or Button-Pushers?
This is perhaps the loudest gong being struck against agentic coding. As the “Haskell for all” piece succinctly put it, “My consistent impression is that agentic coding… harm[s] learning.” If AI agents are doing the heavy lifting – generating code, structuring solutions, even debugging – what exactly are human engineers learning?
Think about it. The process of becoming a skilled developer isn’t just about syntax. It’s about problem-solving, architectural thinking, understanding complex data structures, debugging intricate issues, and developing an intuition for what makes good, maintainable code. When an agent abstracts away these cognitive challenges, we risk producing a generation of “prompt engineers” who can issue commands but lack the foundational understanding to actually reason about the code.
“If an agent is writing all the code, testing all the code, fixing all the bugs, what are you doing? Are you still an engineer? Or are you just a highly-paid interpreter of ambiguous business requirements, constantly battling the AI’s creative interpretations?”
The human brain thrives on challenge. Remove the challenge, and you remove the growth. We learn by struggling, by making mistakes, by tracing through a debugger for hours until that “Aha!” moment hits. If an AI just hands us the “correct” answer (or what it thinks is the correct answer), we lose that crucial developmental loop. We’re essentially creating an intellectual laziness epidemic, where the next generation of engineers might be brilliant at prompt design but utterly stumped when faced with a legacy codebase written by actual humans.
2. The “Vibe-Coding” Vortex and the AI Slop Scandal
Ah, “vibe-coding.” A brilliant term coined, I suspect, by someone who’s spent too much time sifting through AI-generated garbage. This refers to the output of AI that looks plausible, feels right, but lacks the deeper understanding, robustness, and specific context required for real-world applications. Several articles, like “Beyond Vibe Coding: Amazon Introduces Kiro, the Spec-Driven Agentic AI IDE” and “Beyond Vibe Coded AI Slop: Agentic Workflows For Professionals,” hint at this problem, trying to distinguish “good” agentic output from “bad.”
The reality is, Large Language Models (LLMs) are pattern-matching machines. They are incredibly good at predicting the next token, at generating code that conforms to patterns they’ve seen. What they are not good at is true understanding, nuanced business logic, long-term architectural foresight, or critical thinking about edge cases that weren’t explicitly covered in their training data or your (likely incomplete) prompt.
This often leads to code that:
* Is generic: Lacks specific optimizations or domain-specific knowledge crucial for performance or maintainability.
* Contains subtle bugs: Appears correct on the surface but fails under specific, non-obvious conditions.
* Is inefficient: Solves a problem but uses an unnecessarily complex or resource-intensive approach.
* Is hard to maintain: Follows a common pattern but doesn’t align with existing codebase conventions or principles.
* Introduces security vulnerabilities: A common pattern might include an insecure practice the AI isn’t trained to flag.
Essentially, you get a lot of code that works 80% of the time, 80% of the way, for 80% of the scenarios. That remaining 20%? That’s where the humans spend 80% of their time debugging, fixing, and refactoring the AI’s “brilliant” work. It’s like asking a talented mimic to write a novel; they can perfectly imitate the style, but they lack the original thought and depth.
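To make the “subtle bugs” point concrete, here’s an invented example of the kind of plausible-looking function an agent might emit. The function and its numbers are made up for illustration; nothing here comes from any real agent’s output.

```javascript
// Invented example of plausible-looking agent output: it passes the
// obvious tests and reads cleanly...
function applyBulkDiscount(price, quantity) {
  // Bulk discount: 10% off for orders of 10 or more items.
  const rate = quantity >= 10 ? 0.1 : 0;
  return price * quantity * (1 - rate);
}

// ...but nothing validates the inputs. A refund-style call with a
// negative quantity "succeeds" and returns a nonsensical total:
//   applyBulkDiscount(100, -5) → -500
// That silent edge case is exactly where the human's 80% of debugging
// time goes.
```

The happy path is flawless; the edge case nobody prompted for is where production falls over.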
3. The Black Box of Agentic Debugging: When the AI Breaks Its Own Code
So, an agent writes 10,000 lines of code based on your loosely defined spec. It then tests it, finds a bug, and fixes it. Sounds great! Until the bug resurfaces in production, or a new, more insidious bug emerges. Now, you have to debug it. But this isn’t code you wrote, or even code written by a human colleague whose thought process you can try to infer. This is code generated by a probabilistic model.
Debugging AI-generated code can be like trying to understand the dreams of an alien. The logic might be sound, but the reasoning behind the choices is opaque. If the agent’s internal “thought process” (i.e., its chain of prompts and self-corrections) isn’t transparent, you’re left with a massive blob of code that you have to treat as a black box. This dramatically increases the cognitive load for human engineers and turns what should be a productive collaboration into a frustrating forensic exercise.
```javascript
// AI-generated code block 1:
function processUserData(user, data) {
  // ... 500 lines of 'optimal' but context-less logic ...
}

// AI-generated code block 2:
function calculateDiscount(item, quantity) {
  // ... Another 300 lines of highly specific but undocumented logic ...
}

// Human trying to debug: "Wait, why is `processUserData` calling
// `calculateDiscount` directly and not through the `PricingService`?
// And why does it use a global variable `TAX_RATE_VAT_EU_2024_Q3_V2`
// that was deprecated last week?!"
```
This leads us to the critical point: the human always remains ultimately responsible. If the agent ships a critical bug, it’s not the agent getting fired. It’s the human engineer or manager who approved the agent’s output. This burden of responsibility, without commensurate understanding or control over the output, is a recipe for burnout.
4. UX Annoyances: The Chatbot Problem Reborn
One insightful comment on Lobsters pointed out, “I find chatbox-style applications annoying, but it might just be because I’m associating it with those online helpdesk type of things.” And they’re not wrong! Many agentic interfaces, especially early iterations, default to conversational models. While great for high-level interaction, for detailed, iterative coding tasks, a chat interface can be incredibly inefficient.
Imagine trying to refine a complex algorithm by typing paragraphs back and forth. You lose the visual cues of an IDE, the immediate feedback of syntax highlighting, the power of interactive debugging tools. While sophisticated IDEs like Kiro are attempting to integrate agents more seamlessly, the fundamental challenge remains: how do you create an intuitive, efficient human-computer interface for autonomous code generation that doesn’t feel like talking to a particularly verbose, slightly dense customer service bot?
The Agentic Engineer: From Code Crafter to Orchestrator?
If agentic coding is truly the future, then the role of the engineer must change. The articles talking about “managing the coding agent” rather than “hand crafting all the code” are on the right track. But what does “managing” entail? It’s more than just typing a prompt. It requires:
- Deep Architectural Understanding: You need to understand the system’s overall architecture to ensure the agent’s output fits seamlessly and doesn’t introduce technical debt or violate design principles.
- Meticulous Specification Design: If the agent is “spec-driven,” then the quality of its output hinges entirely on the clarity, completeness, and correctness of your specifications. This is a skill in itself, often harder than writing the code!
- Critical Evaluation of AI Output: You can’t just trust the AI. You need the expertise to critically review its generated code for correctness, efficiency, security, and maintainability. This requires more skill, not less.
- Advanced Debugging of Agent Failures: When the agent fails to produce the desired outcome, you need to debug why it failed – was it the prompt? The context? A limitation of the model? This is a meta-debugging skill.
- Orchestration and Toolchain Management: Integrating agents into existing CI/CD pipelines, managing their access to resources, and ensuring their outputs are validated and deployed responsibly.
So, the “agentic engineer” isn’t a lesser engineer; they’re an engineer operating at a higher level of abstraction, managing a complex AI system. It’s like moving from being a skilled carpenter to being an architect who can also occasionally build a perfect dovetail joint if needed. But let’s be honest, many companies will see this as an opportunity to reduce headcounts or push more work onto fewer people, without acknowledging the immense cognitive load this new role demands.
Beyond Agentic Coding: The Future Isn’t About Replacement, It’s About Augmentation
So, if autonomous agentic coding has all these hairy bits, where do we go beyond it? The answer, my friends, isn’t to reject AI entirely, but to adopt a more nuanced, intelligent approach. It’s about moving from “agentic coding” to “agentic software engineering beyond code.”
1. AI as a Super-Powered Assistant, Not an Autonomous Overlord
The sweet spot for AI in software development lies in augmentation, not replacement. Think of it as an Iron Man suit for your brain.
* Intelligent Code Generation (with human oversight): AI suggests, generates snippets, completes functions, and even prototypes entire modules – but the human always has the final say, the critical review, and the responsibility to integrate and refine. It’s like having the fastest junior dev ever, who still needs an experienced senior to review their PR.
* Advanced Code Review: Instead of an agent creating the code, let the AI be an extremely thorough code reviewer. It can spot security vulnerabilities, performance bottlenecks, stylistic inconsistencies, and potential bugs far faster than any human. This aligns with the “hands-off code review” idea but keeps the creative author human.
* Smart Testing and QA: AI excels at generating comprehensive test cases (unit, integration, end-to-end), identifying edge cases, and even fuzzing inputs. It can automate regression testing and help validate complex behaviors. This offloads a huge amount of grunt work from humans, allowing them to focus on exploratory testing and critical scenarios.
* Context-Aware Documentation: Agents can analyze codebases and generate initial drafts of documentation, API references, and internal guides. This saves countless hours and improves code comprehension.
* Architectural Analysis and Design Validation: This is where the real potential lies. Imagine an AI that can analyze your proposed system architecture, identify potential bottlenecks, security flaws, scalability issues, or suggest alternative design patterns before a single line of code is written. This is “agentic software engineering beyond code” in action. It elevates AI from a mere coder to a strategic advisor.
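The fuzzing idea from the testing bullet above can be sketched in a few lines: generate random inputs and check invariants, instead of hand-writing every case. The `normalizeWhitespace` function here is a made-up subject under test, and this hand-rolled loop is a crude stand-in for real property-based testing libraries and coverage-guided fuzzers.

```javascript
// Made-up function under test.
function normalizeWhitespace(s) {
  return s.trim().replace(/\s+/g, " ");
}

// A tiny hand-rolled fuzzer: random strings in, invariants checked.
// Real tooling is far more sophisticated, but the shape is the same.
function fuzz(iterations = 1000) {
  const chars = "ab \t\n";
  for (let i = 0; i < iterations; i++) {
    let input = "";
    const len = Math.floor(Math.random() * 20);
    for (let j = 0; j < len; j++) {
      input += chars[Math.floor(Math.random() * chars.length)];
    }
    const out = normalizeWhitespace(input);
    // Invariants: no leading/trailing whitespace, no whitespace runs.
    if (out !== out.trim()) throw new Error(`edge whitespace for ${JSON.stringify(input)}`);
    if (/\s\s/.test(out)) throw new Error(`whitespace run for ${JSON.stringify(input)}`);
  }
  return "ok";
}
```

This is exactly the grunt work worth delegating: the machine hammers the input space, while the human decides which invariants actually matter.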
The key is that the AI provides insights and proposals, but the ultimate decision-making, the architectural vision, and the accountability remain firmly with the human engineer. We leverage AI’s speed and pattern-matching abilities, but we retain our unique human capacity for creativity, intuition, and holistic understanding.
2. From Vibe-Coding to Validated Specs: The Human’s Role in Clarity
If we are to embrace agentic tools, the emphasis must shift dramatically to the quality of our inputs. “Spec-driven development,” as touted by Kiro, is the direction we need to go, but with a critical caveat: the specs must be meticulously crafted by humans. This means investing in:
* Domain Expertise: Engineers must deeply understand the problem domain, business rules, and user needs to translate them into unambiguous specifications.
* Formal Specification Languages: Moving beyond vague natural language prompts to more structured, testable specifications that minimize ambiguity for both human and AI interpretation.
* Human-AI Collaboration on Specs: AI can assist in validating specifications, checking for internal consistency, identifying missing requirements, and even generating test cases from the spec itself. This ensures the spec is robust before any code generation occurs.
This isn’t about feeding a vague prompt and hoping for the best (the “vibe-coding” approach). It’s about precision, clarity, and intentional design, with AI acting as a co-pilot to ensure that clarity.
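As a small illustration of what “more structured, testable specifications” might look like, here’s a hypothetical sketch: the spec is plain data that both a human and a validation step can check before any code generation runs. The field names and the check are invented for this example, not any real spec-driven tool’s format.

```javascript
// Hypothetical structured spec: plain data, checkable for completeness
// before a single line of code is generated.
const spec = {
  name: "upload-service",
  inputs: [{ field: "file", type: "binary", maxSizeMb: 50 }],
  behaviors: [
    { given: "a valid upload", then: "store it and return 201" },
    { given: "a file over maxSizeMb", then: "reject with 413" },
  ],
  nonFunctional: { storage: "geo-redundant", scaling: "automatic" },
};

// A trivial consistency check: every behavior must state a trigger and
// an outcome. Real spec tooling would go much further, but even this
// catches the vagueness that vibe-coding prompts are made of.
function validateSpec(s) {
  const problems = [];
  const behaviors = s.behaviors ?? [];
  if (behaviors.length === 0) problems.push("no behaviors defined");
  for (const b of behaviors) {
    if (!b.given || !b.then) problems.push("behavior missing given/then");
  }
  return problems;
}
```

The point isn’t the format; it’s that ambiguity becomes something you can mechanically detect, instead of something the agent silently improvises around.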
3. The New Skillset: Orchestration, Validation, and Strategic Vision
The future engineer, operating beyond agentic coding, will be an orchestrator. Their skills will be less about memorizing syntax and more about:
* Systems Thinking: Understanding how all the pieces of a complex system interact, including human teams, AI agents, and various tools.
* Critical Thinking and Problem Decomposition: The ability to break down complex problems into modular, manageable components suitable for both human and AI processing.
* Prompt Engineering for Clarity and Specificity: Not just “make a web app,” but “design an OAuth 2.0 flow for a multi-tenant SaaS application with these specific security protocols and user roles, integrating with our existing payment gateway API.”
* Auditing and Validation: The rigorous examination of AI-generated artifacts – code, tests, documentation, architecture – to ensure quality and adherence to standards.
* Ethical AI Use: Understanding the biases, limitations, and ethical implications of using AI in critical systems.
This demands a higher level of cognitive function, pushing engineers up the value chain. It’s challenging, yes, but it’s also immensely rewarding. It means focusing on the what and the why, rather than getting bogged down in the how.
My ‘Wong Edan’ Vision: Don’t Let the Machines Make You Madder Than I Am!
Look, the seismic shift that “Agentic AI is poised to usher in” in Software Engineering is real. We’re on the cusp of a new era. But let’s not be ‘wong edan’ (crazy people) and dive headlong into blind automation. The seductive allure of “unprecedented clarity and speed” can blind us to the fundamental human elements that make software truly great: creativity, empathy for the user, intuition, and the sheer joy of solving a complex problem with your own ingenuity.
The path beyond agentic coding isn’t about throwing the AI baby out with the bathwater. It’s about intelligently integrating these powerful tools to enhance human capability, not diminish it. It’s about leveraging AI to handle the tedious, repetitive, and pattern-matching tasks, freeing up human engineers to focus on the truly complex, creative, and strategic challenges.
Imagine a world where:
* You sketch out a high-level system diagram, and an AI analyzes it, suggesting improvements and generating a skeleton codebase.
* You write the critical business logic, and an AI automatically generates all the boilerplate, tests, and documentation around it.
* You define your security requirements, and an AI rigorously audits your entire codebase, flagging every potential vulnerability.
* You focus on the user experience and architectural elegance, while AI handles the mundane integration headaches.
This future isn’t one where engineers are obsolete. It’s one where engineers are empowered to be more creative, more strategic, and more impactful than ever before. It’s a future where we move “beyond vibe-coding” to truly professional, validated, and human-guided intelligence.
So, let’s proceed with caution, critical thinking, and a healthy dose of skepticism. Embrace the tools, but never forget the human touch. Otherwise, you’ll end up with a codebase so utterly “agentic,” it’ll drive you madder than I am! And trust me, you don’t want that. Now, if you’ll excuse me, my coffee’s cold, and I have to go debug why my toaster-AI tried to order 50 pounds of artisanal sourdough. It’s always something with these agents, isn’t it?