Wong Edan's

The Rise of Agentic Coding: Why I Built Mema

February 08, 2026 • By Azzar Budiyanto





Let’s be honest. For the last forty years, we’ve been programming like highly trained monkeys. Smart monkeys, sure. Monkeys with mechanical keyboards and a concerning caffeine addiction. But monkeys nonetheless. We sit in front of a glowing box, translating pristine, logical business requirements into a language a rock with lightning trapped inside it can understand. And what do we get for our troubles? YAML files. Thousands of them. A digital purgatory of indentation errors and cryptic key-value pairs.

We’ve built cathedrals of abstraction, towers of frameworks, and labyrinthine CI/CD pipelines, all in an effort to what? Write code slightly faster so we can have more time to debug the code we just wrote. It’s a self-licking ice cream cone of complexity. We are hamsters on a wheel, mistaking our furious spinning for forward progress. I call this the Great Stagnation. We’ve optimized the syntax, but the semantics? The intent? It’s still lost in translation.

I looked at this landscape, at the legions of brilliant minds arguing about whether to use tabs or spaces while the world burns, and I thought: “Ini gila.” This is madness. The problem isn’t the programmer. The problem isn’t the language. The problem is the verb. We shouldn’t be “writing” code. We should be directing it.

The Tyranny of the Explicit

Modern programming is a masterclass in micromanagement. We tell the computer what to do, yes, but we also tell it how to do it, step by agonizing step. `for i = 0 to array.length`, `if (x > 5)`, `await fetch(url)`. We are digital puppeteers, but our strings are made of brittle syntax and our puppets are dumb as a bag of hammers. Forget one semicolon, and the whole show comes crashing down.

This is where the real cost lies. The cognitive overhead. We hold the entire state of the machine in our heads, a fragile house of cards that collapses the moment someone asks us what we want for lunch. We spend more time wrestling with boilerplate, deciphering cryptic error messages, and navigating byzantine codebases than we do solving the actual problem. We’re not knowledge workers; we’re complexity janitors.

And then came the AI revolution. Suddenly, we had models that could write code. And what did we do? We asked them to write more boilerplate. We used them as fancy autocompletes, a slightly smarter monkey to our slightly dumber monkey. We took a tool that could reason and turned it into a glorified snippet generator. It’s like discovering fire and using it to warm your coffee, one cup at a time. Pathetic.

Wong Edan Presents: Agentic Coding

This is where the “Wong Edan” (Javanese for “crazy person”) thinking comes in. What if we stopped being puppeteers and started being directors? What if we could communicate our intent and have an agent—a tireless, brilliant, slightly unhinged digital entity—figure out the rest?

This is the core of Agentic Coding. It’s not about generating code; it’s about generating outcomes. You don’t write a script to deploy a server. You tell the agent, “I need a web server for a high-traffic Node.js application, optimized for low-latency in Southeast Asia. Make it secure, scalable, and don’t be cheap, but don’t sell my kidney either.”

The agent doesn’t just write a Terraform script. It understands the nuance. It queries cloud providers for pricing. It provisions the VPC, sets up the security groups, configures the load balancer, deploys the application, and sets up monitoring and alerts. It might even argue with you about your choice of database, presenting a well-reasoned argument for why Postgres is superior to your sentimental attachment to MySQL. It has agency.

This is a fundamental shift from imperative to declarative, but on a whole new level. It’s not just declaring a state; it’s declaring a goal. The “how” becomes the agent’s problem. Your job is to have good taste in goals.
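To make the distinction concrete: classic declarative tools like Terraform let you declare a target *state*; agentic coding declares a *goal* plus constraints, and the plan is derived. Here’s a deliberately tiny Python sketch of that idea (every name and rule below is an illustrative assumption, not anything Mema-specific):

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal: an outcome plus constraints, not a list of steps."""
    outcome: str
    constraints: dict = field(default_factory=dict)

def plan(goal: Goal) -> list[str]:
    """A trivial 'planner' that derives steps from the goal.
    A real agent would reason, query cloud APIs, and iterate."""
    steps = ["provision_network", "deploy_app"]
    if goal.constraints.get("max_latency_ms", 1000) < 100:
        steps.insert(1, "add_regional_cache")
    if goal.constraints.get("secure", False):
        steps.insert(0, "harden_security_groups")
    return steps

goal = Goal(
    outcome="web server for a high-traffic Node.js app",
    constraints={"max_latency_ms": 50, "secure": True, "region": "asia-southeast1"},
)
print(plan(goal))  # the "how" (the step list) is derived, not hand-written
```

The point of the toy: change the constraints and the steps change, without anyone editing a script. That is the whole bet.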

And So, I Built Mema

I couldn’t find a tool that embraced this philosophy. So, in a fit of caffeine-fueled hubris, I built one. I call it Mema, from the Javanese word for “to expand” or “to swell,” because it takes a tiny seed of intent and grows it into a fully-realized reality.

Mema is not an IDE. It’s not a framework. It’s a partner. It’s a command-line interface to a legion of specialized AI agents that live on my ridiculously overpowered home server. When you work with Mema, you’re not writing code. You’re having a conversation.

At its heart, Mema is built on three crazy principles:

  1. The Principle of Intent Distillation: Mema’s first job is to understand what you really want, not just what you say. It uses a custom-trained model I call the “Semantic Distiller” to break down your natural language requests into a structured “Intent Graph.” This graph captures not just the task, but the constraints, the unspoken assumptions, and the desired quality attributes. “Make it fast” is translated into measurable latency targets. “Make it secure” becomes a checklist of security best practices.
  2. The Principle of Composable Agents: There is no one “God” agent. Mema is a swarm. There’s `TerraformAgent`, who dreams in HCL. There’s `CodeAgent`, a polyglot programmer who can write memory-safe Rust or elegant Elixir. There’s `SecurityAgent`, relentlessly paranoid, who sees vulnerabilities everywhere (and is usually right). There’s `DocsAgent`, a frustrated technical writer who will nag you until your README is perfect. Mema’s “Cognitive Weaver” selects the right team of agents for the job and orchestrates their collaboration. They argue, they refactor each other’s work, and they learn from their mistakes.
  3. The Principle of Radical Transparency: This is not a black box. Mema explains every decision it makes. It produces not just the final code or infrastructure, but a detailed “Audit Trail” that reads like a Socratic dialogue. It shows you the paths it considered and rejected. It links to the documentation it read. If it makes a mistake, it writes a post-mortem. You can interrupt it, question its logic, and force it to take a different approach. It’s your agent, and you are the ultimate authority.
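The Semantic Distiller isn’t public, but the shape of the first principle is easy to sketch: vague adjectives get mapped to measurable targets inside an Intent Graph node. The rules and schema below are toy illustrations I’m making up for this post, not Mema’s actual model:

```python
# Toy "intent distillation": vague adjectives become measurable targets.
# Illustrative sketch only; the real Semantic Distiller is a trained model.

DISTILLATION_RULES = {
    "fast": {"p99_latency_ms": 100},
    "secure": {"tls": "1.3", "dependency_scanning": True},
    "scalable": {"autoscaling": True, "max_replicas": 20},
}

def distill(request: str) -> dict:
    """Turn a natural-language request into a structured intent node."""
    intent = {"task": request, "constraints": {}}
    for adjective, targets in DISTILLATION_RULES.items():
        if adjective in request.lower():
            intent["constraints"].update(targets)
    return intent

node = distill("Deploy a fast, secure API gateway")
print(node["constraints"])  # "fast" and "secure" became concrete targets
```

A lookup table is obviously a caricature of an LLM, but the contract is the same: fuzzy language in, checkable constraints out.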

A Glimpse Under the Hood

You want technical details? Fine. The Mema CLI is a lightweight Go binary. Your prompt is sent to the “Cognitive Weaver,” a central orchestration engine running on a Kubernetes cluster that I definitely pay for legally. The Weaver uses the Intent Graph to generate a dynamic execution plan.

Each step in the plan invokes a specific agent. These agents are containerized, specialized LLM instances fine-tuned on a diet of Stack Overflow, a curated library of O’Reilly books, and the entire public corpus of GitHub (with a very strong profanity filter, mostly). The `CodeAgent`, for example, has been fine-tuned on millions of pull request reviews, giving it an uncanny ability to spot off-by-one errors and suggest more idiomatic patterns.
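A skeletal version of that dispatch loop, in the spirit of the Cognitive Weaver, looks something like this. The registry keys, plan steps, and `Agent.run` stub are all hypothetical; a real agent step would call out to a fine-tuned model:

```python
# Skeletal orchestration loop in the spirit of the "Cognitive Weaver".
# Agent names mirror the post; everything else is a made-up illustration.

class Agent:
    def __init__(self, name: str):
        self.name = name

    def run(self, step: str) -> str:
        # A real agent would invoke its fine-tuned model here.
        return f"{self.name} completed: {step}"

REGISTRY = {
    "infra": Agent("TerraformAgent"),
    "code": Agent("CodeAgent"),
    "security": Agent("SecurityAgent"),
    "docs": Agent("DocsAgent"),
}

def weave(plan: list[tuple[str, str]]) -> list[str]:
    """Dispatch each (kind, step) pair to the matching agent, in order,
    collecting results as a primitive audit trail."""
    return [REGISTRY[kind].run(step) for kind, step in plan]

audit_trail = weave([
    ("infra", "provision VPC"),
    ("code", "scaffold service"),
    ("security", "scan dependencies"),
    ("docs", "draft README"),
])
for line in audit_trail:
    print(line)
```

Note that the trail falls out of the loop for free: every dispatch produces a record, which is what makes the Radical Transparency principle cheap rather than heroic.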

The real secret sauce is the “Semantic Cache.” When an agent solves a problem, the solution—along with the Intent Graph and the Audit Trail—is stored in a vector database. The next time a similar intent is detected, Mema doesn’t start from scratch. It retrieves the previous solution, adapts it to the new context, and executes. It learns. Your organization’s tribal knowledge, the stuff that usually walks out the door when a senior engineer leaves, is captured and codified.
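The cache mechanics are standard vector-store fare: embed the intent, nearest-neighbor lookup, reuse above a similarity threshold. Here’s a toy version where the “embedding” is a crude bag-of-words vector instead of a learned model (real systems would use a proper embedding model and a vector database; the vocabulary and threshold are arbitrary for illustration):

```python
import math

# Toy semantic cache: embed an intent, look up the nearest prior solution.
# The bag-of-words "embedding" stands in for a learned embedding model.

VOCAB = ["deploy", "go", "python", "postgres", "mysql", "service", "queue"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.7):
        self.entries = []  # (vector, solution) pairs
        self.threshold = threshold

    def store(self, intent: str, solution: str) -> None:
        self.entries.append((embed(intent), solution))

    def lookup(self, intent: str):
        v = embed(intent)
        best = max(self.entries, key=lambda e: cosine(v, e[0]), default=None)
        if best and cosine(v, best[0]) >= self.threshold:
            return best[1]  # reuse and adapt instead of starting from scratch
        return None

cache = SemanticCache()
cache.store("deploy go service postgres", "terraform-plan-0042")
print(cache.lookup("deploy go service mysql"))  # similar enough to reuse
```

The interesting design choice is the threshold: too low and you get stale, mismatched solutions; too high and the cache never fires and every request starts cold.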

The process feels like magic. You type: `mema create-service --name=user-profile --lang=go --db=postgres --deploy=gcp`. Mema asks you a few clarifying questions (“What region? Any specific performance requirements?”). Then, you watch the log stream. You see agents being dispatched. You see code being written, tested, and pushed to a new Git repository. You see Terraform plans being generated and applied. You see a CI/CD pipeline being configured. Ten minutes later, you get a Slack message with the URL of your new service, a link to the Grafana dashboard, and a gentle reminder from `DocsAgent` that you still haven’t written the API documentation.

This Is the Future. Stop Being a Monkey.

Mema is my proof of concept. It’s my declaration of war against the tyranny of the explicit. It’s a bet that the future of software development is not about writing more code, but about having better ideas.

Agentic Coding will free us from the drudgery. It will allow us to operate at a higher level of abstraction, to focus on the “what” and the “why,” not the “how.” It will make senior-level expertise available to every developer. It will turn us from code monkeys into system architects and product visionaries.

Some people find this scary. They fear the agent will replace them. They are missing the point. The agent is a tool. A bicycle replaces walking, but it doesn’t make your legs obsolete. It lets you go further, faster. Mema is a bicycle for the mind.

So, the next time you find yourself debugging a YAML file at 2 AM, ask yourself: Is this really the best use of my intelligence? Or am I just a very smart monkey, spinning a wheel and going nowhere? The age of agentic coding is here. It’s time to get off the wheel.