Wong Edan's

The AI Brains: Are We Training Skynet or a Really Smart Toaster?

February 08, 2026 • By Azzar Budiyanto

The AI Circus is in Town: From Fancy Algorithms to Full-Blown Digital Consciousness (Maybe?)

Alright, you magnificent collection of organic matter, let’s talk about the digital elephant in the room. Or rather, the entire freaking zoo that is Artificial Intelligence. You’ve heard the whispers, seen the headlines, probably even had a chatbot try to flirt with you. AI, AGI, training, automation – these aren’t just buzzwords anymore; they’re the ingredients in a technological stew that’s simmering on humanity’s stove, and frankly, I’m not sure if it’s going to be a gourmet meal or a recipe for disaster. But hey, I’m Wong Edan, and if there’s one thing I know, it’s that being a little ‘gila’ (that’s “crazy,” for the uninitiated) helps you see things clearer.

Right now, we’re living in what I like to call the “AI adolescence.” It’s awkward, prone to overexcitement, and sometimes it says something profoundly stupid, then something profoundly brilliant. What most people call “AI” today is generally what we in the biz call Narrow AI or Weak AI. Think of it as a prodigy in a single field – a chess grandmaster, a language translator, an image generator. It’s incredibly good at one specific thing. Like that one relative who can cook amazing fried rice but can’t change a lightbulb to save their life.

But then there’s the siren song, the holy grail, the technological equivalent of finding the One Ring: Artificial General Intelligence (AGI). This isn’t just about making a fancy chatbot that can write your terrible poetry; it’s about a system that can theoretically do anything a human mind can do, learn any intellectual task, and adapt to any situation. And that, my friends, is where things get really, really interesting – and potentially a little bit terrifying. The journey to AGI is paved with data, computation, and an ever-increasing amount of what we call “training,” all culminating in the ultimate promise (or threat) of automation. So buckle up, because we’re diving deep into this digital rabbit hole.

AI Today: Your Automated Sidekick (and Occasional Overlord)

Let’s ground ourselves in the present. What does AI actually do right now? Well, if you’ve been anywhere near a computer or a smartphone in the last couple of years, you’ve interacted with it. From recommending your next terrible binge-watch on Netflix to powering the spam filter that keeps your inbox vaguely sane, AI is embedded.

The current poster child, of course, is generative AI. We’re talking ChatGPT, Google Gemini (formerly Bard), Microsoft Copilot – the whole gang. These are the tools that are democratizing previously complex tasks. Companies like the American Graphics Institute (AGI, confusingly not the AGI, but a very relevant acronym here) are already running courses teaching you how to “write, summarize, organize, and automate tasks using generative AI.” This isn’t some futuristic sci-fi; it’s here, now, in your business workshop.

Think about it:

  • Content Creation: Need a blog post? An email draft? A catchy headline? Generative AI can spit out decent copy in seconds. It’s not always Hemingway, but it gets the job done when you’re staring at a blank page.
  • Information Synthesis: Drowning in a sea of documents? AI can summarize lengthy reports, extract key information, and even organize your chaotic notes into something resembling coherence.
  • Data Handling: Ever struggled with Excel? (Don’t lie, we all have). AGI’s Excel AI courses teach you how to leverage tools like Microsoft Copilot and OpenAI’s ChatGPT to “automate tasks, analyze data using AI,” and probably even make your pivot tables less soul-crushing. This is where automation isn’t just a buzzword; it’s a productivity superpower.
  • Workflow Automation: Beyond just generating text, current AI agents, like those leveraging AutoGPT, are being used to “generate high-quality text, automate workflows, analyze data, automate processes and more.” This means connecting different software, making decisions based on inputs, and executing multi-step tasks without human intervention. Imagine scheduling meetings, sending follow-up emails, and updating CRM records, all while you’re busy contemplating the meaning of life (or just your next coffee).

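That last bullet – chaining steps without human intervention – is easier to grasp in code. Here’s a minimal sketch of such a workflow in plain Python. The `summarize` and `draft_reply` functions are deliberately dumb stand-ins I made up for illustration; a real agent would call an LLM API at each of those steps.

```python
# A toy multi-step workflow "agent": summarize an email, draft a reply,
# log it to a CRM record. The AI steps are stand-ins, not a real API.

def summarize(text: str, max_words: int = 8) -> str:
    """Stand-in for an AI summarizer: keep the first few words."""
    return " ".join(text.split()[:max_words]) + "..."

def draft_reply(summary: str) -> str:
    """Stand-in for an AI drafting step."""
    return f"Thanks for your note. Re: {summary} I'll follow up shortly."

def update_crm(record: dict, note: str) -> dict:
    """Deterministic bookkeeping step: log the interaction."""
    record.setdefault("notes", []).append(note)
    return record

def run_workflow(email_body: str, crm_record: dict) -> dict:
    """Chain the steps: summarize -> draft -> record, no human in the loop."""
    summary = summarize(email_body)
    reply = draft_reply(summary)
    return update_crm(crm_record, f"Auto-replied: {reply}")

record = run_workflow(
    "Hi, can we reschedule Thursday's demo to Friday afternoon?",
    {"name": "Acme"},
)
print(record["notes"][0])
```

Trivial, yes – but swap each stand-in for an LLM call and you have the skeleton of every “agent” product currently being sold.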
This level of automation, while impressive, still falls squarely within the “Narrow AI” category. It’s purpose-built. It excels at the specific tasks it was trained for. It’s the highly specialized technician, not the general manager. Yet, even this narrow automation is already sending shockwaves through the job market.

“Is AI Really Going to Take Over Jobs? Or Is This Just Another Tech Hype Cycle?”

That’s a question everyone’s asking, and frankly, if you’re not asking it, you’re probably not paying attention. The headlines are screaming about AI “automating tasks, causing layoffs across various industries, and changing the way companies operate.” It’s not just hype; it’s happening. From customer service to coding, AI is shifting the goalposts. Your job might not disappear, but its nature is definitely changing. The smart ones are learning to wield AI, not just fear it.

So, yes, AI is already deeply integrated into our daily lives, making us more efficient, more productive, and sometimes, a little lazier. But this is just the appetizer. The main course, the grand buffet, is AGI.

The Holy Grail: AGI and the Quest for True Digital Brainpower

Now, let’s talk about AGI – Artificial General Intelligence. This is the big one, the ultimate prize, the technological equivalent of creating life, but out of silicon and algorithms instead of primordial soup.

The consensus definition, often reiterated in discussions around the ARC Prize, is pretty clear: AGI is “a system that can automate all cognitive tasks.” Let that sink in. Not just some tasks, not just most tasks, but all cognitive tasks. If it involves thinking, learning, reasoning, problem-solving, understanding, or creating – AGI should be able to do it, and potentially, do it better than any human.

This is the fundamental difference from our current crop of Narrow AIs. A chatbot can write an essay, but it doesn’t understand the concepts it’s writing about in the same way a human does. It’s a master pattern matcher. An AGI, on the other hand, would possess a deeper, more flexible understanding, allowing it to:

  • Learn New Skills Autonomously: Without specific retraining for every new task.
  • Reason and Problem Solve Across Domains: Apply knowledge from one area to solve problems in a completely different area.
  • Exhibit Common Sense: Navigate the messy, ambiguous world with intuition and generalized understanding.
  • Create and Innovate: Not just recombine existing data, but generate truly novel ideas and solutions.

Measuring progress towards such an elusive goal requires specific benchmarks. This is where something like ARC-AGI comes in. Its creators bill it as “the only AI benchmark that measures our progress towards general intelligence.” Unlike traditional AI benchmarks that test specific skills, ARC-AGI aims to evaluate an AI’s ability to reason and solve novel problems it hasn’t been explicitly trained on. It’s like testing a child not just on their multiplication tables, but on their ability to figure out a new puzzle or adapt to an unfamiliar social situation.

The philosophical implications of AGI are profound. What does it mean for humanity when our most cherished attribute – intelligence – can be replicated, and potentially surpassed, by a machine? Is it still intelligence if it doesn’t have consciousness, emotions, or a biological origin? These are questions that will plague philosophers, ethicists, and probably even me, as I try to wrap my head around a digital entity that might be smarter than I am (which, let’s be honest, wouldn’t take much).

From my ‘Wong Edan’ perspective, AGI isn’t just about building a bigger brain; it’s about building a fundamentally different kind of brain. One that doesn’t suffer from biological limitations, cognitive biases, or the need for sleep. The stakes are astronomically high.

The Engine Room: Training, Compute, and the Algorithmic Grind

You don’t just ‘make’ an AGI out of thin air. It’s built on a foundation of monumental effort, gargantuan data sets, and a terrifying amount of computational power. This brings us to “training,” the often-overlooked but absolutely crucial heart of AI development.

The Unsung Hero: Data and Algorithms
Imagine trying to teach a child everything they need to know by showing them only five books. It’s impossible. Similarly, AI models, especially the large language models (LLMs) that underpin generative AI, require truly colossal amounts of data. We’re talking petabytes of text, images, code, and more, scraped from the internet, digitized archives, and proprietary databases. This data is the raw material.

Then come the algorithms – the recipes, the instructions, the mathematical frameworks that tell the AI how to learn from that data. These algorithms are constantly evolving, becoming more sophisticated, more efficient, and more capable of extracting patterns and relationships from the noise. As a compute-centric framework suggests, “Software” refers to the quality of algorithms for training AI. Better algorithms mean more efficient learning, potentially reducing the brute-force compute requirements; so far, though, the trend has been toward more compute and bigger models.

The Compute-Centric Framework: A Digital Arms Race
This framework, often discussed in relation to AI takeoff speeds, emphasizes the critical role of computational power (compute) in driving AI progress. It posits that the advancements in AI are highly correlated with the amount of compute we throw at it. The more powerful the chips, the more complex the models we can train, and the faster we can iterate on new designs.

Consider these insights:

  • “Medium correlation between AGI training requirements and growth of AI investments.” This implies that as we aim for more ambitious AGI systems, the financial and resource investment in raw computational power is climbing significantly. It’s not cheap to bake a digital brain.
  • “AI automation → rapid increase in the largest training run.” This is a feedback loop. As AI itself becomes better at optimizing tasks, it can be used to optimize its own training. This means AI systems are not just automating business processes; they are automating the very process of creating better AI. This leads to increasingly massive and complex “training runs” – the processes where an AI model learns from its data.
  • “Growth rate fraction compute training.” This refers to how quickly the amount of computational power dedicated to training AI is increasing. And believe me, it’s not slowing down. We’re talking about dedicated data centers filled with specialized GPUs, consuming energy equivalent to small cities, just to teach these digital entities how to think (or simulate thinking, for now).

The “Training Run” Arms Race
Every new generation of AI model (GPT-3, GPT-4, GPT-5, and so on) requires dramatically more compute power than its predecessor; the growth is often exponential, not merely linear. The sheer scale of these training runs is mind-boggling, measured in billions of parameters and staggering totals of floating-point operations. It’s an arms race where the currency is silicon and electricity.
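To feel what exponential growth in compute actually means, here’s a back-of-the-envelope sketch. The ~4x-per-year growth factor is an illustrative assumption (published estimates of the frontier-training trend vary), not an official figure.

```python
# Back-of-the-envelope compounding: if frontier training compute grows
# ~4x per year (an illustrative assumption), how much bigger is the
# largest training run after n years?

def compute_multiplier(years: int, annual_factor: float = 4.0) -> float:
    """Total growth factor after `years` of compounding."""
    return annual_factor ** years

for n in (1, 3, 5):
    print(f"after {n} year(s): {compute_multiplier(n):,.0f}x today's largest run")
```

Five years at that rate is a thousand-fold increase. That is why the energy and silicon bills read like a small nation’s budget.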

The core concept is iterative improvement:

    LOOP (while model performance is below AGI threshold):
        1. GATHER massive datasets (text, images, video, code, etc.)
        2. DESIGN/REFINE neural network architecture (algorithms)
        3. ALLOCATE immense computational resources (GPUs, TPUs)
        4. RUN TRAINING:
            - Feed data through model
            - Model makes predictions
            - Compare predictions to ground truth
            - ADJUST model parameters (weights, biases) to reduce error
        5. EVALUATE model performance on benchmarks (e.g., ARC-AGI)
        6. OPTIMIZE training process (using AI, naturally)
    END LOOP
    

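Step 4 of that loop is, at heart, just gradient descent. Here’s a toy, runnable Python version of the whole cycle, with a one-weight linear model standing in for the neural network and mean squared error standing in for the benchmark. Every detail is illustrative; a real run swaps in a deep network, petabytes of data, and GPU clusters.

```python
# Toy version of the training loop above: a one-parameter "model" y = w*x
# learns w = 2 from data by gradient descent.

# 1. GATHER data (ground truth: y = 2*x)
data = [(x, 2.0 * x) for x in range(1, 6)]

# 2. DESIGN the "architecture": a single trainable weight
w = 0.0
learning_rate = 0.01

def evaluate(weight: float) -> float:
    """5. EVALUATE: mean squared error stands in for a benchmark score."""
    return sum((weight * x - y) ** 2 for x, y in data) / len(data)

# LOOP while performance is below our (toy) threshold
while evaluate(w) > 1e-6:
    for x, y in data:
        pred = w * x                          # 4a. model makes a prediction
        error = pred - y                      # 4b. compare to ground truth
        w -= learning_rate * 2 * error * x    # 4c. adjust parameters

print(f"learned w = {w:.4f}")  # converges near 2.0
```

Scale the single weight up to hundreds of billions of them and the single-number "benchmark" up to suites like ARC-AGI, and you have, in caricature, what those city-sized data centers are doing all day.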
This cycle, especially step 6, is where the automation of training truly kicks in. As one article noted, “AI systems automating training AI systems” is a key driver. We’re building machines that can help build even smarter machines. This self-improving loop is a critical aspect of how we get from our current “smart toaster” AIs to something truly generally intelligent.

Without this relentless focus on training, without the ever-increasing compute, and without the constant algorithmic innovation, AGI would remain a distant dream. It’s the engine room, folks, and right now, it’s running at full throttle.

Automation: The Double-Edged Blade (or, “Is My Job Safe?” Edition)

So, what’s the end game of all this AI, AGI, and training madness? In a single word: automation.

Current State: Task-Specific Efficiencies
As we discussed, current AI is already automating a significant chunk of our daily tasks. Whether it’s “automating routine tasks” with Microsoft Copilot, as taught in Kansas City AI Training courses from AGI, or leveraging AutoGPT to “automate workflows” and “automate processes,” the efficiency gains are undeniable. Businesses are streamlining operations, reducing human error, and freeing up employees from mundane, repetitive work. This is the positive spin: AI as an augmentation tool, helping us do our jobs better, faster, and with less drudgery.

But let’s be ‘Wong Edan’ for a moment. “Freeing up employees” often sounds suspiciously like “making employees redundant.”

Future Implications: AGI-Driven Transformation (and Disruption)
When we talk about AGI, we’re talking about a system that “can automate all cognitive tasks.” This isn’t just about sending automated emails; it’s about writing the code for those emails, designing the marketing strategy, managing the entire email campaign, optimizing the servers, and then perhaps even generating a new product idea based on the campaign’s success. All without human oversight, if we let it.

The implications for the job market are, to put it mildly, seismic. The common refrain, “AI is automating tasks, causing layoffs across various industries, and changing the way companies operate,” isn’t just future speculation; it’s a trend already in motion. What happens when all cognitive tasks can be automated?

  • The End of Work (As We Know It): Many jobs, from truck drivers to financial analysts, from entry-level data entry to complex legal research, rely on cognitive tasks. If AGI can do them all, what’s left for humans?
  • A Post-Scarcity Utopia? The optimistic view suggests AGI could unlock unprecedented productivity, leading to a world where basic needs are met for everyone, and humans are freed to pursue creative, artistic, or purely intellectual endeavors. Imagine scientific breakthroughs happening at warp speed, solving climate change, disease, and poverty.
  • A Dystopian Nightmare? The pessimistic view worries about mass unemployment, unprecedented wealth inequality, and a loss of human purpose. If machines do everything, do humans still have value? And who controls these hyper-intelligent, hyper-efficient systems?

Takeoff Speeds and Timelines: Are We Entering Crunch Time?
This leads us directly to the concept of “takeoff speeds.” This refers to how quickly AI capabilities accelerate once AGI is achieved, or even approached. Think of it like a rocket. It slowly rumbles, then ignites, and suddenly, it’s gone.
One article states, “The Takeoff Speeds Model Predicts We May Be Entering Crunch Time.” What does “crunch time” mean here? It means that as AI gets better at automating its own research and development (AI R&D automation), the pace of progress could become incredibly, almost unimaginably fast. An AGI could design a better AGI faster than any human team could, leading to an intelligence explosion.

Timelines are, of course, a source of constant debate, and they fluctuate wildly. For instance, an AGI timeline update “from GPT-5 (and 2025 so far)” notes that “very short timelines (< 3 years to full AI R&D automation) look roughly half as likely” as previously thought. This suggests a slight pull-back from extreme optimism/pessimism, but still, we’re talking about years, not decades, for profound shifts. If AI can automate its own improvement, even a slightly longer timeline is still an alarmingly fast one.

The “compute-centric framework” also touches on this, stating that the effects of “AI systems automating training AI” and of AI-driven automation more generally will lead to a “rapid increase in the largest training run.” This self-reinforcing loop—AI creating better AI, which in turn accelerates training and development—is what makes the “takeoff” concept so compelling and so concerning.

My ‘Wong Edan’ take on timelines? Who the hell knows? We’re building something fundamentally new. Predicting its exact trajectory is like trying to predict the weather on Mars with a soggy biscuit. What we do know is that the acceleration is real, the implications are vast, and being unprepared is the only truly “wrong” option.

The Human Element: Surviving or Thriving in the AI Era?

So, if AI is coming for all cognitive tasks, what’s left for us, the squishy, error-prone humans? This isn’t just a rhetorical question; it’s the defining challenge of our generation.

One answer lies in adaptability and leveraging AI rather than competing directly with it. The very “AI classes” offered by institutions like American Graphics Institute are designed to equip people with these skills. They’re not teaching you how to become an AI; they’re teaching you “how to use Microsoft Copilot and other AI tools,” how to “automate tasks,” and how to “leverage AutoGPT.” This is about becoming an AI-augmented human, a centaur of thought, where the human provides the creativity, ethics, and strategic direction, and the AI handles the grunt work, data crunching, and rapid execution.

The skills for the future will likely include:

  • Prompt Engineering: The ability to communicate effectively with AI, asking the right questions to get the desired output.
  • AI Oversight and Management: Ensuring AI systems are performing as intended, detecting biases, and intervening when necessary.
  • Critical Thinking and Ethical Reasoning: AI can present information, but humans must interpret it, apply ethical frameworks, and make value-laden decisions.
  • Creativity and Innovation (Truly Novel): While AI can generate permutations, truly novel artistic or scientific leaps often still require human intuition and subjective experience.
  • Interpersonal Skills: Communication, empathy, leadership – skills that are inherently human and crucial for collaboration, both human-to-human and human-to-AI.

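Take the first of those skills. Prompt engineering mostly comes down to structuring context explicitly rather than hoping the model guesses your intent. A minimal sketch (the field names are my own convention for illustration, not any standard):

```python
# Illustrative prompt template: role, context, task, and output format
# spelled out explicitly. The field names are a convention, not an API.

def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Assemble a structured prompt from explicit fields."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond as: {fmt}"
    )

prompt = build_prompt(
    role="a concise technical editor",
    context="a 300-word blog draft about AI automation",
    task="list the three weakest claims and suggest fixes",
    fmt="a numbered list, one line per item",
)
print(prompt)
```

The point isn’t the template itself; it’s the habit of telling the machine who it is, what it’s looking at, what you want, and what shape the answer should take.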
We need to redefine work, value, and purpose. Perhaps our future isn’t about working 9-to-5 for a paycheck, but about pursuing passions, engaging in community, and solving problems that require uniquely human insight. This requires not just technological foresight but also societal planning, robust ethical frameworks, and perhaps even reimagining our economic systems.

Don’t just sit there lamenting the impending robot overlords, my friends. Learn to ride the damn robot. Understand its capabilities, learn its language, and steer it towards a future that benefits us all. Or at least, steer it away from making us all redundant while it sips our lattes.

Conclusion: The Future is Wild, Witty, and Potentially Wireless

We’ve taken a whirlwind tour, from the narrowly focused AI tools that automate our spreadsheets and churn out emails, to the grand, daunting vision of Artificial General Intelligence, capable of automating all cognitive tasks. We’ve peered into the engine room, understanding that “training” fueled by ever-increasing “compute” and sophisticated algorithms is the relentless force driving this progress. And we’ve wrestled with the profound implications of “automation,” recognizing it as a double-edged blade that promises both utopia and existential crisis.

The journey from a glorified calculator to a potential digital consciousness is not linear; it’s exponential, fueled by self-improving systems and an accelerating “takeoff speed.” Whether AGI arrives in three years or thirty, the underlying trajectory is clear. The machines are learning, and they’re learning fast.

My ‘Wong Edan’ advice? Embrace the chaos. Stay curious. Keep learning, especially how to interact with and manage these powerful tools. Because the future isn’t just coming; it’s already here, trying to summarize your emails, analyze your data, and maybe, just maybe, preparing to write the next chapter of human (and artificial) history. And trust me, you want to be a co-author, not just a footnote. Now go forth, and don’t let the algorithms bite!