Wong Edan's

MIT’s AI Circus: From Chatbot Anthropology to Saving the Planet

March 26, 2026 • By Azzar Budiyanto

Greetings, fellow carbon-based lifeforms and aspiring silicon overlords! It is I, the Wong Edan of the tech world, back from a deep dive into the hallowed, coffee-stained halls of the Massachusetts Institute of Technology (MIT). While you were busy arguing with your smart toaster or trying to figure out why your AI-generated cat has seven legs and a human ear, the big brains in Cambridge were busy redefining the very fabric of reality. Or at least, they’re trying to make sure the robots don’t accidentally delete us while trying to optimize a sourdough recipe.

Today, we are dissecting the sheer madness and brilliance coming out of MIT News and the Computer Science & Artificial Intelligence Laboratory (CSAIL). We’re talking about everything from using anthropology to teach chatbots some manners to the horrifying realization that our generative AI habits are basically burning the planet down one “draw me a cyberpunk potato” prompt at a time. Buckle up, because this is going to be a long, technical, and slightly unhinged ride through the MIT ecosystem.

1. The Anthropology of the Bot: Teaching Chatbots to be Human-ish

In a move that sounds like the plot of a sci-fi indie film, a new MIT class is using anthropology to improve chatbots. Now, why would you take a bunch of computer science students—people who generally prefer talking to C++ compilers over actual humans—and force them to study anthropology? Simple: because our current AI is socially awkward. It’s like that one cousin who knows everything about train schedules but can’t look you in the eye without making it weird.

The goal here is to design AI chatbots that help young users become more social, rather than more isolated. MIT is looking at how we can use the lens of human culture and social structures to build “socially-aware” AI. Instead of just predicting the next token in a sentence (which is what your typical LLM does), these students are trying to build systems that understand the nuances of human interaction. The primary focus is on helping young people navigate the complexities of social life, using the chatbot as a sort of training wheel for real-world empathy. Imagine a bot that doesn’t just give you a Wikipedia summary but understands why you’re asking and how a human should respond without sounding like a soulless automaton.

“MIT computer science students design AI chatbots to help young users become more social, and socially aware…”

This is a pivot from “AI as a tool” to “AI as a social bridge.” If this works, maybe the next generation won’t be terrified of making a phone call to order pizza. One can only dream.

2. The Architecture of the Mind: CSAIL and the Schwarzman College of Computing

If you want to find the beating heart of AI innovation, you look at the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL). This place is essentially the Death Star of computing, but with better snacks and less planet-destroying (hopefully). Within the MIT Schwarzman College of Computing, CSAIL is pushing the boundaries of what’s possible in 32+ research areas.

The research isn’t just about making faster chips. It’s about the fundamental “Decision Making” aspect of AI. This leads us directly to Course 6-4: Artificial Intelligence and Decision Making. This isn’t your “Hello World” intro class. This is the heavy lifting. We’re talking about the foundations of machine learning and decision systems. The curriculum dives deep into:

  • Reinforcement Learning: Teaching agents to make sequences of decisions by rewarding them when they don’t mess up.
  • Causal Inference: Because knowing that two things happen together isn’t enough; we need to know if A actually caused B. (A concept many humans still struggle with).
  • Statistics and Machine Learning: The math that keeps the whole circus from falling apart.
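To make that first bullet less abstract, here's a toy tabular Q-learning sketch of my own invention — a five-state corridor where an agent learns to walk right toward a reward. To be clear, this is not from the 6-4 syllabus; every name and number here is just illustrative:

```python
# Toy tabular Q-learning on a 5-state corridor: start at state 0,
# reach state 4 for a reward of +1. Actions: 0 = left, 1 = right.
import random

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Deterministic environment: move left/right, reward +1 at the goal."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value per (state, action)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if random.random() < EPSILON:
                action = random.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # The Bellman update: reward the agent when it doesn't mess up.
            q[state][action] += ALPHA * (
                reward + GAMMA * max(q[nxt]) - q[state][action]
            )
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    policy = ["L" if s[0] > s[1] else "R" for s in q]
    print(policy)  # the greedy policy: head right in every state
```

Run it and the learned policy is "always go right" — a sequence of decisions shaped purely by rewards, which is the whole reinforcement-learning idea in miniature, just several orders of magnitude dumber than anything CSAIL ships.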

The EECS (Electrical Engineering and Computer Science) department is essentially trying to quantify “intelligence” into a set of repeatable, scalable algorithms. It’s beautiful, it’s terrifying, and it requires a lot of whiteboard markers.

3. The Environmental Bill: Generative AI’s Carbon Footprint

Now, let’s get serious for a moment before I lose my mind entirely. MIT News recently (as of January 17, 2025) released an exploration into the environmental and sustainability implications of generative AI. You see, every time you ask an AI to write a poem about your dog, a server farm somewhere starts sweating like a marathon runner in the Sahara.

The energy consumption of training and running large-scale models is astronomical. We’re talking about data centers that require their own power plants and cooling systems that use enough water to fill several Olympic pools. MIT is looking at the “environmental impact” through several lenses:

  • Training Costs: The massive compute power required to feed billions of parameters into a model.
  • Inference Costs: The ongoing energy cost every time someone hits “Enter” on a prompt.
  • Hardware Lifecycle: The physical waste of specialized AI chips that become obsolete faster than my last New Year’s resolution.
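To make "inference costs" concrete, here's some back-of-envelope arithmetic. Every number below is a placeholder I invented for illustration — real per-prompt energy figures vary wildly by model and datacenter, and MIT's actual measurements will differ:

```python
# Back-of-envelope inference-energy math. All inputs are made-up
# placeholders, NOT measurements of any real model or datacenter.
JOULES_PER_PROMPT = 3_000        # hypothetical energy per generated response
PROMPTS_PER_DAY = 10_000_000     # hypothetical daily traffic
KWH_PER_JOULE = 1 / 3_600_000    # 1 kWh = 3.6 million joules

daily_kwh = JOULES_PER_PROMPT * PROMPTS_PER_DAY * KWH_PER_JOULE
print(f"{daily_kwh:,.0f} kWh/day")  # ~8,333 kWh/day under these assumptions
```

The point of the exercise isn't the specific total; it's that inference cost scales linearly with traffic, so a model that's "cheap per prompt" can still be a monster in aggregate.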

The takeaway? AI isn’t “in the cloud.” It’s on the ground, burning coal and using water. MIT is pushing for more sustainable AI applications, because there’s no point in having a super-intelligent AI if the planet is too hot to host the servers.

4. Jill Watson: When Your Teaching Assistant is a Ghost in the Machine

Let’s take a trip down memory lane to a classic milestone in AI-powered education — this one courtesy of Georgia Tech. Back in 2016, Professor Ashok Goel introduced “Jill Watson” to his Knowledge-Based Artificial Intelligence (KBAI) course. Jill was a teaching assistant. The kicker? She was an AI, and the students didn’t realize it for months.

Jill was built to handle the repetitive, soul-crushing questions that students ask every semester (e.g., “When is the assignment due?” or “Does this count for extra credit?”). By drawing on a structured knowledge base, Jill answered only when her confidence in a response cleared a 97% threshold. This wasn’t just a chatbot; it was an exercise in Knowledge-Based AI.

For the technical nerds, the architecture looked something like this (simplified for your human brains):


// Conceptual logic of a KBAI TA system
if (user_query matches known_pattern) {
    confidence_score = evaluate_context(user_query);
    if (confidence_score > 0.97) {
        return fetch_answer_from_kb(user_query);  // Jill answers directly
    }
}
// Unknown pattern or shaky confidence: kick it upstairs
forward_to_human_professor(user_query);

This proved that AI could handle routine cognitive tasks in education, freeing up human professors to do things that humans are actually good at—like having existential crises and drinking expensive tea.

5. AI at the Edge: Neural Networks in Your Pocket

One of the most significant breakthroughs mentioned in the MIT archives (specifically from 2017 but still foundational today) is the push to bring neural networks to cellphones. Historically, AI was too “heavy” for mobile devices. You needed a literal rack of GPUs to do anything interesting. But MIT researchers have been working on squeezing these systems down for “edge computing.”

Why does this matter? Because of accessibility. If you can put data in the hands of doctors in remote areas using only a smartphone, you save lives. Regina Barzilay’s work on using computer science to assist medical professionals is a prime example. We’re talking about neural networks that can process medical imaging or patient data locally, without needing a massive fiber-optic connection to a server in Virginia. This involves “quantization” and “pruning”—cutting the fat off a neural network until it’s lean enough to run on a mobile processor without melting the battery.
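To demystify those two terms, here's a toy sketch of magnitude pruning and 8-bit quantization in plain Python. The helper functions and weights are my own illustrative inventions, not anyone's production toolchain:

```python
# Toy illustration of the two tricks named above: magnitude pruning
# (zero out the smallest weights) and 8-bit quantization (map floats
# to small signed integers). Real toolchains are far more sophisticated.

def prune(weights, keep_fraction=0.5):
    """Keep only the largest-magnitude weights; zero out the rest."""
    ranked = sorted(weights, key=abs, reverse=True)
    cutoff = abs(ranked[int(len(ranked) * keep_fraction) - 1])
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

def quantize(weights, bits=8):
    """Linear symmetric quantization to signed integers plus a scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = (2 ** (bits - 1) - 1) / max_abs
    return [round(w * scale) for w in weights], scale

weights = [0.91, -0.03, 0.42, 0.005, -0.77, 0.11]
sparse = prune(weights, keep_fraction=0.5)
q, scale = quantize(sparse)
print(sparse)  # [0.91, 0.0, 0.42, 0.0, -0.77, 0.0]
print(q)       # integers in [-127, 127]; divide by `scale` to recover floats
```

Real deployment stacks do this per-layer, with calibration data and retraining to claw back lost accuracy, but the principle is the same: fewer and smaller numbers means less compute, less memory, and a phone battery that survives the day.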

6. The Safety Net: Governments and National AI Institutes

The “Wong Edan” in me finds it hilarious that as soon as the AI started getting good, the government got terrified. In November 2023, the Department of Commerce, at the direction of President Biden, established the U.S. Artificial Intelligence Safety Institute (housed within NIST). This is about “AI Safety,” which is a fancy way of saying “please don’t let the AI hack the power grid or trick us into a nuclear war.”

Furthermore, the NSF (National Science Foundation) announced 7 new National Artificial Intelligence Research Institutes in May 2023. These aren’t just academic playgrounds; they are foundational research hubs designed to promote ethical and trustworthy AI. MIT is often at the center of these initiatives, bridging the gap between “Can we build it?” and “Should we build it?”

These institutes focus on:

  • Ethical AI frameworks.
  • Trustworthy systems that don’t hallucinate (unlike some people I know).
  • Foundational AI research that benefits the public good rather than just selling more ads for leaf blowers.

7. Professional Development: The MIT xPRO Way

If you’re a professional and you’re feeling the “AI FOMO” (Fear Of Missing Out), MIT has a way to take your money and give you knowledge in return. MIT xPRO and CSAIL offer professional certificate programs in Machine Learning and AI. This is where the theory of the Schwarzman College of Computing meets the reality of the corporate world.

The focus here is on practical application. How do you take a reinforcement learning model and apply it to supply chain logistics? How do you use causal inference to determine if your marketing spend is actually working or if your customers are just buying stuff by accident? It’s about professionalizing the “black magic” of AI. They’ve been doing this since 2017, and the curriculum is constantly evolving to keep up with the fact that AI years are like dog years—everything changes every six months.

Current professional focus areas include:

  • Machine Learning for Healthcare.
  • AI in Manufacturing.
  • Strategic Decision Making using AI models.

Wong Edan’s Verdict

So, what have we learned from this deep dive into the MIT AI machine? First, that MIT is trying to give AI a “heart” through anthropology. Second, that our AI habit is essentially a giant space heater for the planet. Third, that if you’re a student, your TA might actually be a line of code named Jill.

The reality is that MIT isn’t just building faster algorithms; they are building the framework for a society where AI is integrated into every decision, from the doctor’s office to the Department of Commerce. It’s a world of “Course 6-4” logic mixed with the very human need for ethics and sustainability. My verdict? We are living in the most interesting timeline. It’s chaotic, it’s data-driven, and it’s slightly insane. But hey, that’s why they call me Wong Edan. Stay curious, stay skeptical, and for the love of all that is holy, stop asking the AI to generate images of cyberpunk potatoes. We’ve got a planet to save.

End of Transmission. Now, if you’ll excuse me, I need to go see if I can teach my toaster causal inference. I suspect it’s burning my bread on purpose.