Wong Edan's

TechEmpower Round 23: The Ultimate Web Framework Hunger Games

February 24, 2026 • By Azzar Budiyanto

Welcome to the Asylum: Why Round 23 Matters

Listen up, you syntax-slinging code-monkeys! Your favorite digital gladiator pit is back. TechEmpower Round 23 has finally dropped, and if you aren’t vibrating with the frequency of a high-performance event loop, you’re probably in the wrong profession. My brain is currently overclocked to 5.2GHz, fueled by black coffee and the sweet, sweet smell of optimized assembly. We’ve waited for this like a desperate garbage collector waiting for a memory leak to finally hit the threshold. Round 23 isn’t just a minor update; it is a full-blown seismic shift in the web framework landscape. If you thought your favorite framework was fast, prepare to have your delusions shattered into a million non-blocking fragments.

For the uninitiated—or those of you who have been living under a rock without an internet connection—the TechEmpower Web Framework Benchmarks are the “Olympics of Latency.” They take hundreds of frameworks, throw them into a standardized hardware environment (the Citrine cluster), and force them to perform tasks ranging from simple JSON serialization to complex database updates. It’s brutal. It’s unforgiving. It’s exactly what my Wong Edan soul craves. Round 23 is particularly special because it reflects the massive jumps in performance we’ve seen in 2024 and early 2025, especially regarding how frameworks handle network-bound bottlenecks.

The Hardware and the Environment: The Citrine Beast

Before we look at the blood on the floor, we have to look at the arena. Round 23 utilized the updated Citrine cluster, and let me tell you, the results show that the infrastructure team has been doing some serious heavy lifting. We’ve seen a substantial increase in performance across the board, particularly in network-bound tests. When the underlying hardware and network stacks get an upgrade, it exposes the frameworks that were actually efficient versus the ones that were just “faking it” by riding on the coattails of previous hardware generations.

In Round 23, the testing environment has been tightened. We’re talking about Physical Bare Metal servers. No virtualization fluff. No “noisy neighbor” cloud nonsense. Just raw silicon and copper. This is where the C++ and Rust zealots usually start salivating, and for good reason. When you remove the abstractions, the frameworks that talk directly to the kernel start to pull away from the pack like a nitro-injected dragster in a school zone.

The Ruby Renaissance: Ruby Rage and the Speed Demon

Stop the presses! Hold my “Wong Edan” coconut water! The biggest shocker in Round 23 isn’t a new Rust framework—it’s the incredible comeback of Ruby. For years, the industry “experts” (the ones who spend more time on LinkedIn than in an IDE) have been calling Ruby “slow” and “obsolete.” Well, Ruby just walked into the room and slapped the skeptics with a high-performance glove.

The framework getting everyone’s attention is Ruby Rage. It’s packed with performance optimizations that leverage the latest improvements in Ruby’s YJIT (Yet Another Ruby JIT) compiler. In the composite scores for Round 23, Ruby frameworks showed some of the most dramatic percentage improvements compared to previous rounds. We are seeing Ruby handle requests per second (RPS) that would have been unthinkable five years ago. It’s not just about raw speed; it’s about the fact that Ruby can now compete in the middle tier of performance while keeping the developer experience (DX) that we all fell in love with.

“Performance is not just about how fast the machine runs; it is about how little the code gets in the way of the hardware’s potential.”

Ruby Rage is optimized for I/O-bound tasks, which is what 90% of the web actually is. By minimizing object allocation and maximizing the efficiency of the fiber-based concurrency model, it has managed to climb the leaderboards, proving that you don’t always need to rewrite everything in a low-level language to get a 10x performance boost.
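The fiber/event-loop idea behind that claim is language-agnostic. As a sketch (this is a generic illustration in Python's asyncio, not Ruby Rage's actual API): many in-flight I/O-bound requests share one scheduler, and each task parks while it waits instead of pinning a thread.

```python
import asyncio

async def handle_request(request_id: int) -> dict:
    # Simulate an I/O-bound step (a database or upstream call).
    # A fiber/event-loop scheduler parks this task while it waits,
    # so one OS thread can service many in-flight requests.
    await asyncio.sleep(0.01)
    return {"id": request_id, "status": "ok"}

async def serve(n: int) -> list[dict]:
    # All n handlers wait concurrently; total wall time is roughly
    # one I/O round-trip, not n of them stacked sequentially.
    return await asyncio.gather(*(handle_request(i) for i in range(n)))

responses = asyncio.run(serve(100))
```

The win is exactly the one the benchmarks reward: throughput scales with how many requests can be *waiting* at once, not with how many OS threads you can afford.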

The Rust Empire: Actix and the Perfectionists

Of course, we can’t talk about TechEmpower without mentioning the Rust crowd. These people are the Wong Edan of the systems world—obsessed with every single byte. In Round 23, Rust frameworks like Actix-Web and May-Minihttp continue to dominate the top of the charts. If you look at the Plaintext test, where the framework just has to return “Hello, World!”, these things are hitting millions of requests per second. It’s basically just the speed of the NIC (Network Interface Card) at that point.
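To appreciate how little work Plaintext actually demands, here is the entire workload as a minimal WSGI app (an illustrative Python sketch, not an actual TFB entry): fixed headers, fixed 13-byte body, nothing else.

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # The entire "Plaintext" workload: fixed headers, fixed 13-byte body.
    body = b"Hello, World!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Exercise the app directly with a synthetic WSGI environ,
# skipping the network entirely.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

result = b"".join(app(environ, start_response))
```

When the application logic is this trivial, the score is measuring the HTTP parser, the event loop, and the NIC, which is why the top of the Plaintext chart is really a kernel-and-network shootout.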

However, there’s been some drama in the Rust community regarding deadpool_postgres. In some of the database-heavy tests, people were wondering why Rust wasn’t even faster. The answer usually comes down to the driver level. In Round 23, we see that the choice of connection pooler and the underlying Postgres driver can make or break a score. Even if your framework is fast, if your database driver is doing too much synchronization or has inefficient memory management, your score will tank. Rust developers in Round 23 have been fine-tuning these drivers to the point of insanity, often sacrificing “ergonomics” for “pure, unadulterated speed.”
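The pool mechanics at issue can be shown with a toy (this is a generic sketch, not deadpool_postgres's API): connections are created once, and the per-request cost collapses to a queue handoff instead of a TCP-plus-auth handshake.

```python
import queue

class ConnectionPool:
    """Toy fixed-size pool: connections are created up front and reused,
    so per-request cost is a queue handoff, not a fresh handshake."""

    def __init__(self, size: int, connect):
        self._conns = queue.Queue(maxsize=size)
        for _ in range(size):
            self._conns.put(connect())

    def acquire(self, timeout: float = 1.0):
        # Blocks when every connection is checked out; contention on
        # this wait is exactly where a sloppy pool tanks a benchmark.
        return self._conns.get(timeout=timeout)

    def release(self, conn):
        self._conns.put(conn)

# A plain object stands in for a real Postgres socket.
pool = ConnectionPool(size=2, connect=lambda: object())
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()   # reuses c1 rather than dialing a new connection
```

In the real database tests, the synchronization inside `acquire` (locks, wakeups, fairness policy) sits on the hot path of every single query, which is why pool choice moves scores as much as framework choice.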

The Java Giant: Spring vs. The World

Every year, someone asks on Reddit: “Why is Spring so slow in TechEmpower?” And every year, I have to drink a liter of herbal tea to keep from screaming. Listen: Spring is not slow. Spring is heavy. There is a difference. Spring is a full-stack, enterprise-grade battleship. TechEmpower is a race for jet-skis. If you try to race a battleship against a jet-ski in a sprint, the jet-ski wins. But if you want to cross the ocean with 5,000 containers of enterprise logic, you want the battleship.

In Round 23, the high-performance Java contenders are Vert.x and Jooby. Vert.x, in particular, continues to be a monster. It’s reactive, it’s non-blocking, and it consistently lands in the top tier. For the Java developers who want to prove that the JVM can hang with the C++ and Rust kids, Vert.x is their champion. It proves that the problem isn’t the JVM; it’s the layers of abstraction we pile on top of it. In Round 23, Java’s Project Loom (Virtual Threads) is starting to show its influence, allowing for more efficient concurrency models that don’t require the mental gymnastics of traditional reactive programming.

GoFrame and the Pragmatic Middle Ground

Go has always been the “Goldilocks” language—not too low-level, not too high-level, just right. In Round 23, GoFrame has emerged as a top-tier performer. GoFrame is a full-featured framework, which makes its high performance even more impressive. Usually, you expect “Full-Featured” to mean “Slow,” but the Go ecosystem is obsessive about profiling.

The evaluation results show that GoFrame achieves excellent throughput in the Multiple Queries and Data Updates tests. This is critical because this is what real apps do. Nobody gets paid to serve “Hello, World!” in plaintext all day. We get paid to fetch data from a database, mangle it into a JSON object, and send it back. Go’s built-in net/http and frameworks like Fiber (built on fasthttp) continue to push the boundaries of what’s possible with a garbage-collected language.
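The shape of that fetch-mangle-serialize loop is easy to sketch. This uses an in-memory SQLite table as a stand-in for TFB's Postgres/MySQL "World" table, purely for illustration:

```python
import json
import random
import sqlite3

# In-memory stand-in for the benchmark's "World" table (id, randomNumber).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE world (id INTEGER PRIMARY KEY, randomNumber INTEGER)")
db.executemany(
    "INSERT INTO world VALUES (?, ?)",
    [(i, random.randint(1, 10000)) for i in range(1, 10001)],
)

def multiple_queries(queries: int = 20) -> str:
    # The test mandates one SELECT per row -- no batching, no caching --
    # so it measures driver round-trip overhead, not SQL cleverness.
    rows = []
    for _ in range(queries):
        row_id = random.randint(1, 10000)
        rid, num = db.execute(
            "SELECT id, randomNumber FROM world WHERE id = ?", (row_id,)
        ).fetchone()
        rows.append({"id": rid, "randomNumber": num})
    return json.dumps(rows)

payload = multiple_queries(20)
```

Twenty round-trips per request is why this test punishes chatty drivers so hard: a few microseconds of per-query overhead gets multiplied twenty times before a single byte of JSON leaves the server.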

Deep Dive: The Six Tests of Doom

To truly understand Round 23, you have to look at the specific test categories. Each one tests a different “muscle” of the framework.

  • JSON Serialization: Tests the CPU’s ability to turn objects into strings. This is where languages backed by fast native JSON libraries (SIMD-accelerated parsers and serializers in the vein of simdjson) shine.
  • Single Query: One request, one database row. This tests the efficiency of the database driver and the connection pool.
  • Multiple Queries: This is the “Round 23 Killer.” It requires fetching multiple rows (typically 20) and serializing them. This tests how well the framework handles concurrent asynchronous tasks.
  • Fortunes: This is the most realistic test. It requires a database query, then sorting the results in memory, and finally rendering them into an HTML template with proper escaping. If your framework’s templating engine is garbage, your Fortunes score will be garbage.
  • Data Updates: The most brutal of them all. Read a row, modify it, save it back, and deal with transaction overhead. This is where “cheating” is hardest because you can’t just cache your way to victory.
  • Plaintext: The raw, naked speed of the framework. If you fail here, go home and rethink your life choices.
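The Fortunes pipeline, in particular, is worth seeing end to end. A minimal sketch (simplified from the official test rules, which require adding one fortune at request time, sorting by message text in application code, and escaping every message):

```python
import html

def render_fortunes(rows: list[tuple[int, str]]) -> str:
    # The test adds one fortune at request time, sorts the whole set
    # in application code (not in SQL), and must HTML-escape messages.
    fortunes = rows + [(0, "Additional fortune added at request time.")]
    fortunes.sort(key=lambda f: f[1])
    body = "".join(
        f"<tr><td>{fid}</td><td>{html.escape(msg)}</td></tr>"
        for fid, msg in fortunes
    )
    return f"<table>{body}</table>"

# A malicious-looking row is exactly what the correctness check probes for.
page = render_fortunes([
    (11, "<script>alert('pwned')</script>"),
    (4, "A computer is like air conditioning."),
])
```

Skip the `html.escape` call to shave microseconds and the correctness checker disqualifies you, which is the whole reason Fortunes is the most honest test on the board.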

In Round 23, we saw a massive jump in the Fortunes test across the board. This suggests that template engines are getting significantly smarter, likely using more aggressive pre-compilation and avoiding unnecessary string allocations. Even the PHP frameworks showed some “Wong Edan” level gains here, thanks to PHP 8.3/8.4 optimizations.
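"Aggressive pre-compilation" concretely means: parse the template once at startup, then make each render a sequence of lookups and joins. A minimal sketch of the idea (a hypothetical `{name}` placeholder syntax, not any specific engine):

```python
import html
import re

def compile_template(source: str):
    # Split "{name}" placeholders out once, at startup, so each render
    # only does dict lookups and joins -- no re-parsing per request.
    parts = re.split(r"\{(\w+)\}", source)
    statics, fields = parts[0::2], parts[1::2]

    def render(ctx: dict) -> str:
        out = [statics[0]]
        for field, static in zip(fields, statics[1:]):
            out.append(html.escape(str(ctx[field])))  # escape by default
            out.append(static)
        return "".join(out)

    return render

# Compiled once; called per request.
render_row = compile_template("<tr><td>{id}</td><td>{message}</td></tr>")
row = render_row({"id": 7, "message": "fast & safe"})
```

Real engines go further (compiling to bytecode or native code, pre-sizing output buffers), but the principle is the same: move every cost you can out of the request path.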

The “Composite Score” Controversy

TechEmpower Round 23 uses a “Composite Score” to rank frameworks overall. This is where the fighting starts. Some people argue that the Composite Score is biased toward frameworks that perform well in the Plaintext test, which doesn’t reflect real-world usage. Others argue that it’s the only way to get a holistic view.

My take? The Composite Score is like a Decathlon. You might be the fastest sprinter (Plaintext), but if you can’t throw a javelin (Data Updates), you aren’t the best athlete. Round 23 shows that the gap between the “Speed Demons” and the “Workhorses” is closing. We are seeing a “compression” of performance where even the most bloated frameworks are being forced to optimize their core loops because the competition is simply too fierce.

The Ghost of Frameworks Past: Where did they go?

A curious thing happens in every TechEmpower round: some frameworks just… disappear. In Round 23, a few niche players like Ditsmod vanished from the results for unknown reasons. Usually, this happens when a framework fails the “Correctness” check. TechEmpower doesn’t just measure speed; it measures if you’re actually returning the right data. If you try to skip the HTML escaping in the Fortunes test to gain 5ms, the “Correctness” bot will find you, and it will delete you from the leaderboard. There is no mercy in the asylum.

This is a vital lesson for all of us. Performance at the cost of correctness is just an efficient way to be wrong. Round 23 has become much stricter about these checks, ensuring that the winners are actually viable for real-world production use and not just specialized “benchmark-only” hacks.

Wong Edan Philosophy: Why do we obsess?

You might ask, “Wong Edan, why are you shouting about microseconds? My users are on a 3G connection in the middle of a forest. They won’t notice if my Go service is 2ms faster than my Python service.”

You’re wrong. And that’s why you’re not as Edan as me yet. Performance is about Efficiency. If your framework can handle 10x more requests per second on the same hardware, your cloud bill is 90% lower. If your framework has lower latency, your server stays cooler, your power consumption drops, and you stop contributing to the heat death of the universe. High-performance code is a moral imperative. In Round 23, we see frameworks achieving efficiency levels that allow us to run massive applications on a single Raspberry Pi that would have required a whole rack of servers ten years ago. That is the magic. That is the madness.
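That cloud-bill claim is just capacity-planning arithmetic. With made-up but representative numbers (the 2,000 and 20,000 RPS figures here are hypothetical, not Round 23 results):

```python
import math

def servers_needed(peak_rps: int, rps_per_server: int) -> int:
    # Capacity planning at its crudest: divide and round up.
    return math.ceil(peak_rps / rps_per_server)

# Same workload, two frameworks, 10x difference in per-server throughput.
slow = servers_needed(peak_rps=200_000, rps_per_server=2_000)    # 100 servers
fast = servers_needed(peak_rps=200_000, rps_per_server=20_000)   # 10 servers
savings = 1 - fast / slow                                        # 0.90
```

A 10x throughput gap at fixed load means one-tenth the fleet, i.e. roughly a 90% reduction in compute spend, before you even count the power and cooling.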

The Verdict: Who Won Round 23?

If we look at the raw data, the winners are:

  • The Speed Kings: Rust (Actix, May-Minihttp), C++ (Drogon), and C# (ASP.NET Core – yes, Microsoft is actually doing God’s work here).
  • The Most Improved: Ruby (Ruby Rage) and PHP (Workerman/Swoole). These two have proven that interpreted origins are no longer an excuse for being slow.
  • The Productivity Champions: Go (GoFrame) and Java (Vert.x). They provide the best balance of “I can actually read this code” and “This code is incredibly fast.”

But the real winner of Round 23 is the developer. The competition between these frameworks has pushed the entire industry forward. The techniques used by the top-tier frameworks eventually trickle down into the libraries we use every day. Even if you use a “slow” framework, it’s probably faster today than it was two years ago because of the pressure created by these benchmarks.

Final Thoughts from the Asylum

Round 23 is a testament to human obsession. We have reached a point where we are optimizing the way a single byte travels from the network card to the L1 cache. Is it crazy? Yes. Is it unnecessary for a simple blog? Probably. But is it beautiful? Absolutely.

As I close my terminal and watch the fan on my laptop finally slow down, I realize that the search for the “perfect” framework is never-ending. But Round 23 gives us a map of the territory. It shows us where the bottlenecks are and how to break them. So, go forth, download the results, look at the source code of the winners, and start optimizing. Or just stick with your favorite framework and wait for them to copy the winners’ homework. Either way, the web is getting faster, and I am here for the chaos.

Stay crazy. Stay fast. Stay Wong Edan.