Wong Edan's

TechEmpower Round 23: The Ultimate Framework Hunger Games Revealed

March 17, 2025 • By Azzar Budiyanto

The Digital Asylum is Open: Welcome to Round 23

Greetings, you glorious syntax-sniffing addicts and byte-obsessed lunatics! Your favorite neighborhood “Wong Edan” is back, and I’ve got something better than a bucket of caffeine and a stable production environment. I’m talking about the TechEmpower Framework Benchmarks Round 23. If you’ve been living under a rock (or just stuck in a 14-hour debugging session for a memory leak that turned out to be a typo), Round 23 was officially unleashed upon the world on March 17, 2025. Yes, the Director for Open Source Solutions at TechEmpower finally pushed the button, and the results are enough to make a senior architect weep into their mechanical keyboard.

Why do we care? Because benchmarks are the high-stakes poker of the software world. They tell us who is lying about their performance and who is actually pushing electrons at the speed of thought. Round 23 isn’t just an update; it’s a technological bloodbath. We are talking about thousands of community-contributed test implementations across a spectrum of web application frameworks so wide it makes the Grand Canyon look like a crack in the sidewalk. So, grab your favorite overpriced artisanal coffee, sit back, and let’s dissect the madness of Round 23.

1. The Timeline of Terror: From Continuous Benchmarking to Final Results

Before we dive into the guts of the frameworks, let’s talk about the timeline. This wasn’t some overnight miracle. The road to Round 23 was paved with GitHub issues, failed builds, and a lot of “why is this query taking 5ms?” complaints on Reddit. While some preview results were floating around as early as February 2025, the official coronation happened in mid-March 2025. The TechEmpower team has been leaning heavily into TFB Status, their continuous benchmarking project. This allows developers to see how their code performs in real-time, or at least in the “real-time” of a massive server farm grinding through millions of requests.

The beauty of Round 23 is its transparency. Everything is hosted on GitHub under TechEmpower/FrameworkBenchmarks. If you think the results are rigged, you can go and audit the source yourself—assuming you have the mental fortitude to read through thousands of lines of configuration files without losing your mind. The community-driven nature of these tests means that the implementations are constantly being refined. In Round 23, we saw a massive push for “automatic push of test results,” a feature that developers have been clamoring for since Round 20. It’s about moving away from static snapshots and toward a living, breathing performance leaderboard.

2. The Ruby Redemption: The Rage of the Underdog

If you told me five years ago that we’d be talking about Ruby as a performance contender in 2025, I would have laughed so hard I’d have dropped my vintage 2012 MacBook. But here we are. One of the most shocking revelations of Round 23 is the massive improvement in Ruby frameworks. Specifically, the Ruby Rage framework has come out swinging, and it’s out for blood.

According to the Reddit discussions and official GitHub issues (specifically issue #9589), Ruby frameworks have seen a significant jump in their composite scores. The “Rage” framework is packed with performance optimizations that leverage modern concurrency models. In the past, Ruby was the slow, comfortable sedan of web development—great for the driver, terrible for the race track. In Round 23, it’s like someone strapped a jet engine to that sedan. The performance gap between Ruby and some of the mid-tier compiled languages is narrowing, and that is a technical miracle in itself.

Consider the typical Ruby implementation in the benchmarks. It used to struggle with the “Multiple Queries” and “Fortunes” tests because of the Global VM Lock (GVL, Ruby’s equivalent of Python’s GIL) and traditional blocking I/O. But with refinements seen in Round 23, the efficiency of handling concurrent requests has improved drastically. This isn’t just about raw speed; it’s about the composite score, which measures a framework’s versatility across different types of workloads.

3. GoFrame: The New Gold Standard for Go?

Go has always been the darling of the TechEmpower benchmarks. It’s the language of the cloud, the language of Docker, and the language of people who think generics were a mistake (until they weren’t). In Round 23, GoFrame stepped into the spotlight. As a full-featured Go web development framework, GoFrame isn’t just trying to be fast; it’s trying to be productive without sacrificing the performance that Go is known for.

The evaluation results from March 17, 2025, show GoFrame achieving “excellent” results. This is critical because usually, there is a trade-off. You either get a “bare-metal” framework that is basically a wrapper around a socket (fast but miserable to code in), or you get a “full-featured” framework that includes the kitchen sink but runs like it’s mired in molasses. GoFrame seems to have cracked the code in Round 23, maintaining high throughput in the Plaintext and JSON serialization tests while still providing a robust set of features for developers.

For those of you who speak code, a typical high-performance Go implementation in Round 23 looks something like this:


// Illustrative snippet of a high-performance Go handler
func (a *Api) JsonHandler(r *ghttp.Request) {
	r.Response.WriteJson(g.Map{
		"message": "Hello, World!",
	})
}

In the Round 23 environment, this kind of streamlined handling allows Go to maintain its dominance in the upper echelons of the rankings, particularly in the “Multiple Queries” category where database connection pooling and asynchronous execution are paramount.
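That “Multiple Queries” test has a sharp edge written into the TFB requirements: the `queries` URL parameter must be clamped to the range 1–500, with anything missing or non-numeric treated as 1. Here is a minimal Go sketch of that clamping rule (the function name `clampQueries` is mine, not taken from any actual Round 23 entry):

```go
package main

import (
	"fmt"
	"strconv"
)

// clampQueries applies the TFB rule for the "queries" parameter:
// missing, non-numeric, or sub-1 values become 1; anything above
// 500 is capped at 500.
func clampQueries(raw string) int {
	n, err := strconv.Atoi(raw)
	if err != nil || n < 1 {
		return 1
	}
	if n > 500 {
		return 500
	}
	return n
}

func main() {
	fmt.Println(clampQueries("abc"), clampQueries("20"), clampQueries("600"))
}
```

Trivial? Sure. But the benchmark harness actively probes these edge cases, and an implementation that skips the clamp gets flagged as non-conforming, no matter how fast it is.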

4. The Spring Paradox: Why Is Java “Slow”?

Let’s address the elephant in the room. Or rather, the giant green leaf in the room. A common theme in the Reddit threads (r/java) surrounding Round 23 is the perennial question: “Why is Spring so slow in TechEmpower benchmarks?” It’s a valid question. Spring is the powerhouse of the enterprise world. It runs the banks, the airlines, and probably your smart toaster. Yet, when you look at the TechEmpower rankings, it’s often buried under a mountain of C++, Rust, and Go frameworks.

The answer, as discussed in the Round 23 context, is multifaceted. Spring is designed for everything. It has layers of abstraction, security filters, interceptors, and a massive dependency injection container. In a benchmark that measures the absolute maximum number of requests per second for a simple JSON response, those abstractions are pure overhead. However, Round 23 shows that even the “heavy” frameworks are evolving. There is a lot to learn from the discussion around Spring’s performance—it’s not that Java is slow (Java frameworks like Netty or Vert.x are consistently at the top), it’s that the Spring configuration used in benchmarks often prioritizes “real-world” features over “benchmark-gaming” speed.

In Round 23, the community has worked to optimize the Spring implementations to use more efficient database drivers and reactive stacks (Project Reactor). While it might not beat a raw C++ epoll server, the gap is becoming less about the language and more about the architectural choices.

5. Deep Dive into the Benchmarking Categories

To understand Round 23, you have to understand the tests. TechEmpower doesn’t just run one test; they run a gauntlet. Here is what the frameworks had to survive:

  • JSON Serialization: The bread and butter of modern APIs. The framework must respond with a JSON object. This tests the overhead of the framework and the efficiency of the JSON library.
  • Single Query: One request, one database query. This tests the basic database driver efficiency.
  • Multiple Queries: The framework must perform multiple database queries and return the result. This is where connection pooling issues usually rear their ugly heads.
  • Fortunes: This is the “real-world” test. It involves a database query, some server-side logic (sorting), and then rendering an HTML template. It’s where frameworks like Ruby Rage and GoFrame really show their mettle.
  • Data Updates: This tests the framework’s ability to perform writes and updates, which is a different beast entirely from simple reads.
  • Plaintext: This is a test of the absolute maximum throughput of the HTTP parser and the event loop.

In Round 23, the Fortunes test remains the most respected metric because it requires a balance of CPU, memory, and I/O. A framework that wins at Plaintext but fails at Fortunes is like a sprinter who can’t walk up a flight of stairs.

6. The Infrastructure and the “TFB Status” Evolution

The hardware for Round 23 is no joke. We are talking about dedicated physical servers, not some throttled virtual machines in a dusty basement. The benchmarking environment uses a 10GbE network and high-performance database servers (usually Postgres or MySQL). The consistency of the results in Round 23 is largely thanks to the improved “Continuous Benchmarking” infrastructure.

The TFB Status site has become the hub for the community. It provides a dashboard where developers can see the results of the “Latest” runs. This move toward continuous evaluation means that the “Round 23” we see today is the result of thousands of iterative improvements. If a framework had a regression in early 2025, the developers would see it on the TFB Status page and push a fix before the final “Round 23” results were published on March 17.

This has led to a much more competitive environment. In previous rounds (like Round 20), a framework could sit on a lucky result for years. Now, if you aren’t optimizing, you are falling behind. The GitHub issues show a constant stream of PRs (Pull Requests) where developers are swapping out libraries, tweaking buffer sizes, and optimizing Dockerfiles to squeeze every last drop of performance out of the hardware.

7. Wong Edan’s Technical Analysis: Why the “Composite Score” is Your New Best Friend

The composite score is the “Golden Fleece” of Round 23. It’s a weighted average that takes into account performance across all the different test types. Why does this matter? Because any idiot can write a C program that returns “Hello World” really fast. But writing a framework that handles JSON, complex database queries, and HTML templating while maintaining high throughput? That’s where the real engineering happens.

The “Composite Score” in Round 23 has been adjusted to reflect modern development needs. It places a significant emphasis on the Fortunes and Multiple Queries tests. This is why we see frameworks like Ruby Rage and GoFrame getting so much buzz. They aren’t just fast in one category; they are consistently high across the board. If you are choosing a framework for your next project, look at the composite score, not just the “Plaintext” chart. Unless, of course, your business model involves serving empty strings to users as fast as possible.
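To make the weighted-average idea concrete, here is a toy composite calculation. The weights below are placeholders of my own invention (TechEmpower’s actual coefficients are not reproduced here); the point is the shape of the math, and why an all-rounder beats a one-trick sprinter:

```go
package main

import "fmt"

// Illustrative composite score: a weighted average of normalized
// per-test results (1.0 = best framework in that test). These
// weights are placeholders, NOT TechEmpower's real coefficients;
// the DB-heavy tests are weighted up to mirror the article's point.
var weights = map[string]float64{
	"json":      1.0,
	"single":    1.0,
	"multiple":  1.5,
	"fortunes":  2.0,
	"updates":   1.5,
	"plaintext": 1.0,
}

func composite(normalized map[string]float64) float64 {
	var sum, wsum float64
	for test, w := range weights {
		sum += w * normalized[test]
		wsum += w
	}
	return sum / wsum
}

func main() {
	// A framework that aces Plaintext but tanks Fortunes...
	sprinter := map[string]float64{
		"json": 0.9, "single": 0.5, "multiple": 0.3,
		"fortunes": 0.2, "updates": 0.3, "plaintext": 1.0,
	}
	// ...versus one that is merely good everywhere.
	allRounder := map[string]float64{
		"json": 0.7, "single": 0.7, "multiple": 0.7,
		"fortunes": 0.7, "updates": 0.7, "plaintext": 0.7,
	}
	fmt.Printf("sprinter: %.2f  all-rounder: %.2f\n",
		composite(sprinter), composite(allRounder))
}
```

Run it and the all-rounder wins comfortably, despite never topping a single chart. That is the composite score’s whole thesis in fifteen lines.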

“Performance is not just about doing things fast; it’s about doing the right things without wasting resources. Round 23 shows us that the ‘right things’ are finally being prioritized.” — A very wise, slightly manic developer.

8. Code Comparison: What High Performance Looks Like in Round 23

Let’s look at what separates the “men from the boys” (or the “optimized binaries from the interpreted scripts”) in Round 23. A high-performance implementation focuses on non-blocking I/O and minimal object allocation.

Here is a conceptual example of a Fortunes test implementation that would perform well in the Round 23 environment:


// Concept for an optimized 'Fortunes' handler
// 1. Get connection from a highly-tuned pool
// 2. Execute query with minimal mapping overhead
// 3. Use a pre-compiled template engine
// 4. Stream response directly to the buffer

async function fortunesHandler(req, res) {
  const fortunes = await db.query("SELECT * FROM Fortune");
  fortunes.push({ id: 0, message: "Additional fortune added at runtime" });
  fortunes.sort((a, b) => a.message.localeCompare(b.message));

  // Using a high-performance template library
  const html = renderTemplate(fortunes);
  res.setHeader('Content-Type', 'text/html; charset=utf-8');
  res.send(html);
}

The frameworks that dominated Round 23 used techniques like “zero-copy” parsing, where the HTTP header data is never copied in memory at all—the parser simply keeps references into the original receive buffer. It’s dangerous, it’s low-level, and it’s why these frameworks are faster than your favorite high-level abstractions.
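Here is the zero-copy idea in miniature, in Go: the parsed header name and value come back as subslices that alias the original buffer, so no bytes are duplicated. (`splitHeader` is an illustrative toy of my own, not a real production parser—real ones add validation, folding rules, and size limits.)

```go
package main

import (
	"bytes"
	"fmt"
)

// splitHeader returns the name and value of a header line as
// subslices of the original buffer: nothing is copied, the returned
// slices simply point into buf. This is "zero-copy" in miniature.
func splitHeader(buf []byte) (name, value []byte, ok bool) {
	i := bytes.IndexByte(buf, ':')
	if i < 0 {
		return nil, nil, false
	}
	name = buf[:i]
	value = bytes.TrimSpace(buf[i+1:]) // TrimSpace also returns a subslice
	return name, value, true
}

func main() {
	line := []byte("Content-Type: application/json")
	name, value, _ := splitHeader(line)
	fmt.Printf("%s = %s\n", name, value)
	// The flip side of zero-copy: name and value alias line, so
	// mutating the buffer silently changes what they contain.
}
```

The trailing comment is the “dangerous” part mentioned above: keep the buffer alive and untouched for as long as those slices are in use, or enjoy some truly memorable bug hunts.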

Wong Edan’s Verdict

Alright, you beautiful nerds, here is the bottom line. TechEmpower Round 23 is a wake-up call. If you are still using the same old “it’s fast enough” excuse, you are being left in the dust. The results from March 2025 prove that performance and productivity are no longer mutually exclusive.

The Winners: GoFrame for proving that a full-featured framework can still run like a scalded cat. Ruby for its “Rage” fueled comeback that has shocked the industry. And the TFB Infrastructure for finally making continuous benchmarking a reality.

The Losers: Anyone who thinks that performance doesn’t matter in 2025. In an era of cloud costs and carbon footprints, every millisecond you save is money in the bank and a tree that doesn’t have to be cut down to power a data center. Round 23 isn’t just a list of numbers; it’s a roadmap for the future of the web.

Go forth, check the GitHub, audit the results, and for the love of all that is holy, optimize your database queries! Until next time, keep your latencies low and your spirits high. Wong Edan, out!