Wong Edan's

TechEmpower Round 23: The Hunger Games of Backend Frameworks

April 03, 2026 • By Azzar Budiyanto

Welcome to the Madhouse: Decoding the TechEmpower Round 23 Chaos

Listen up, you beautiful code-shoveling peasants! Your favorite digital gladiator arena is back, and it’s messier than a production server after a junior dev discovers sudo rm -rf. I’m talking about the TechEmpower Framework Benchmarks Round 23. If you’ve been living under a rock—or worse, writing vanilla PHP without a framework—TechEmpower is where the world’s most ambitious web application platforms come to flex their muscles, measure their throughput, and occasionally cry in a corner when a C++ framework beats them by 400%.

As your resident “Wong Edan” tech blogger, I’ve spent the last 48 hours vibrating on high-grade caffeine and analyzing these results so you don’t have to. Round 23 is particularly spicy because it landed in early 2025 (with ripples felt into 2026), and let’s just say some of your favorite “fast” frameworks are looking a little bit sluggish. We’ve seen a substantial increase in performance across the board, particularly in network-bound tests. If your framework didn’t get faster this round, it’s basically moving backward in the eyes of the performance gods.

In this deep dive, we are going to dissect the web application performance evaluation of Round 23, look at why Ruby is suddenly acting like it went to the gym, why GoFrame is making headlines, and why the Rust community is arguing about deadpool_postgres. Buckle up; it’s going to be a bumpy, high-throughput ride.

The Ruby Renaissance: Not Dead, Just Optimization-Heavy

Stop the presses! Someone tell the Twitter (or X, whatever) trolls that Ruby is still breathing. One of the most surprising takeaways from the TechEmpower Framework Benchmarks Round 23 is the “nice improvements” seen in Ruby frameworks. Historically, Ruby has been the “slow but pretty” child of the backend world. But in Round 23, if we compare the composite scores, Ruby frameworks have shown a significant upward trend.

What changed? Well, the community has been obsessing over performance optimizations that finally hit the benchmarking suite. While it’s still not going to outrun a raw C++ or Rust implementation in a JSON serialization sprint, the improvements in its composite score suggest that for real-world, high-productivity scenarios, the performance tax is getting lower. This is critical for developers who want to balance developer happiness with backend framework performance.

“Ruby frameworks got some nice improvements… if we compare the composite score to the previous rounds, the growth is undeniable.” – Fragment from the r/ruby analysis.

For those of you building the next “Uber for cats,” this means you can stick with your precious Gems without feeling like you’re running a race in lead boots. Just don’t try to compare it directly to a bare-metal C++ framework unless you enjoy being humbled.

GoFrame: The New Heavyweight in the Go Ecosystem

Moving on to the gophers. If you follow TechEmpower’s latest performance evaluations, you’ll notice a name popping up with aggressive frequency: GoFrame. As a full-featured Go web development framework, GoFrame has managed to achieve “excellent results” in Round 23. This is significant because, usually, “full-featured” or “full-stack” frameworks trade performance for convenience.

GoFrame seems to have cracked the code. It provides the bells and whistles—ORM, logging, configuration management—without sacrificing the raw throughput Go is famous for. In the Round 23 tests, GoFrame showed that it can handle massive concurrency while maintaining low latency. This makes it a formidable contender against micro-frameworks that usually dominate the top of the charts by doing absolutely nothing but returning {"message": "Hello, World!"}.
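For perspective, here is roughly what that top-of-chart “work” boils down to: a toy, stdlib-only Rust sketch of the JSON test response. This is hypothetical and deliberately bare; real benchmark entries run a full HTTP stack and a proper serializer rather than a `format!` call.

```rust
// Toy version of the TechEmpower "JSON serialization" test, hand-rolled
// with only the standard library. Hypothetical sketch: real entries use
// a full HTTP stack and a real serializer, not a format! call.
fn json_test_response() -> String {
    let body = r#"{"message":"Hello, World!"}"#;
    format!(
        "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    )
}

fn main() {
    let resp = json_test_response();
    // The canonical 27-byte body every top-of-chart framework serves.
    assert!(resp.contains("Content-Length: 27"));
    assert!(resp.ends_with(r#"{"message":"Hello, World!"}"#));
    println!("{}", resp.lines().next().unwrap());
}
```

That is the entire workload. Which is exactly why a full-featured framework holding its own on the same table actually means something.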

The success of GoFrame in this round highlights a shift in the TechEmpower Framework Benchmarks. We are seeing more “useful” frameworks climbing the ranks, not just specialized high-performance skeletons that no one would actually use to build a real enterprise app.

The Rust Drama: The Mystery of Deadpool-Postgres

Now, let’s talk about Rust. Rust developers are like the CrossFitters of the tech world—they’re fast, they’re strong, and they’ll tell you about it every five minutes. However, Round 23 brought some confusion to the camp. Specifically, there were discussions around why deadpool_postgres seemed to be slower than expected in certain test scenarios.

On platforms like Reddit, the debate raged: Why would a high-performance library like deadpool_postgres (a dead-simple async pool for tokio-postgres) show overhead? The reality of benchmarking is that sometimes the “safest” or most “feature-complete” library introduces tiny amounts of latency that show up when you’re pushing millions of requests per second. Some developers in the community even suggested swapping libraries just to chase a higher score—a classic case of “benchmark-driven development” madness.


// Example of the kind of code being scrutinized in Round 23 Rust tests
use deadpool_postgres::{Config, Runtime};
use tokio_postgres::NoTls; // NoTls comes from tokio-postgres, not deadpool

fn main() {
    let mut cfg = Config::new();
    cfg.dbname = Some("benchmark_db".to_string());
    // Is the overhead here? Or in the driver?
    let _pool = cfg.create_pool(Some(Runtime::Tokio1), NoTls).unwrap();
    // Round 23 results sparked a deep dive into these pool dynamics.
}

This highlights a key truth about high-performance backend frameworks: at the top tier, the battle isn’t about the language anymore; it’s about the efficiency of the database drivers and connection pools. If your pool is “dead,” your performance is “pool,” or something like that. I’m a blogger, not a comedian.
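To make the “where does the overhead hide” question concrete, here is a toy pool in plain std Rust. It is emphatically not deadpool_postgres (no async, no real connections; everything here is a hypothetical stand-in). It just shows the acquire/release hot path where a lock, a pop, and some bookkeeping quietly eat nanoseconds that only matter at millions of requests per second:

```rust
use std::sync::{Arc, Mutex};

// A fake "connection" standing in for a real DB socket. Hypothetical.
struct Conn {
    id: usize,
}

// Toy pool: every acquire/release takes a lock. At benchmark request
// rates, this critical section is exactly the kind of place
// micro-latency hides.
struct ToyPool {
    idle: Mutex<Vec<Conn>>,
}

impl ToyPool {
    fn new(size: usize) -> Arc<Self> {
        Arc::new(Self {
            idle: Mutex::new((0..size).map(|id| Conn { id }).collect()),
        })
    }

    fn acquire(&self) -> Option<Conn> {
        self.idle.lock().unwrap().pop() // lock + pop: the hot path
    }

    fn release(&self, conn: Conn) {
        self.idle.lock().unwrap().push(conn); // lock again on the way back
    }
}

fn main() {
    let pool = ToyPool::new(2);
    let a = pool.acquire().unwrap();
    let b = pool.acquire().unwrap();
    println!("checked out conns {} and {}", a.id, b.id);
    assert!(pool.acquire().is_none()); // exhausted: a real pool would await here
    pool.release(a);
    pool.release(b);
    assert_eq!(pool.idle.lock().unwrap().len(), 2);
}
```

A real async pool also has to park and wake tasks when it runs dry, which is the kind of place the Reddit threads were pointing fingers.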

Network-Bound Gains: A Rising Tide Lifts All Containers

One of the most technical revelations from the March 2025 reports on Round 23 is the substantial increase in performance across the board, particularly in network-bound tests. TechEmpower noted that the benchmarking environment itself, along with the maturity of the underlying networking stacks (like Netty for Java, or the various epoll wrappers for C++ and Rust), has reached a new peak.

This means that the “floor” for web performance has been raised. In Round 23, even “middle-of-the-pack” frameworks are posting numbers that would have been record-breaking five rounds ago. This is largely due to better handling of pipelining and more efficient utilization of the network interface cards (NICs) in the benchmarking hardware. If you’re looking at web application performance evaluation metrics, you need to recalibrate your brain. What used to be “fast” is now “average.”
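Since “better handling of pipelining” does a lot of work in that sentence, here is a minimal, stdlib-only Rust sketch of what HTTP/1.1 pipelining actually is: several requests written down one connection before any response is read. The loopback server below is a hypothetical toy, not a benchmark entry.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() {
    // Toy loopback server (hypothetical): buffers bytes until it has seen
    // two complete header blocks, answers both, then hangs up.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let server = thread::spawn(move || {
        let (mut sock, _) = listener.accept().unwrap();
        let needle: &[u8] = b"\r\n\r\n"; // end of one body-less request
        let mut data = Vec::new();
        let mut buf = [0u8; 1024];
        loop {
            let n = sock.read(&mut buf).unwrap();
            if n == 0 {
                break;
            }
            data.extend_from_slice(&buf[..n]);
            if data.windows(4).filter(|&w| w == needle).count() >= 2 {
                break;
            }
        }
        for _ in 0..2 {
            sock.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
                .unwrap();
        }
        // Socket drops here: the connection closes and the client sees EOF.
    });

    // Pipelining: both requests go down one connection before we read
    // a single byte of response.
    let mut client = TcpStream::connect(addr).unwrap();
    let req = b"GET /plaintext HTTP/1.1\r\nHost: bench\r\n\r\n";
    client.write_all(req).unwrap();
    client.write_all(req).unwrap();

    let mut out = String::new();
    client.read_to_string(&mut out).unwrap();
    server.join().unwrap();
    assert_eq!(out.matches("200 OK").count(), 2);
    println!("pipelined 2 requests, got 2 responses on one connection");
}
```

The benchmark’s plaintext test leans heavily on this pattern, which is why gains in pipelining and NIC utilization lift the whole table at once.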

The Bitter End: Is TechEmpower Being Archived?

Now, here is the bombshell. While we are celebrating the Round 23 results, there are whispers—and GitHub issues—suggesting that the TechEmpower Framework Benchmarks have been archived somewhere in the March 2024 to March 2026 window. After years of being the gold standard for backend performance, the project reached a point where posting good results became a top marketing priority for framework developers, sometimes overshadowing actual utility.

Why archive it? The project became a victim of its own success. When developers start writing code specifically to win a benchmark rather than to build better software, the benchmark loses its “real-world” relevance. Round 23 might be one of the final comprehensive snapshots we get of this era of web development. It’s the “End of Evangelion,” but for backend developers. We are seeing the final scores of a decade-long war for throughput supremacy.

However, the legacy of Round 23 lives on in the source code. The techempower/FrameworkBenchmarks GitHub repository remains a treasure trove of high-performance patterns. If you want to know how to squeeze every drop of juice out of a Linux kernel, that’s where you look.

Wong Edan’s Verdict: Who Actually Won?

So, who is the real king of TechEmpower Framework Benchmarks Round 23? If you look at the raw numbers, the usual suspects (C++, Rust, and sometimes Zig or Nim) are sitting on the iron throne. But if you look at “real-world” victory, the winners are the frameworks that managed to close the gap between performance and productivity.

  • The “Holy Cow” Award: Ruby. For not being as slow as everyone says it is. Those composite score improvements are no joke.
  • The “Workhorse” Award: GoFrame. For proving that you can have a full-featured framework and still run laps around the competition.
  • The “Soul-Searching” Award: Rust. For having an existential crisis over a connection pool’s micro-latency.
  • The “Final Bow” Award: TechEmpower itself. For providing us with years of data to argue about on Reddit while our builds were failing.

In conclusion, Round 23 shows us that the web is faster than ever. Whether you choose a high-performance beast or a developer-friendly framework, the tools available today are insanely optimized. Just remember: a benchmark score won’t fix your crappy database queries or your unoptimized front-end assets. Performance is a holistic nightmare, and Round 23 is just one map of the labyrinth.

Now, if you’ll excuse me, I need to go see if I can get a framework written in COBOL to rank in the top 100. Stay crazy, stay coding!