Given the popularity of certain asynchronous web servers, I’ve decided to use my first research post to look into performance comparisons between asynchronous and synchronous servers. All the benchmarks I’ve seen involve comparing different types of web servers to each other, without a clear apples-to-apples comparison between the two networking models. So, I’ve written two identical web-like servers: an asynchronous event-based server (async_server.cc) and a synchronous thread-based server (sync_server.cc). Both servers would clearly perform equally well handling a single-connection-at-a-time client, so the real test is to see how well each server performs while handling a large number of clients concurrently.

For this test I measured some basic metrics: queries per second, latency, and CPU and memory usage. I performed two runs; the metrics shown here are averages of the two runs, smoothed with a simple smoothing function.


The servers were compiled with clang++ 3.5.0-10 at -O3 against Boost 1.55. Both were run on Google Compute Engine virtual machine instances of n1-standard-4 size in the us-west1-a region, with the client (client.cc) and server running on separate instances on the same local network.

First, let’s start with queries per second as a function of the number of concurrent connections per server (higher is better):

[Figure: queries per second vs. concurrent connections]

The asynchronous server can sustain around 18% more QPS than the synchronous server, peaking at around 30,000 more QPS at 7,000 concurrent connections. Next, latency (lower is better):

[Figures: mean latency, 50th-percentile latency, 90th-percentile latency]

Mean latency is around 19% lower for the asynchronous server. Neither difference is hugely significant until we look at memory and CPU usage (lower is better):

[Figures: memory usage, CPU usage]

Memory usage was significantly higher for the synchronous server. CPU usage was about the same, though the asynchronous server still came out slightly ahead.


In this simple test my asynchronous server slightly outperformed the synchronous server. However, graphs and figures don't tell the whole story: the code for the synchronous server was much easier to write and reason about. I understand the desire to always reach for the fastest "webscale" frameworks, but unless memory usage is a real concern, the added code complexity of an asynchronous event-based server probably isn't worth it.


For the fun of it, I decided to performance test NodeJS 0.10.29 with an identical setup (node_server.js) to see how it compares. Note that I had to switch to a log-scaled y-axis so that NodeJS is visible at all:

[Figures: queries per second, 90th-percentile latency (log-scale y-axis)]