Another comparison of HAProxy and Nginx

In my previous post about web application proxies, I compared HAProxy and Nginx performance when proxying a simple Rails application. While HAProxy was able to serve pages faster and more consistently, the benchmark also uncovered an apparent design flaw in HAProxy that caused some connections to hang around in the queue for a long time. HAProxy’s author, Willy Tarreau, quickly stepped in to attack the problem, and soon provided a new point release:

My first analysis was that this problem was caused by “direct” requests (those with a server cookie) always being considered before the load balanced ones. But while fixing this design idiocy, I discovered a real problem: it was perfectly possible for a fresh new request to be served immediately without passing through the queue, causing requests in the queue to be delayed for at least as long as the queue timeout, until they might eventually expire. Now *that* explains the horrible peaks on Alexander’s graphs. My problem was that it was a real misdesign, which could not be fixed by a 3-liner patch. So I spent the whole week reworking the queue management logic in a saner manner and running regression tests.

The fix has further repercussions:

[T]he good news is that not only this fixes a number of 503 errors and long response times when running with a low maxconn, but as an added bonus, the “redispatch” option is now naturally considered when a server’s maxqueue is reached, so that it will now not be necessary anymore to trade between large queues and the risk of returning 503 errors.

Willy also realized that his redesign work would lead the way to priority-based request scheduling in the future, which is great news.

With the new release in hand, I have finally found the time to sit down and do a rematch. The conclusion? In short, the patch works as intended: It eliminates the odd spikes while still providing smoother performance than Nginx. The spikes that remain are present with Nginx as well, and their regularity implies some kind of periodic activity, possibly on the box itself, although a much more likely culprit is Ruby’s garbage collection. Damn you, curiously slow and old-fashioned interpreter implementation!

Finally, some people requested CPU usage data from vmstat. For this new benchmark I updated my scripts to run vmstat concurrently with ab, hoping there would be some meaty differences for charting, but it turns out that there is no significant difference between HAProxy and Nginx — at best, CPU usage looks a trifle smoother with HAProxy, but this could be a fluke. I suspect you have to amp up the load considerably to achieve a sensible comparison. Still, I have included the vmstat data in the raw data tarball for anyone who is interested.
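For anyone who wants to reproduce the sampling, it can be sketched roughly like this — a simplification of my actual scripts, with the URL, request count, and file names as placeholders:

```shell
#!/bin/sh
# Rough sketch: sample CPU with vmstat in the background while ab runs.
# Assumes ab (ApacheBench) and vmstat are installed; URL, request count,
# and concurrency are placeholders, not the values from the benchmark.
URL="${1:-http://127.0.0.1:8000/}"
CONC="${2:-10}"

vmstat 1 > "vmstat-$CONC.log" &     # one sample per second, in the background
VMSTAT_PID=$!

ab -n 10000 -c "$CONC" "$URL" > "ab-$CONC.log" 2>&1

kill "$VMSTAT_PID" 2>/dev/null      # stop sampling when the benchmark finishes

# vmstat's 15th column is "id" (idle CPU); report the average over the run
awk 'NR > 2 { sum += $15; n++ } END { if (n) printf "avg idle: %d%%\n", sum / n }' "vmstat-$CONC.log"
```

The two first vmstat lines are headers, hence the `NR > 2` guard when averaging.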

Anyway, enjoy the graphs. Many thanks to Willy for working out a solution so promptly and expertly.

Nginx vs HAProxy at 3 concurrent connections

Nginx vs HAProxy at 10 concurrent connections

Nginx vs HAProxy at 30 concurrent connections


  1. Posted June 28, 2008 at 6:42 am | Permalink

    Excellent work Alexander! This is the very first demonstration that using “maxconn 1” really improves performance with Rails. Now we have numbers and we can say that it reduces response times by a factor of 2-3 depending on the load. I will add a link to your tests on haproxy’s site because a lot of people often ask if it really helps or not. Now they will have a scientific response :-)

    It’s amazing to see in vmstat that there is a high CPU load with still 25% idle! I suspect that the CPU usage pattern is not regular. I think that if you still want to improve response times, you may want to experiment with three axes: add more Ruby instances, slightly increase maxconn (e.g. 2 or 3), and add “option forceclose”, which will not wait for the end of the transaction to close the request channel to the server. This should save some memory there, and speed up the connection reassignment.

    If we can figure out best settings for Rails applications, I could add a specific configuration example in the tarball.

    However, I know that running such experiments takes a lot of time, so I can imagine that we’ll not see new graphs *that* soon.

    Thanks for the time you took validating this new version, it further confirms the problem is really fixed :-)
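Willy’s three suggestions above (more Ruby instances, a maxconn of 2 or 3, and “option forceclose”) might look roughly like this in an HAProxy config — the backend name, ports, and instance count are hypothetical, and the exact maxconn value would need benchmarking:

```
# Hypothetical sketch of the tuning suggested above; backend name,
# ports, and maxconn value are placeholders, not tested settings.
backend rails
    option forceclose                            # close the server channel early
    server app1 127.0.0.1:8001 maxconn 2 check
    server app2 127.0.0.1:8002 maxconn 2 check
    server app3 127.0.0.1:8003 maxconn 2 check   # extra Ruby instance
```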


  2. Mitch
    Posted June 28, 2008 at 9:27 pm | Permalink

    Great collaboration guys! I have been debating between haproxy and nginx for my next web farm stack, but after reading this, my mind is made up. Looking forward to the next set of tests Alexander.

  3. Posted June 30, 2008 at 6:23 am | Permalink

    It will be interesting to try ruby-enterprise [ ] — with its “enhanced garbage collector” and “improved memory allocator” — and see if this removes the spikes.

  4. Posted July 1, 2008 at 1:31 am | Permalink

    Alexander, Willy, awesome work! Rebuilding HAProxy on all of my instances as I write this. Now, of course, the question is: how much more performance can you squeeze out of HAProxy with some intelligent queuing? I imagine having at least another request queued up on the mongrel might help quite a bit.

    Alexander: does your benchmark include any DB calls? If not, then theoretically you should be able to semi-safely process concurrent requests. Unless the dispatch code has a mutex in Rails as well? (not sure about latest version)

  5. Alexander
    Posted July 1, 2008 at 1:41 pm | Permalink

    @Ilya: No DB calls. The Rails dispatcher has a big mutex around the dispatching code, and I haven’t heard of any recent changes here. The whole point of this test, however, was to gauge the efficiency of the proxy when proxying a server process which cannot consume more than one connection at a time.

    @Stoyan: Indeed. I’ll give it a shot.

  6. Posted July 10, 2008 at 5:18 am | Permalink

    I think a CDF or histogram is called for here. It would also be interesting to see an overload test although I don’t think ab can generate overload.

  7. Morten
    Posted August 13, 2008 at 11:13 am | Permalink

    How would you handle serving files that require authentication (via the Rails app) in such a setup? Allow nginx to also call Rails for authentication which then returns the X-Accel-Redirect for nginx?

  8. Alexander
    Posted August 16, 2008 at 6:51 pm | Permalink

    Rails can accept/emit the appropriate authentication headers. Why involve Nginx?

  9. Morten
    Posted August 19, 2008 at 3:46 pm | Permalink

    Nginx should serve the files — doing that in Rails would be unhealthy (google: rails send_file mem)

  10. Posted August 20, 2008 at 10:02 pm | Permalink

    Interesting comparison. Do you think that Nginx with the fair proxy module will have similar results as HAProxy?

  11. Alexander
    Posted August 20, 2008 at 10:48 pm | Permalink

    @Morten: So serve them with Nginx and proxy back into HAProxy.

    @daeltar: This comparison is with the fair proxy module.

  12. Posted August 21, 2008 at 12:30 am | Permalink

    @Alexander — sorry, I read it in the previous article after this one.

  13. Tommy Tiddlekins
    Posted October 27, 2008 at 9:03 pm | Permalink

    Can HAProxy serve static files though without involving Mongrel?

  14. Alexander
    Posted October 27, 2008 at 10:44 pm | Permalink

    No. HAProxy is a pure load balancer — it doesn’t know how to map URLs to files. You need a web server such as Nginx or Lighttpd. (I wouldn’t use Mongrel.)

  15. Posted November 15, 2008 at 12:39 pm | Permalink

    “Can HAProxy serve static files though without involving Mongrel?”

    No, but you can configure an ACL filter and send all static requests to another server (e.g. lighttpd, Apache, Nginx):

    # Static content
    acl url_static path_beg /javascripts /stylesheets /images
    acl url_static path_end .jpg .jpeg .gif .png .ico .pdf .js .css .flv .swf
    acl host_static hdr_beg(host) -i static0. static1. static2. static3.

    use_backend static if host_static or url_static

    # Default to dynamic content
    default_backend dynamic

  16. Posted January 26, 2009 at 2:35 pm | Permalink

    see for another comparison of haproxy and nginx.
