Now for a change of pace. Recently at work, we’ve been trying to figure out what platform to build an application on for serving realtime data to customers. We want the system to be fast, scalable, and able to handle hundreds, maybe thousands, or even tens of thousands of requests per second.
I did a bit of prototyping in both Node.js and Vert.x to see how they performed under pressure. To do this, I built a cute webapp that fires a bunch of requests at basic web servers written in both Node.js and Vert.x, to see how fast they could respond and how they would hold up under a heavy load of requests. Here’s a picture of the UI made for the webapp (built in Angular.js).

I created a form that allows for various inputs. In it, one can specify the following variables:
- Iterations – number of HTTP requests to make
- Block Size – how often a result is computed (each block reports the start time, the end time, the average time per call (ms), and the total time (ms))
- Range – how many results to display on screen (the graph tracks this)
- Polling – toggle on/off (when on, polling requests are issued as fast as Node.js can handle them; these are serial in nature)
- Show Graph – toggle graphing on/off (off will give better JavaScript performance)
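To make those knobs concrete, here’s a minimal sketch of how the form might bind to a parameters object in the Angular controller. The names and defaults here are my illustration, not necessarily what’s in the repo:

```javascript
// Illustrative defaults only -- the real controller in the repo may differ.
$scope.params = {
  iterations: 1000,  // total number of HTTP requests to make
  blockSize: 100,    // compute/report a result every N responses
  range: 20,         // how many results the graph keeps on screen
  polling: false,    // serial requests fired as fast as responses arrive
  showGraph: true    // turn off for better JavaScript performance
};
```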
Thanks to angular-seed for the fast prototyping and angular-google-chart for charting.
Benchmarking Parameters: Each request is a simple GET request made to the respective web server, which then writes a header and a “Hello” response. The requests are made through Angular’s $http service. When a successful response is received, the callback function issues another $http request, until the number of successful responses received equals the number of iterations specified. I measure the time from when the first request in a block is made until all of the responses in that block have been received.
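In sketch form, the chaining looks something like this (function and variable names are mine, for illustration; the real code lives in the repo linked at the end):

```javascript
// Fire requests serially: each response triggers the next request
// until `iterations` responses have been received.
function runBenchmark($http, url, iterations, blockSize, onBlock) {
  var completed = 0;
  var blockStart = Date.now();

  function next() {
    $http.get(url).then(function() {
      completed++;
      if (completed % blockSize === 0) {
        var blockEnd = Date.now();
        // Report start time, end time, and average time per call (ms)
        onBlock(blockStart, blockEnd, (blockEnd - blockStart) / blockSize);
        blockStart = blockEnd;
      }
      if (completed < iterations) next();
    });
  }
  next();
}
```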
Time Keeping: I try to avoid any delays attributable to JavaScript rendering (e.g., one timestamp is taken when the first request in the block is made, and another is recorded when the last response in the block is received). I pass both timestamps to a JavaScript function, which is responsible for rendering the results to the display. I also added a function to enable polling requests, which makes $http requests as fast as responses can be received in order to add stress to the server’s load. This is enabled with the “polling” button.
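Polling, sketched the same way, is essentially the request chain without a stopping condition (again, the names here are illustrative):

```javascript
// Keep issuing serial requests for as long as the polling toggle is on.
function startPolling($http, url, isPollingOn) {
  function poll() {
    $http.get(url).then(function() {
      if (isPollingOn()) poll(); // next request fires as soon as this one returns
    });
  }
  poll();
}
```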
Here’s a snippet of the web server source code.
In Node.js:
```javascript
StaticServlet.prototype.handleRequest = function(req, res) {
  var self = this;
  // Collapse double slashes and decode percent-encoded characters
  var path = ('./' + req.url.pathname).replace('//', '/').replace(/%(..)/g, function(match, hex) {
    return String.fromCharCode(parseInt(hex, 16));
  });
  console.log(path);
  if (path == './show') {
    res.writeHead(200, {'content-type': 'text/html'});
    res.write("Hello: ");
    res.end();
  } else if (path == './version') {
    res.writeHead(200, {'content-type': 'text/html'});
    res.write("Node.js version: " + 'v0.10.26');
    res.end();
  } else {
    var parts = path.split('/');
    // Hide dotfiles; otherwise serve files and directories from disk
    if (parts[parts.length - 1].charAt(0) === '.') {
      return self.sendForbidden_(req, res, path);
    } else {
      fs.stat(path, function(err, stat) {
        if (err) return self.sendMissing_(req, res, path);
        if (stat.isDirectory()) return self.sendDirectory_(req, res, path);
        return self.sendFile_(req, res, path);
      });
    }
  }
};
```
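For context, here’s roughly how a handler like that gets wired into a server. This is a hedged sketch, not the repo’s exact entry point, and the port is an assumption:

```javascript
var http = require('http');
var url = require('url');
var fs = require('fs'); // used by handleRequest above

var servlet = new StaticServlet();
http.createServer(function(req, res) {
  req.url = url.parse(req.url); // handleRequest expects a parsed req.url.pathname
  servlet.handleRequest(req, res);
}).listen(8080);
```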
In Vert.x:
```javascript
server.requestHandler(function(req) {
  var file = '';
  if (req.path() == '/show') {
    req.response.chunked(true);
    req.response.putHeader('content-type', 'text/html');
    req.response.write("Hello: ");
    req.response.end();
  } else if (req.path() == '/version') {
    req.response.chunked(true);
    req.response.putHeader('content-type', 'text/html');
    req.response.write('Vertx version: ' + '2.0.2-final (built 2013-10-08 10:55:59)');
    req.response.end();
  } else if (req.path() == '/') {
    file = 'index.html';
    req.response.sendFile('app/' + file);
  } else if (req.path().indexOf('..') == -1) {
    // Guard against path traversal, then serve static assets from app/
    file = req.path();
    req.response.sendFile('app/' + file);
  }
});
```
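And the Vert.x side of the wiring, for symmetry. Again a sketch with an assumed port, using the Vert.x 2 JavaScript API:

```javascript
var vertx = require('vertx');

// Create the server, attach the requestHandler shown above, and listen.
var server = vertx.createHttpServer();
server.requestHandler(function(req) {
  // ... route handling from the snippet above ...
});
server.listen(8080);
```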
See? Dead simple. Of course, there are lots of flaws with this methodology (e.g., the web servers are only serving static data, are writing short responses, are not optimized, etc.). It wasn’t my intention to come to a hard conclusion with this; I just wanted a data point and to experiment with both platforms. It turns out they came very close to one another in terms of performance, at least in this test. Both servers were running on my machine, whose specs are listed below.
System Specs: MacBook Pro, mid-2012, 2.3 GHz with 16 GB RAM and a 512 GB SSD. Both web servers are running locally on my machine with a bunch of other apps open.
And here are some preliminary results:
Here’s the Node.js web server, with polling turned on:
Here’s the Vert.x web server, with polling turned on:
You can see that they’re very close. Next, I tried stressing both servers a bit by running several concurrent queries and several “instances” of the webapp. In a later post, I’ll put up more detailed results from trying to stress both web servers out. Response time definitely slows down as more concurrent requests are made.
Conclusions: Both web servers are surprisingly close in terms of response/processing/overhead time. My CPU usage runs a bit higher on the Vert.x server, but I do have several other applications running. I also haven’t tested running multiple instances of the same verticle in Vert.x, or forking processes in Node.js (a sketch of what that might look like follows below). Both web servers are as barebones as they get. In other words, I can’t make any hard conclusions yet, except to say that both servers are:
- Able to handle high loads of requests per second (probably on the order of 500 – 1000)
- Roughly equivalent out of the box
These results seemed a little surprising to me, given that benchmarks on the web tend to show Vert.x coming out faster. One factor that may contribute to this is the lack of realism in the server response: it’s probably not the case that so many requests would be coming in simultaneously from a single client (or there would be multiple server instances to handle such requests), and the response size is very small. Since both servers are basically just writing to a stream, as long as the I/O is written with performance in mind, this may be roughly as fast as my CPU can handle writing responses. Another hypothesis is that Vert.x really shines when handling multiple instances. I’ll have to experiment and report my results.
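For that multi-instance follow-up: on the Vert.x side this would presumably mean something like `vertx run server.js -instances 4`, while on the Node.js side the built-in cluster module is the usual way to fork worker processes. Here’s a minimal sketch of the latter, with the port and worker count as assumptions:

```javascript
// Fork one worker per CPU core; workers share the same listening port.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  http.createServer(function(req, res) {
    res.writeHead(200, {'content-type': 'text/html'});
    res.end('Hello: ');
  }).listen(8080);
}
```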
Postscript: In case you want to try it out for yourself, I’ve made the source code available on my GitHub at https://github.com/rocketegg/Code-Projects/tree/master/ServerPerformance. I know this test has been done with much more sophistication by a lot of people out there, but hopefully you can play around with this webapp, have fun, and learn something.