A simple n-queens solution in JavaScript

N Queens (8)

I was recently playing around with the n-queens problem and decided to try to solve it. I’d never done it before, but figured I could use a depth-first-search kind of solution. I ended up building a solution in node.js that identifies all potential solutions, then put it in angular.js to run on the front end so users can play around and visualize the solutions.

Essentially, the algorithm works like this:

  • If there’s only 1 queen left to place and at least one valid position remaining, then placing it yields a valid solution. If it’s a unique solution, let’s store it.
  • If there is more than one queen left to place, and the number of valid positions left is less than the number of queens, then this is the wrong path, let’s backtrack.
  • If neither applies, let’s iterate through all the potential moves in the adjacent column, then recompute the possible positions from that grid state.
  • Finally, after we place the queen, we recursively call the algorithm for one less queen, a new grid state, recomputed valid positions and the next column.

I haven’t had much time to optimize, but here are a few ideas I could try:

  1. Since a chess board is symmetrical, I probably don’t need to check all four corners (since they’ll yield the same result if the board is rotated 90 degrees)
  2. Use a stack for managing the valid positions instead of a simple array. Right now I make a copy of all remaining valid positions every time a queen is placed.
  3. Use a better way to identify unique solutions than concatenating a string and comparing each new solution that comes in (a quick sketch of one alternative follows this list)
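
For idea 3, here’s a minimal sketch of one option: use a plain object as a set, so duplicate checks are O(1) lookups instead of a linear scan with indexOf. This is illustrative only; the code below still uses the indexOf approach.

// Hypothetical replacement for solutions.indexOf(solution):
// an object used as a set gives O(1) lookups instead of a linear scan.
var seen = {};
function isNewSolution(solution) {
    if (seen[solution]) {
        return false;
    }
    seen[solution] = true;
    return true;
}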

A couple optimizations that already helped:

  1. pruning next potential positions to only the starting column + 1 (enormous performance enhancement)
  2. pruning routes when number of queens left > number of open positions
  3. cutting down the length of the unique string when comparing objects

In any case, it was a fun little experiment. Here’s the source code, and here’s where you can see it working in jsfiddle. Warning!!! Angular tends to run pretty slowly and blocks while it computes, so don’t try to run with too many queens (like > 12 or 13), otherwise you may cause your browser to freeze.

N Queens – http://jsfiddle.net/rocketegg0/wu6cpp5v/

Some average run times:

Run Times

Num Queens | Solutions | # Comparisons | Run Time (ms)
4          | 2         | 12            | <1
5          | 10        | 41            | 1
6          | 4         | 138           | 1
7          | 40        | 473           | 1
8          | 92        | 1758          | 7
9          | 352       | 7077          | 16
10         | 724       | 30654         | 45
11         | 2680      | 142755        | 360
12         | 14200     | 734048        | 4936
//Pair is assumed from the original source; a minimal version so the snippet runs:
function Pair(x, y) {
    this.x = x;
    this.y = y;
}

function Grid(width, height) {
        this.width = width;
        this.height = height;
        var grid = [];
        var validPositions = [];
    
        for (var i = 0; i < width; i++) {
            grid.push([]);
            for (var j = 0; j < height; j++) {
                grid[i][j] = '_';
                validPositions.push(new Pair(i,j))
            }
        }
    
        this.getValidPositions = function() {
            return validPositions;
        }
    
        this.getGrid = function() {
            return grid;
        }
        
        //Position validation
        function isQueen(pair, x, y) {
            return pair.x == x && pair.y == y;
        }
        
        function isInRow(pair, y) {
            return pair.y == y;
        }
        
        function isInCol(pair, x) {
            return pair.x == x;
        }
        
        //Ex: 
        //3, 2 > 4, 1 = -1, 1
        //3, 2 > 4, 3 = -1, -1
        //3, 2 > 2, 1 = 1, 1
        //3, 2 > 2, 3 = 1, -1
        function isInDiagonal(pair, x, y) {
            if (Math.abs(pair.x - x) == Math.abs(pair.y - y)) {
                return true;
            }
            return false;
        }
        
        function testPosition(pair, x, y) {
            if (isQueen(pair, x, y) ||
                isInRow(pair, y) ||
                isInCol(pair, x) ||
                isInDiagonal(pair, x, y)) {
                return true;
            }
            return false;
        }
    
        function recomputeValid(validPositions, queenX, queenY) {
            var newValid = [];
            for (var i = 0; i < validPositions.length; i++) {
                if (!testPosition(validPositions[i], queenX, queenY)) {
                    newValid.push(validPositions[i]);
                }
            }
            return newValid;
        }
        
        function gridToString(grid) {
            var gridstr = '';
            for (var i = 0; i < grid.length; i++) {
                gridstr += '['
                for (var j = 0; j < grid[0].length; j++) {
                    gridstr += grid[i][j];
                    if (j < grid[i].length-1) {
                        gridstr += '|'
                    }
                }
                gridstr += ']'
                gridstr += '\n';
            }
            return gridstr;
        }
    
        var printGrid = function (grid) {
            console.log(gridToString(grid));
        }
    
        //Note: this code lives inside an Angular controller, hence $scope
        var solutions = [];
        $scope.solution_grids = [];
    
        function convertToSolution(grid) {
            var str = '';
            for (var i = 0; i < grid.length; i++) {
                for (var j = 0; j < grid.length; j++) {
                    if (grid[i][j] == 'Q') {
                        str += i + '#' + j;
                    }
                }
            }
            return str;
        }
        
        this.numcomputations = 0;
    
        this.solve = function (numQueens, grid, validPositions, startcol) {
            if (numQueens <= 1 && validPositions.length > 0) {
                grid[validPositions[0].x][validPositions[0].y] = 'Q';
                var solution = convertToSolution(grid);
                if (solutions.indexOf(solution) == -1) {
                    solutions.push(solution);
                    $scope.solution_grids.push(gridToString(grid));
                }
                grid[validPositions[0].x][validPositions[0].y] = '_'; //reset
                return true;
            } else if (numQueens > validPositions.length) {
                return false;
            } else {
                var x, y;
                var nextcol = validPositions.filter(function(point) {
                    return point.x == startcol + 1;
                });    //prune routes to only next col
                for (var i = 0; i < nextcol.length; i++) {
                    x = nextcol[i].x, y = nextcol[i].y;
                    grid[x][y] = 'Q';
                    this.solve(numQueens - 1, grid,
                               recomputeValid(validPositions, x, y), startcol+1);
                    grid[x][y] = '_';	//reset
                    this.numcomputations++;
                }
            }
        }
    }
    
    $scope.solve = function() {
        var nqueens = $scope.sides;
        var grid = new Grid(nqueens, nqueens);
        var startTime = new Date().getTime();
        console.log('Start Time: ' + startTime);
        grid.solve(nqueens, grid.getGrid(), grid.getValidPositions(), 0);
        $scope.numcomputations = grid.numcomputations;
        $scope.endTime = new Date().getTime() - startTime;
        console.log('Elapsed: ' + ($scope.endTime) + 'ms');
    };    

Dockumo: Public document sharing is here

Hey.

I finally added public document sharing to Dockumo, two weeks after launching. Getting something like this up and running was my original goal, but it turned out to be slightly more complicated than I had thought. Since I had built everything from the perspective of a logged-in user, I had to develop some workarounds for users who were just hitting my site. The whole experience actually turned me off a bit to requiring user login. It seems like such a huge barrier to conversion.

In any case, here’s what public documents do: say a user wants to share a template (I’ve shared an example cover letter from when I was leaving law behind and coming back to software engineering). The user logs in, creates a new article, toggles “private” off (it’s enabled by default), then inputs the article’s content. Once the user saves the article, Dockumo runs a mapreduce behind the scenes and “indexes” the article so it becomes publicly searchable. The mapreduce runs by category; categories are fixed (via a JSON file), which forces the user into certain options. Tags are user-specific and are searchable through the interface. Then the main landing page gets updated with “trending” articles so users can see what’s popular. Here’s a screenshot:

Landing page @ Dockumo.
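
I won’t post the actual indexing code, but a rough sketch of what a category-counting mapreduce could look like in the mongo shell is below. The collection and field names here are hypothetical, not Dockumo’s real schema.

// Hypothetical sketch: count public articles per category so the landing
// page can surface what's trending.
db.articles.mapReduce(
    function () {                      // map: emit one count per public article
        if (!this.private) {
            emit(this.category, 1);
        }
    },
    function (key, values) {           // reduce: sum the counts per category
        return Array.sum(values);
    },
    { out: 'category_counts' }
);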

Then, if a user finds an article that she likes, she can email the article to herself, or download it locally as an HTML or Word file. If she wants to make some tweaks, she’ll need to log in, add a copy of the article and then edit it for her own use. Hence the cycle repeats. The hope is that over time, shared articles get better. The goal is that all of us can use the benefit of our collective intelligence to make and share better documents, whether it’s something simple like a cover letter, something more in-depth like a lease, or something more personal like a statement of purpose.

Dockumo article search by category

That’s the power behind Dockumo, and my hope is that someday the community can find some use for it. Now if I could just get one user who wasn’t already a friend …

Custom tags directive in angular.js

To finish up phase 1 of Dockumo development (allowing users to tag and categorize articles), I had to let users add custom tags to their articles. If an article is made public, the user’s tags are indexed and searchable through the search interface. This lets other users sift through content by tag and could eventually give some good insight into what content is most popular.

Why I created it: I’ve used angular-tag before, but I didn’t like the way it hooked the Delete key, which on my MacBook sometimes defaults to going to the previous page in history (like clicking the “Back” button). I also found the CSS to be a bit wonky and difficult to work with: when I clicked on the input box, the box would expand and didn’t play nicely with my CSS. I’ve been feeling more hesitant these days about using untested third-party libraries (even small ones). Sometimes they save me lots of time, but other times they only cause headaches. Delving into someone else’s source code, modifying their CSS, and figuring out how to mesh my code with theirs takes a lot of time. Sometimes it’s not until I’ve done all that that I realize the library doesn’t really do what I want it to do. Hence the frustration.

What it does: my angular directive is very simple. It lets the user bind a variable to the directive through ngModel. The variable is an array of strings (a set). The directive renders an input text box below, letting the user enter a comma-separated list of tags. When the user clicks “Add”, the tags are split on commas, trimmed, and their lowercase values added to the set of tags already bound. It works similarly to WordPress’s tagging system. That’s it: no fuss, no muss.


Without further ado, here’s the source:

angular.module('mean.articles')

.directive("mTag", ['$resource', function($resource) {
  return {
    restrict: 'E',
    controller: 'TagController',
    scope: {
      read: '=',  //read or write (I provide an API in case the user only wants to show the tags and not provide the input)
      tags: '=ngModel'  //the ng-model binding
    },
    templateUrl: 'public/system/views/directives/tag.html'
  };
}]);

Here’s the controller code:

'use strict';

angular.module('mean.system').controller('TagController', 
    ['$scope', function ($scope) {

    //only allows inputs of alphabetic characters (no numbers or special chars)
    $scope.validate = function(string) {
    	if (!string || string.length === 0) {
    		return true;
    	} else {
    		return string.match(/^\s?[A-Z,\s]+$/i);	
    	}
    };

    //Adds the tag to the set (lowercased, so lookups stay case-insensitive)
    function addTag (string) {
    	if ($scope.tags.indexOf(string.toLowerCase()) === -1) {
    		$scope.tags.push(string.toLowerCase());
    	}
    }

    //When the user clicks "Add", all unique tags will be added
    $scope.appendTags = function(string) {
    	if ($scope.validate(string)) {
    		if (string) {
	    		var split = string.split(',');

	    		split.forEach(function(tag) {
	    			if (tag.trim().length > 0) {
	    				addTag(tag.trim());
	    			}
	    		});

	    		$scope.temptags = "";
	    	}
    	}
    };

    //When the user wants to delete a tag
    $scope.deleteTag = function(tag) {
    	if (tag) {
    		var idx = $scope.tags.indexOf(tag);
    		if (idx > -1) {
    			$scope.tags.splice(idx, 1);
    		}
    	}
    };

}]);

And lastly, the HTML template:

<div class="input-group" ng-show="!read">
    <input type="text" name="temptags" ng-model="temptags" class="form-control"  placeholder="Enter comma separated tags" ng-class="{'haserror':!validate(temptags)}" style="margin-bottom:0px"/>
    <span ng-show="!validate(temptags)" class="label label-danger">Please only use regular characters for tags (a-z)</span>
    <div class="input-group-btn" style="vertical-align:top">
        <button class="btn btn-inline" ng-click="appendTags(temptags)">Add</button>
    </div>
</div>

<div class="margintopten">
    <span class="label label-default normal tag marginrightten" ng-repeat="tag in tags"><a ng-if="!read"><i class="fa fa-times-circle" ng-click="deleteTag(tag)"></i></a> {{tag}}</span>
</div>

Here’s a working jsfiddle.

Custom tag directive with error handling

As always, let me know comments or feedback.

Dockumo – the crowd in the cloud


In the past few months, I’ve been working on a project in my spare time called Dockumo. Dockumo is a web-based tool that lets users create “Articles” (basically blurbs of text), edit them and create new versions of them easily. When a user edits an article, the user is given the option to “Save” it or “Save as New Version.” Saving as a new version generates a “Series” for the article, which then keeps track of all versions of the article. Pretty basic stuff. It’s my first foray into making consumer-facing software, a step toward making better software tools for lawyers (my ultimate goal).

Dockumo main page (for text comparison)
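
As a purely illustrative guess at the data model (not Dockumo’s actual schema), the Article/Series relationship could be sketched in mongoose like this:

// Hypothetical mongoose sketch: a Series just tracks the ordered versions
// of an Article. All names here are illustrative.
var mongoose = require('mongoose');

var SeriesSchema = new mongoose.Schema({
    title:    String,
    versions: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Article' }]
});

module.exports = mongoose.model('Series', SeriesSchema);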

The utility here is that the user can see an article and its versions through a timeline view, and can easily compare any two versions (or any two articles, for that matter) with a couple of clicks. I built this because comparison used to require making two Word documents, saving them, comparing them and saving the result. Dockumo does the same thing, but much faster. Also, there’s no need to store lots of messy versions on your computer, since everything is stored in the cloud. Furthermore, I built in functionality to export the result to a text file, HTML doc or Word doc (Word was by far the most challenging). Exporting actually keeps track of the changes in “Track Changes” format in Word, which I implemented through a modification to the node.js officegen module.

An example of an article
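
The comparison step itself can be sketched with the jsdiff library (the npm “diff” package). This illustrates the general approach, not Dockumo’s actual implementation:

// Illustrative only: jsdiff marks each chunk as added, removed or unchanged,
// which maps naturally onto Word's track-changes runs when exporting.
var jsdiff = require('diff');

var oldVersionText = 'This lease begins on June 1.';
var newVersionText = 'This lease begins on July 15.';

jsdiff.diffWords(oldVersionText, newVersionText).forEach(function (part) {
    var prefix = part.added ? '+' : part.removed ? '-' : ' ';
    console.log(prefix + part.value);
});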

Here’s what the timeline view looks like:

Timeline view of an article

Finally, Dockumo lets users group articles together in “Journals.” Journals are a way to gather related content and then share it with your friends or other users on the site. When you share a journal, it becomes a “collaboration” for all users who are invited. Those collaborators can add new versions of any articles grouped in the journal. The idea is that users can view and edit data together while keeping track of new versions. There’s an option, for instance, to receive a notification whenever a collaborator edits a series. So if five users are collaborators on a journal and are working on a report or a term paper, each one can make his or her modifications and save them as a new version. As the document evolves, each user can see which other user made a change and how the article changed over time through the comparison view.

A journal

What’s left?

When I started out making Dockumo, I only intended it to be a quick and easy way to diff two blurbs of text. I didn’t want feature creep to set in, but it inevitably did. Journals were not planned from the beginning, but I figured users would want a way to group relevant content. I ended up having to add lots of other elements as well, such as “friending” other users, user search, a notifications system, exporting to other formats, emailing users who want updates, and user permissions. All of these things added complexity and time. A good lesson for me though: when I thought I was ready to launch the website, I was actually still about a month out. It wasn’t until I tried to launch that I realized there were so many niggling things left undone. For instance, not allowing users to register under the same username or email address, or providing a password reset mechanism. Fortunately, I can carry over a lot of this code to future projects down the line.

And feature creep. I am probably the worst offender ever. I always think to myself, “well, if the software doesn’t do [X], then users won’t want to use it!” For me, this is an ever-present temptation to keep adding feature upon feature to the source while never actually launching. Since launching, there have been other features that I really want to add (and probably will). For instance, I want to give users the option to make articles “public” and tag them with keywords so that the community (all users within the cloud) can search for a particular type of document (e.g. a cover letter or a thank-you note), see an example of it, suggest modifications and rate existing articles. In other words, have a community of contributors who persistently improve cloud-based, crowd-sourced documents. That’s why I called the product “Dockumo”, or “doc” + “kumo” (“cloud” in Japanese).

Let me know what you guys think. I’m always happy to discuss improvements I can make or my technical/architectural decisions.

About page

Real Time Analytics Engine

My current project at work is to architect a solution for capturing network packet data sent over UDP, aggregating it, analyzing it and reporting it back up to our end users. At least, that’s the vision. It’s a big undertaking, and there are several avenues of approach we can take. To that end, I’ve started prototyping a test harness to see the write and read performance of various software at different levels of our stack.
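
On the capture side, the baseline is simple: node’s built-in dgram module can receive UDP datagrams in a few lines. Here’s a minimal sketch (the port and handler are placeholders, not our actual design):

// Minimal UDP listener using node's dgram module.
var dgram = require('dgram');
var socket = dgram.createSocket('udp4');

socket.on('message', function (msg, rinfo) {
    // msg is a Buffer with the packet payload; a real pipeline would
    // parse, aggregate and persist it here
    console.log('Received ' + msg.length + ' bytes from ' + rinfo.address);
});

socket.bind(5000);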

Our criteria are as follows (taken from a slide I created):

Our Criteria

Based on these four requirements pillars, I’ve narrowed down the platform choices to the following:

SERVER

Node.js

Strengths

      • Single-threaded, event loop model
      • Callbacks executed when a request is made (async); can handle 1000s more requests than Tomcat
      • No locks, no function in node directly performs I/O
      • Great at handling lots of small dynamic requests
      • Logo is cool

Weaknesses

      • Taking advantage of multi-core CPUs requires extra work
      • Starting processes via child_process.fork() (or the cluster module; see the sketch after this list)
      • APIs still in development
      • Not so good at handling large buffers
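
The multi-core weakness has a standard workaround in the built-in cluster module; here’s a minimal sketch (not our production code):

// Sketch: fork one worker per CPU core so node can use a multi-core machine.
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
    os.cpus().forEach(function () {
        cluster.fork();               // one worker process per core
    });
} else {
    http.createServer(function (req, res) {
        res.end('Hello from worker ' + process.pid);
    }).listen(8000);
}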

Vert.x

Strengths

        • Allows writing code as single-threaded (Uses Distributed Event Bus, distributed peer-to-peer messaging system)
        • Can write “verticles” in any language
        • Built on JVM and scales over multiple cores w/o needing to fork multiple servers
        • Can be embedded in existing Java apps (more easily)

Weaknesses

        • Independent project (very new), maybe not well supported?
        • Verticles allow blocking of the event loop, which will cause blocking in other verticles. Distribution of work product makes it more likely to happen.
        • Multiple languages = debugging hell?
        • Is there overhead in scaling over multiple nodes?

PERSISTENCE

For our data, we are considering several NoSQL databases. Data integrity is not that important here, because this data is not make-or-break for a company. However, the store must be highly performant, handling upwards of 10k–100k fixed-format data writes per second but far fewer reads.

MongoDB

Architecture

        • Memory-maps data files directly to disk
        • B-tree indexing guarantees logarithmic performance

Data Storage

        • “Documents” akin to objects
        • Uses BSON (binary JSON)
        • Fields are key-value pairs
        • “Collections” akin to tables
        • No conversion necessary from object in data to object in programming language

Data Replication

        • Mongo handles sharding for you
        • Uses a primary and secondary hierarchy (primary receives all writes)
        • Automatic failover

Strengths

        • Document BSON structure is very flexible, intuitive and appropriate for certain types of data
        • Easily query-able using mongo’s query language
        • Built-in MapReduce and aggregation (see the sketch after these lists)
        • BSON maps easily onto JSON, makes it easy to consume in front end for charting/analytics/etc
        • Seems hip

Weaknesses

        • Questionable scalability and write performance for high volume bursts
        • Balanced B-Tree indices maybe not best choice for our type of data (more columnar/row based on timestamp)
        • NoSQL but our devs are familiar with SQL
        • High disk space/memory usage
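
To illustrate the built-in aggregation mentioned above, here is what a per-minute rollup of packet data might look like in the mongo shell (the collection and field names are hypothetical):

// Hypothetical rollup: count packets and average jitter per minute bucket.
db.packets.aggregate([
    { $match: { type: 'RTP' } },
    { $group: {
        _id: '$minute',
        count: { $sum: 1 },
        avgJitter: { $avg: '$jitter' }
    } },
    { $sort: { _id: 1 } }
]);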

 

Cassandra

Architecture

        • Peer to peer distributed system, Cassandra partitions for you
        • No master node, so you can read-write anywhere

Data Storage

        • Row-oriented
        • Keyspace akin to database
        • Column family akin to table
        • Rows make up column families

Data Replication

        • Uses a “gossip” protocol
        • Data written to a commit log, then to an in-memory database (memtable) then to disk using a sorted-strings table
        • Customizable by user (allows tunable data redundancy)

Strengths

        • Highly scalable; adding new nodes gives linear performance increases
        • No single point of failure
        • Read-write from anywhere
        • No need for caching software (handled by the database cluster)
        • Tunable data consistency (depends on use case, but can enforce transaction)
        • Flexible schema
        • Data compression built in (no perf penalty)

Weaknesses

        • Data modeling is difficult
        • Another query language to learn
        • Best stuff is used by Facebook, but perhaps not released to the public?

 

Redis

TBD

 

FRONT-END/REST LAYER

For our REST layer/front end, I’ve built apps in jQuery, Angular, PHP and JSP, and hands-down my favorite is Angular. So that seems like the obvious choice here.

AngularJS

Strengths

      • No DOM Manipulation! Can’t say how valuable this is …
      • Built in testing framework
      • Intuitive organization of MVC architecture (directives, services, controllers, html bindings)
      • Built by Google, so trustworthy software

Weaknesses

      • Higher level than jQuery, so direct DOM manipulation is discouraged and more difficult to do
      • Fewer 3rd party packages released for Angular
      • Who knows if it will be the winning javascript framework out there
      • Learning curve

Finally, for the REST API, I’ve also pretty much decided on express (if we go with node.js):

Express

Strengths

      • Lightweight
      • Easy integration into node.js
      • Flexible routing and callback structure, authorization & middleware (see the sketch after these lists)

Weaknesses

      • Yet another javascript package
      • Can’t think of any, really (compared to what we are using in our other app, Java Spring)
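
For flavor, here’s a minimal express route with a middleware hook, showing the routing/middleware pattern referenced above (illustrative only):

// Minimal express sketch: one logging middleware plus one route.
var express = require('express');
var app = express();

app.use(function (req, res, next) {    // middleware runs on every request
    console.log(req.method + ' ' + req.url);
    next();
});

app.get('/show', function (req, res) { // flexible routing with callbacks
    res.send('Hello: ');
});

app.listen(3000);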

These are my thoughts so far. In the following posts, I’ll begin describing how I ended up prototyping our real time analytics engine, creating a test harness, and providing modularity so that we can make design decisions without too much downtime.

Node.js vs Vert.x (Part 1)

Now for a change of pace. Recently at work, we’ve been trying to figure out what platform to build an application that will handle serving realtime data to customers. We want the system to be fast, scalable and able to handle hundreds, maybe thousands or even tens of thousands of requests per second.

I did a bit of prototyping in both node and vert.x to see how they performed under pressure. To do this, I built a cute webapp that makes a bunch of requests against basic web servers written in both node.js and vert.x, to see how fast they could respond and how they would hold up under a heavy load of requests. Here’s a picture of the UI for the webapp (built in angular.js).

Node webapp

I created a form that allows for various inputs. In it, one can specify several variables, including the following:

  • Iterations – number of http requests to make
  • Block Size – How often a result is computed (reports start time for the request, end time, average time per call (ms) and total time (ms))
  • Range – How many results to display on screen – the graph tracks this
  • Polling – Toggle On/Off (this will start polling requests as fast as node.js can handle them. These are serial in nature).
  • Show Graph – toggle graphing on/off (off will provide better javascript performance)

Thanks to angular-seed for the fast prototyping and angular-google-chart for charting.

Benchmarking parameters: each request is a simple GET request made to the respective webserver, which then writes a header and a “Hello” response. The requests are made through angular’s $http method. When a successful response is received, the callback function makes another $http request, until the number of successful responses received equals the number of iterations specified. I measure the time it takes from when a request is made until the number of requests per block is complete.
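
The request loop looks roughly like this; a simplified sketch of the approach with hypothetical names ($http is assumed to be injected into the surrounding controller):

// Simplified sketch of the chained-request benchmark loop.
function runBlock(url, iterations, onBlockDone) {
    var completed = 0;
    var start = new Date().getTime();
    (function next() {
        $http.get(url).success(function () {
            completed++;
            if (completed < iterations) {
                next();                  // chain the next request on success
            } else {
                onBlockDone(new Date().getTime() - start);
            }
        });
    })();
}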

Time keeping: I try to avoid all delays attributable to javascript rendering (e.g. the first timestamp is taken when the first request in the block is made; the second is recorded when the number of responses in a block has been received). I send both these parameters to a javascript function, which is responsible for rendering the results to the display. I also added a function to enable polling requests, which makes $http requests as fast as responses can be received in order to add stress to the server’s load. This is enabled with the “polling” button.

Here’s a snippet of the webserver source code.

In Node.js:

//Snippet from the node.js static servlet; assumes fs = require('fs') and the
//sendForbidden_/sendMissing_/sendDirectory_/sendFile_ helpers are defined elsewhere
StaticServlet.prototype.handleRequest = function(req, res) {
  var self = this;

  var path = ('./' + req.url.pathname).replace('//','/').replace(/%(..)/g, function(match, hex){
    return String.fromCharCode(parseInt(hex, 16));
  });

  console.log(path);
  if (path == './show') {

      res.writeHead(200, {'content-type': 'text/html'});
      res.write("Hello: ");
      res.end();

  } else if (path == './version') {
    res.writeHead(200, {'content-type': 'text/html'});
      res.write("Node.js version: " + 'v0.10.26');
      res.end();
    
  } else {
    var parts = path.split('/');
    if (parts[parts.length-1].charAt(0) === '.')
      return self.sendForbidden_(req, res, path);
    else {
      fs.stat(path, function(err, stat) {
        if (err)
          return self.sendMissing_(req, res, path);
        if (stat.isDirectory())
          return self.sendDirectory_(req, res, path);
        return self.sendFile_(req, res, path);
      });
    }
  }
}

In Vert.x:


server.requestHandler(function(req) {
var file = '';

  if (req.path() == '/show') {
  	  req.response.chunked(true);
  	  req.response.putHeader('content-type','text/html');
      req.response.write("Hello: ");
      req.response.end();
  } else if (req.path() == '/version') {
  	  req.response.chunked(true);
  	  req.response.putHeader('content-type','text/html');
      req.response.write('Vertx version: ' + '2.0.2-final (built 2013-10-08 10:55:59)');
      req.response.end();
  } else if (req.path() == '/') {
    file = 'index.html';
    req.response.sendFile('app/' + file);   
  } else if (req.path().indexOf('..') == -1) {
    file = req.path();
    req.response.sendFile('app/' + file);   
  }
});
See? Dead simple. Of course, there are lots of flaws in this methodology (e.g. the webservers are only serving static data, are writing short responses, are not optimized, etc.). It wasn’t my intention to come to a hard conclusion with this; I just wanted a data point and to experiment with both platforms. It turns out that (at least in this test) they came very close to one another in terms of performance. Both servers were running on my machine, whose specs are listed below.

System specs: MacBook Pro (mid-2012), 2.3 GHz, 16 GB RAM, 512 GB SSD. Both webservers are running locally on my machine with a bunch of other apps open.

And here are some preliminary results:

Here’s the node.js webserver, with polling turned on:

Node while polling, 1000 requests, 50 request block size

Here’s the vert.x webserver, with polling turned on:

Vert.x while polling, 1000 requests, 50 request block size

You can see that they’re very close. Next I tried stressing both servers a bit by running several concurrent queries and several “instances” of the web app. In a later post, I’ll put up more detailed results from trying to stress both webservers out. The response time definitely slows down as more requests are made.

Conclusions: Both webservers are surprisingly close in terms of response/processing/overhead time. My CPU usage goes a bit higher with the vert.x server, but I do have several other applications running. I also haven’t tested running multiple instances of the same verticle in vert.x yet, or forking processes in node.js. Both webservers are as barebones as they get. So in other words, I can’t make any hard conclusions yet, except to say that both servers are

  • Able to handle high loads of requests per second (probably on the order of 500–1000)
  • Out of the box, both servers run roughly equivalently

These results seemed a little surprising to me, given that around the web vert.x seems to post faster results. One factor that may contribute to this is the lack of realism in the server response: it’s probably not the case that so many requests would come in simultaneously (or there would be multiple server instances to handle such requests), and the response size is very small. Since both servers are basically just writing to a stream, as long as the I/O is written with performance in mind, this may be roughly as fast as my CPU can handle writing responses. Another hypothesis is that vert.x really shines when handling multiple instances. I’ll have to experiment and report my results.

Postscript: in case you want to try it out for yourself, I’ve made the source code available on my github @ https://github.com/rocketegg/Code-Projects/tree/master/ServerPerformance. I know this test has been done with much more sophistication and by a lot of people out there, but hopefully you can play around with this webapp, have fun and learn something.