## A simple n-queens solution in JavaScript

I was recently playing around with the n-queens problem and decided to try to solve it. I’d never done it before, but figured I could use a depth-first-search style of solution. I ended up building a solution in node.js that identifies all potential solutions, then ported it to angular.js to run on the front end so users can play around and visualize the solutions.

Essentially, the algorithm works like this:

• If there’s only 1 queen left to place and at least one valid position remaining, then it’s a valid solution. If it’s a unique solution, let’s store it.
• If there’s more than one queen left to place and the number of valid positions left is less than the number of queens, then this is a dead end, so let’s backtrack.
• Otherwise, let’s iterate through all the potential moves in the adjacent column, recomputing the possible positions from each resulting grid state.
• Finally, after we place a queen, we recursively call the algorithm with one less queen, the new grid state, the recomputed valid positions and the next column.
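Stripped of the Angular plumbing, those steps can be condensed into a standalone sketch (not the actual app code, which appears further down; this version just counts completed boards):

```javascript
// Condensed sketch of the steps above: walk column by column, try each
// row that no placed queen attacks, recurse, then backtrack.
function countNQueensSolutions(n) {
  var solutions = 0;

  function attacks(q, x, y) {
    // same row, same column, or same diagonal
    return q.y === y || q.x === x || Math.abs(q.x - x) === Math.abs(q.y - y);
  }

  function place(col, queens) {
    if (col === n) { solutions++; return; } // all columns filled: one solution
    for (var row = 0; row < n; row++) {
      var open = queens.every(function (q) { return !attacks(q, col, row); });
      if (open) {
        queens.push({ x: col, y: row }); // place the queen
        place(col + 1, queens);          // recurse into the next column
        queens.pop();                    // backtrack
      }
    }
  }

  place(0, []);
  return solutions;
}
```

For n = 8 this counts 92 solutions, matching the run-time table below.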

I haven’t had much time to optimize, but a few ideas I could possibly do:

1. Since a chess board is symmetrical, I probably don’t need to check all four corners (a solution rotated 90 degrees yields another solution)
2. Use a stack for managing the valid positions instead of a simple array. Right now I make a copy of all remaining possible positions every time a queen is placed.
3. Use a better way to detect unique solutions than concatenating a string and comparing each new solution that comes in
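For idea #3, one option (assuming an ES6 Set is available) is to keep the concatenated key but store it in a Set, so each uniqueness check is a hash lookup instead of an indexOf scan over every stored solution. A hypothetical helper:

```javascript
// Hypothetical dedup helper: encode each queen as "col#row", join the
// parts, and test membership in a Set (O(1)) instead of array.indexOf (O(n)).
var seen = new Set();

function isNewSolution(queens) {
  var key = queens.map(function (q) { return q.x + '#' + q.y; }).join('');
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}
```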

A couple optimizations that already helped:

1. pruning next potential positions to only the starting column + 1 (enormous performance enhancement)
2. pruning routes when number of queens left > number of open positions
3. cutting down the length of the unique string when comparing objects

In any case, it was a fun little experiment. Here’s the source code, and here’s where you can see it working in jsfiddle. Warning!!! Angular tends to run pretty slowly and blocks while computing, so don’t try to run with too many queens (like > 12 or 13) or you may freeze your browser.

N Queens – http://jsfiddle.net/rocketegg0/wu6cpp5v/

Some average run times:

| Num Queens | Solutions | # Comparisons | Run Time (ms) |
|---|---|---|---|
| 4 | 2 | 12 | <1 |
| 5 | 10 | 41 | 1 |
| 6 | 4 | 138 | 1 |
| 7 | 40 | 473 | 1 |
| 8 | 92 | 1758 | 7 |
| 9 | 352 | 7077 | 16 |
| 10 | 724 | 30654 | 45 |
| 11 | 2680 | 142755 | 360 |
| 12 | 14200 | 734048 | 4936 |

```
//Pair(x, y) and $scope come from the surrounding Angular controller
function Grid(width, height) {
  this.width = width;
  this.height = height;
  var grid = [];
  var validPositions = [];

  for (var i = 0; i < width; i++) {
    grid.push([]);
    for (var j = 0; j < height; j++) {
      grid[i][j] = '_';
      validPositions.push(new Pair(i, j));
    }
  }

  this.getValidPositions = function() {
    return validPositions;
  };

  this.getGrid = function() {
    return grid;
  };

  //Position validation
  function isQueen(pair, x, y) {
    return pair.x == x && pair.y == y;
  }

  function isInRow(pair, y) {
    return pair.y == y;
  }

  function isInCol(pair, x) {
    return pair.x == x;
  }

  //Ex:
  //3, 2 > 4, 1 = -1, 1
  //3, 2 > 4, 3 = -1, -1
  //3, 2 > 2, 1 = 1, 1
  //3, 2 > 2, 3 = 1, -1
  function isInDiagonal(pair, x, y) {
    return Math.abs(pair.x - x) == Math.abs(pair.y - y);
  }

  function testPosition(pair, x, y) {
    return isQueen(pair, x, y) ||
           isInRow(pair, y) ||
           isInCol(pair, x) ||
           isInDiagonal(pair, x, y);
  }

  function recomputeValid(validPositions, queenX, queenY) {
    var newValid = [];
    for (var i = 0; i < validPositions.length; i++) {
      if (!testPosition(validPositions[i], queenX, queenY)) {
        newValid.push(validPositions[i]);
      }
    }
    return newValid;
  }

  function gridToString(grid) {
    var gridstr = '';
    for (var i = 0; i < grid.length; i++) {
      gridstr += '[';
      for (var j = 0; j < grid[0].length; j++) {
        gridstr += grid[i][j];
        if (j < grid.length - 1) {
          gridstr += '|';
        }
      }
      gridstr += ']';
      gridstr += '\n';
    }
    return gridstr;
  }

  var printGrid = function (grid) {
    console.log(gridToString(grid));
  };

  var solutions = [];
  $scope.solution_grids = [];

  function convertToSolution(grid) {
    var str = '';
    for (var i = 0; i < grid.length; i++) {
      for (var j = 0; j < grid.length; j++) {
        if (grid[i][j] == 'Q') {
          str += i + '#' + j;
        }
      }
    }
    return str;
  }

  this.numcomputations = 0;

  this.solve = function (numQueens, grid, validPositions, startcol) {
    if (numQueens <= 1 && validPositions.length > 0) {
      grid[validPositions[0].x][validPositions[0].y] = 'Q';
      var solution = convertToSolution(grid);
      if (solutions.indexOf(solution) == -1) {
        solutions.push(solution);
        $scope.solution_grids.push(gridToString(grid));
      }
      grid[validPositions[0].x][validPositions[0].y] = '_'; //reset
      return true;
    } else if (numQueens > validPositions.length) {
      return false;
    } else {
      var x, y;
      var nextcol = validPositions.filter(function(point) {
        return point.x == startcol + 1;
      }); //prune routes to only the next column
      for (var i = 0; i < nextcol.length; i++) {
        x = nextcol[i].x, y = nextcol[i].y;
        grid[x][y] = 'Q';
        this.solve(numQueens - 1, grid,
          recomputeValid(validPositions, x, y), startcol + 1);
        grid[x][y] = '_'; //reset
        this.numcomputations++;
      }
    }
  };
}

$scope.solve = function() {
  var nqueens = $scope.sides;
  var grid = new Grid(nqueens, nqueens);
  var startTime = new Date().getTime();
  console.log('Start Time: ' + startTime);
  grid.solve(nqueens, grid.getGrid(), grid.getValidPositions(), 0);
  $scope.numcomputations = grid.numcomputations;
  $scope.endTime = new Date().getTime() - startTime;
  console.log('End Time: ' + $scope.endTime + 'ms');
};
```

## Mongoose / MongoDB performance enhancements and tweaks (Part 3)

Holy crap. MongoDB native drivers are SO much faster than updates through mongoose’s ORM.  Initially, when I set out on this quest to enhance mongoDB performance with node.js, I thought that modifying my queries and limiting the number of results returned would be sufficient to scale. I was wrong (thanks Bush for the imagery). It turns out that the overhead that mongoose adds for wrapping mongoDB documents within mongoose’s ORM is tremendous. Well, I should say tremendous for our use case.

In my previous two posts of tweaking mongoDB / mongoose for performance enhancements (Part 1 and Part 2), I discussed optimization of queries or making simple writes instead of reads. These were worthwhile improvements and the speed difference eventually added up to significant chunks, but I had no idea moving to the native driver would give me these types of improvements (See below).

Example #1: ~400 streams, insertion times.
These numbers are from after the initial tweaks in Part 1. Unfortunately, I don’t have a good printout from mongotop, but this gives you an idea. Look at the write times for streams and packets at a rate of ~400 streams: 400 sources of packets, all of which get written and persisted. The write time to the streams collection is 193ms / 400 streams, or 48.25 ms / 100 streams. Likewise, packet writing is 7.25 ms / 100 streams. (You can mostly ignore read time; it’s used for data aggregates and computing analytics.) Compare these with the results below:

```
ns          total    read    write
streams     193ms    0ms     193ms
packets     30ms     1ms     29ms
devices     9ms      9ms     0ms
```

Example #2: ~1000 streams, insertion times.
You can see here that write time has dropped significantly. Writes to the packets collection hover at around 1.7 ms / 100 streams, and writes to the streams collection hover at around 7.6 ms / 100 streams. Respectively, that’s roughly a 425% and a 635% improvement in write times to the packets collection and streams collection. And don’t forget, I had already begun the optimizations to mongoose. Even after the tweaks I made in Part 2, these numbers still represent a better than 100% improvement to query times. Huge, right?

```
ns          total    read    write
packets     186ms    169ms   17ms
devices     161ms    159ms   2ms
streams     97ms     21ms    76ms
```
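Those percentages can be reproduced from the two mongotop tables by normalizing each write time to a per-100-streams rate and dividing the old rate by the new one:

```javascript
// Write times from the tables above, normalized to ms per 100 streams.
var streamsBefore = 193 / 400 * 100;  // 48.25 ms / 100 streams at ~400 streams
var streamsAfter  = 76 / 1000 * 100;  // 7.6 ms / 100 streams at ~1000 streams
var packetsBefore = 29 / 400 * 100;   // 7.25 ms / 100 streams
var packetsAfter  = 17 / 1000 * 100;  // 1.7 ms / 100 streams

// Improvement expressed as a percentage of the new rate.
var streamsImprovement = streamsBefore / streamsAfter * 100;  // ~635%
var packetsImprovement = packetsBefore / packetsAfter * 100;  // ~425%
```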

I knew using the mongoDB native drivers would be faster, but I hadn’t guessed that they would be this much faster.

To make these changes, I updated mongoose to the latest version 3.8.14, which enables queries to be made using the native mongoDB driver released by 10gen (github here: https://github.com/mongodb/node-mongodb-native) via Model.Collection methods.  These in turn call methods defined in node_modules/mongodb/lib/mongodb/collection/core.js, which essentially just execute raw commands in mongo. Using these native commands, one can take advantage of things like bulk inserts.
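For example, bulk inserts through the native driver take an array of documents in a single call (in that era, `Stream.collection.insert(docs, {safe: true}, cb)` accepted an array). A small batching helper, sketched here with a hypothetical flush callback rather than a live collection, shows the pattern:

```javascript
// Sketch of a batching helper (names hypothetical): queue documents and
// hand off a full batch to a flush function, which in the real app would
// call Stream.collection.insert(batch, {safe: true}, callback).
function makeBatcher(batchSize, flush) {
  var buffer = [];
  return {
    push: function (doc) {
      buffer.push(doc);
      if (buffer.length >= batchSize) {
        flush(buffer.splice(0, buffer.length)); // hand off and empty the buffer
      }
    },
    pending: function () {
      return buffer.length;
    }
  };
}
```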

I still like mongoose, because it helps instantiate the same object whenever you need to create and save something. If something isn’t defined in the mongoose.Schema, that object won’t get persisted to mongoDB either. Furthermore, it can still be tuned to be semi-quick, so it all depends on the use case. It just so happens that when you’re inserting raw json into mongoDB or don’t need the validation and other middleware that mongoose provides, you can use the mongoDB native drivers while still using mongoose for the good stuff. That’s cool.

Here’s what the new improvements look like:

```
var Stream = mongoose.model('Stream');

//query1/query1set and query2/query2set are defined elsewhere.
//Native mongoDB update returns # docs updated; update by default updates 1 document.
async.waterfall([
  function updateQuery1(callback) {
    Stream.collection.update(query1, query1set, {safe: true}, function(err, writeResult) {
      if (err) throw err;
      if (writeResult == 1) {
        callback('Found and updated stream @ Query1');
      } else {
        callback(null);
      }
    });
  },
  function updateQuery2(callback) {
    Stream.collection.update(query2, query2set, {safe: true}, function(err, writeResult) {
      if (err) throw err;
      if (writeResult == 1) {
        callback('Found and updated stream @ Query2');
      } else {
        pushNewStream(packet, cb);
        callback('No stream found.  Pushing new stream.');
      }
    });
  }
], function(err, results) {});
```

## Mongoose / MongoDB speed improvements (Part 2)

In a previous post MongoDB performance enhancements and tweaks, I described some techniques I’ve found to speed up mongoose inserts and updates on large volume performance statistic data. I was still seeing performance bottlenecks in MongoDB, especially after running a node cluster for our data analytics. Since node is now spread to multiple cores (scaling “horizontally”, to be described in another post), the writes generally come to MongoDB much faster. The main question for me was whether it was mongoose, the node-mongoDB driver or mongo per se slowing me down.

The problem:

When we get performance data, we start tracking it by the creation of a “stream” object. The stream object is first created with a unique identifier, then subsequent packets that come in update the stream. The streams get the highest packet value of the incoming packet and update their timestamp with the packet’s timestamp. Later on when we stop seeing packets flow for a particular stream, we time it out so analytics can be computed for all packets that came in from the beginning of the stream to the end of the stream.

My initial implementation made a series of read queries using mongoose find(query), returned all potential matching streams, then updated the matching stream. The source code looked something like this.

```
function updateStream(packet) {
  var stream = mongoose.model('Stream');
  var query1 = {
    $and: [{
      'from.ID': packet.fromID
    }, {
      'to.ID': packet.toID
    }, {
      'stream_ended.from': false
    }, {
      'from.highestPacketNum': {
        $lt: packet.highestPacketNum
      }
    }]
  };

  //Since streams are bilateral, we have to do two query reads in order to find the matching stream
  var query2 = {
    $and: [{
      'to.ID': packet.fromID
    }, {
      'from.ID': packet.toID
    }, {
      'stream_ended.to': false
    }, {
      'to.highestPacketNum': {
        $lt: packet.highestPacketNum
      }
    }]
  };

  async.waterfall([
    function (callback) {
      stream.find(query1).exec(function(err, calls) {
        if (calls.length > 1) { //there should only be 1 stream
          throw new Error('There should only be one stream with this unique identifier');
        } else if (calls.length == 1) {
          //update calls[0]
          calls[0].save(cb);
          callback(null);
        } else {
          callback(null); //no match; try the reverse direction
        }
      });
    },
    function (callback) {
      stream.find(query2).exec(function(err, calls) {
        if (calls.length > 1) { //there should only be 1 stream
          throw new Error('There should only be one stream with this unique identifier');
        } else if (calls.length == 1) {
          //update calls[0]
          calls[0].save(cb);
          callback(null);
        } else {
          callback(null); //no match
        }
      });
    }
  ], cb);
}
```

You can see that this source code was highly inefficient, because mongoose was returning all possible matches for the stream. This was based on a limitation of our packet simulator, which at the time did not spoof unique IDs in the packets that it would send. At this point in time, we were capped at around 250 simultaneous streams, running in 1 node process.

Result: ~250 simultaneous streams of data

Improvement #1: Limit the search to the first object found, update it and persist it.

Essentially the source code remained the same, but the mongoose find queries changed from find(query1).exec() to find(query1).limit(1).exec(). With this change, we saw an improvement of around 50%, since mongoose would return after finding the first match. At this point, the bottleneck shifted back to node.js, and I noticed that at a certain point, the event queue would get blocked up handling too many events. At the time, node.js was responsible for doing aggregation, decoding the packets, invoking mongoose to store them, and running our REST API as well as serving up static data. I saw my poor little node.js process pegged at 100% trying to churn through all the data. One good thing I noticed, though, is that even though node.js was capped out in terms of resources, it still continued to churn through the data, eventually processing everything given enough time.

Result: ~350 simultaneous streams of data, query time improved by about 50%

Improvement #2: Clustering node.js

This deserves a post in itself, since it required breaking out the various services that the single node process was handling into multiple forked processes, each doing its own thing. Suffice it to say, I eventually decided to fork the stream processors into N instances, where N is the number of cores on the machine / 2, with basic load balancing. This let node write to mongoDB much faster, which eventually bogged mongoDB down. Thus the pendulum swung back to mongoDB.

Result: ~400 simultaneous streams of data, query time remains the same, mongoDB topped out.

Improvement #3: mongoDB updates in place

Finally, I realized that there was no reason to actually get the stream object returned in our source code, since I was just making updates to it. I had also noticed that it was actually the read time in mongotop that was spiking when the number of streams increased. This was because the find() functions in mongoose return a mongoose-wrapped mongoDB document, so the update does not happen in place. For simple updates without much logic, there is no point in getting the object back, even when using the .lean() option to get json back. Here was the update:

```
function updateStream(packet) {
  //queries remain the same ...

  async.waterfall([
    function(callback) {
      Stream.findOneAndUpdate(query1, {
        $set: {
          'endTime': packet.timestamp,
          'metadata.lastUpdated': new Date(),
          'from.highestPacketNum': highestPacketNum
        }
      }, {new: false}).exec(function(err, doc) {
        if (doc) {
          cb(err, doc);
          callback('Found and updated stream @ Query1');
        } else {
          callback(null);
        }
      });
    },

    //No matching yet with IP, so try with just swapped SSRCs and update that one
    function(callback) {
      Stream.findOneAndUpdate(query2, {
        $set: {
          'to.IP_ADDRESS': IP,
          'endTime': packet.timestamp,
          'metadata.lastUpdated': new Date(),
          'to.highestPacketNum': highestPacketNum
        }
      }, {new: false}).exec(function(err, doc) {
        if (doc) {
          cb(err, doc);
          callback('Found and updated stream @ Query2');
        } else {
          createNewStream(packet, cb);
          callback('No streams found.  Pushing new stream.');
        }
      });
    }
  ], function(err, results) {});
}
```

It turns out that after this improvement, I saw in mongotop that read time dropped to 0ms per second, and write time slightly spiked. However, this was by far the biggest improvement in overall query time.

Result: ~600 simultaneous streams of data; total query time dropped to between a third and a half of what it was (with read time dropping to 0). This, combined with the stream processors running on multiple cores, seemed like a scalable solution that would significantly raise our capacity. But I noticed that at around 600 simultaneous streams of data, our stream creation would suddenly spike, and continue increasing.

Improvement #4: MongoDB upgrade to 2.6.x and query improvement using \$max field update operator

For all you readers who like mysteries, can you guess what was causing the stream creation to spike? I spent a long time thinking about it. For one, mongotop didn’t seem to be topped out in terms of the total query time on the streams collection. I noticed spikes of up to 400 or so ms for total query time, but it seemed by and large fine. Node.js was running comfortably at around 40% – 50% cpu usage per core on each of the four cores. So if everything seemed fine, why was stream creation spiking?

The answer, it turns out, was a race condition caused by the processing of the second packet of the simultaneous stream before the first packet could be used to instantiate a new stream object. At a certain point, when enough simultaneous streams were incoming, the delay in creation of the new stream would eclipse the duration between the first and second packets of that same stream. Hence, everything after this point created a new stream.

I thought for a while about a solution, but I kept getting hung up either on processing the incoming packets synchronously, which would significantly decrease our throughput, or on a “store and forward” approach where a reconciliation process would go back and match up streams. To be fair, I still have this problem, but I was able to reduce its occurrence to a significant extent. Because there’s no guarantee that we handle the packets in order, I updated our query to make use of the $max field update operator, which only updates the stream’s highest packet number if a packet with the same IDs and a higher packet number comes in. This in turn reduced the query time, because I no longer had to query for a stream with a lower packet number than the incoming packet. After this update, the reduced per-query time significantly lowered the total query time on the collection and at the same time helped the race condition issue.
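The effect of $max on out-of-order packets can be illustrated in plain JS. This is a simplified model of the operator, not the driver code: whichever order the updates arrive in, the stored field converges on the highest packet number seen.

```javascript
// Simplified model of a $max field update: only write if the incoming
// value is greater than what is already stored.
function applyMax(doc, field, value) {
  if (!(field in doc) || value > doc[field]) {
    doc[field] = value;
  }
}

// Packets arriving out of order still converge on the true maximum.
var stream = { 'from.highestPacketNum': 0 };
[5, 3, 9, 7].forEach(function (num) {
  applyMax(stream, 'from.highestPacketNum', num);
});
// stream['from.highestPacketNum'] is now 9
```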

```
function updateStream(packet) {
  var query1 = {
    $and: [{
      'from.ID': packet.fromID
    }, {
      'to.ID': packet.toID
    }, {
      'stream_ended.from': false
    }]
  };

  var query2 = {
    $and: [{
      'to.ID': packet.fromID
    }, {
      'from.ID': packet.toID
    }, {
      'stream_ended.to': false
    }]
  };

  async.waterfall([

    //First see if there is a stream object
    function(callback) {
      Stream.findOneAndUpdate(query1, {
        $set: {
          'endTime': packet.timestamp
        },
        $max: {
          'from.highestPacketNum': highestPacketNum
        }
      }, {new: false}).exec(function(err, doc) {
        if (doc) {
          cb(err, doc);
          callback('Found and updated stream @ Query1');
        } else {
          callback(null);
        }
      });
    },

    function(callback) {
      Stream.findOneAndUpdate(query2, {
        $set: {
          'endTime': packet.timestamp
        },
        $max: {
          'to.highestPacketNum': highestPacketNum
        }
      }, {new: false}).exec(function(err, doc) {
        if (doc) {
          cb(err, doc);
          callback('Found and updated stream @ Query2');
        } else {
          pushNewStream(packet, cb);
          callback('No stream found.  Pushing new stream.');
        }
      });
    }
  ], function(err, results) {});
  return true;
}
```

Note that the threshold for the race condition is just higher with this approach; it is not completely solved. If enough streams come in, I’ll still eventually hit the race condition where the stream is not instantiated before the second packet is being processed. I’m still not quite sure what the best solution is here, but as with everything, improvement is an incremental process.

Result: ~1000 simultaneous streams, query time roughly halved again, 4 cores running at 40% – 50% cpu.

## A candid look at biglaw and big law firms from a biglaw dropout

Leaving biglaw behind

To any readers out there, whether you’re a biglaw attorney, law student, engineer, working professional considering law school or spambot, here’s an update on my life just over a year after my exodus from biglaw:

Giving notice

I still remember the day I gave my notice to my boss, a young and rising partner in my law firm, that I was leaving for a startup. My former partner, a highly intelligent and skilled attorney, of course sensed my reasons for leaving. But instead of speaking candidly about my experience, we spoke through the layers of etiquette built up over time by biglaw attorneys.

I told him how much I enjoyed working with my fellow associates and for him, how much I respected him and appreciated his mentorship, and how the opportunity was too good to pass up. He told me how he appreciated my contributions to the law firm, how I was practicing at a level beyond my seniority and about my bright future at the firm and that he would regret seeing me go. And though we spoke truthfully to one another, we managed to miss the truth entirely.

I didn’t tell him how much I disliked working at the law firm; nor did I tell him that I probably would have accepted any position that gave me the opportunity to leave. I didn’t tell him about the aggregate toll that responding to emails 18 hours a day had taken on my sense of normalcy and happiness. I didn’t tell him how I, in desperation after having worked the first 8 weekends in a row, applied to engineering jobs after just two months in biglaw. I didn’t tell him about the numerous spreadsheets I had created and obsessively updated detailing how much money I would have each month, the day my net worth would be zero, and the day when my ROI on law school would overcome the opportunity cost of giving up my past engineering career. I didn’t tell him how many late nights I had spent in the past 6 months fighting to keep my eyes open while I watched old CS lectures and studied abstract data types and binary tree implementations. I didn’t tell him how scared I was to accept the position, because I seriously doubted the viability of the startup and knew that accepting meant completely leaving behind my legal background, perhaps never to be used again. I didn’t tell him how easy, despite all my fears and doubts, it was for me to make the decision.

The decision to leave

What had caused my desperation to get out after just two months? It wasn’t just the long hours. I thought that might be the case, that perhaps I was just lazy, but in the aftermath of leaving my law firm, I still continued to work 50-hour weeks, with about 20 hours a week going to my personal projects. In fact, I was putting in more time than I had at the firm. That led me to discover that I don’t actually shy away from work.

It wasn’t just the unpredictability of the work, although that was a huge contributing factor. It’s really hard to describe what it’s like being tethered to your work phone and being on the hook for any quantity of incoming work at any waking hour on any day of the week. I lived in fear of my phone, having been burned many times in the past by “short fuse” deals that required me to drop everything and work the weekend. Like a traumatized animal, I learned to fear the words “what are you working on at the moment,” knowing that my answer would inevitably lead to more work. It turns out that it only took a few months of Friday night emails asking me to drop everything and work the weekend to break me.

It wasn’t just the nature of the work either, although each deal on which I was staffed meant the drudgery of hundreds of agreements, leases, and licenses being dropped into the dataroom at any time of day, waiting to be reviewed and summarized by me. It wasn’t solely the environment either, although it was astonishing to see the facades of contentment when so many associates were unhappy. You see, biglaw attorneys are exceedingly polite to one another. So polite, in fact, that the real feelings of biglaw attorneys rarely manifest themselves, except to those closest to the associate. This can’t be unknown to biglaw partners, but because associates don’t openly vocalize their discontent, biglaw partners have no incentive to improve conditions. Instead, we pretended to have fun, making casual jokes or observations about current affairs. We had RC car races, eating contests, a goodbye party for every associate or staffer who left. These events made for great PR to those outside the firm, but it was never mentioned how the attorneys would just sit there, either awkwardly making small talk or checking their mobile devices waiting for an excuse to get back to work. It was of course all of these things, and more, that brought me to my breaking point.

Once I decided I was leaving, I knew that nothing my partner or any other associate could say would convince me to stay. I knew leaving was the right decision, not in a year, not in six months, but at that very moment. It wasn’t about the startup or all the work I had put in to get back to the level of engineering competence to get hired. It turns out that it was just about the opportunity to leave. It turns out that any opportunity to leave was good enough. I’ll never forget the feeling when I gave my notice to that young partner. I didn’t begrudge him at all, not for the weekends he made me work, the workload, or anything else at the firm. I knew it was just part of his job. Instead, my sense of unadulterated joy came from the hope of a better future and of a happier life that leaving instilled. As I told that partner that I was leaving, I felt an enormous weight lift off my shoulders, as if nothing in the world could go wrong. I felt emboldened, powerful and, most importantly, free. I joke to my friends that I’ll never have as good a feeling in my life ever again, not unless I become a slave and receive my freedom.

The aftermath

So then, what’s the postscript after leaving? Has my attitude changed in the last year?
No. The short answer is that I have not once regretted leaving biglaw for engineering. I miss some of the things biglaw afforded, like the salary or the prestige of being able to call myself a lawyer. But evaluating things as a whole, I would change nothing about my departure.

In the past year, I’ve set to work on several projects, one of which, Dockumo, I’ve described on this blog. I have other projects in the queue, like building a solution for bluebooking, an insane kind of drudgery that law students subject themselves to, following imprecise rules on how to cite certain legal works. As I said before, I still work long hours, but the biggest difference is that I’m now working for myself. The work that I do feels like it has purpose, like it is bettering me and my skills as an engineer, and like it can build upon itself to enable me to create bigger and more expansive projects, projects that can help others. Software, to me, is still the most efficient solution to many of the world’s biggest problems. Being able to program is a powerful skill, and it leads to the ability to create anything I can dream of if I just put in the time and effort. That feeling of hope and potential is the greatest motivator of all; it is what gives my work purpose, and it was exactly what I was missing as a biglaw attorney.

## Dockumo Public Document sharing is here

Hey.

I finally added public document sharing to Dockumo, two weeks after launching. It was my original goal to get something like this up and running, but it turned out to be slightly more complicated than I had thought. Since I had built everything from the perspective of a logged-in user, I had to develop some workarounds for users who were just hitting my site. The whole experience actually turned me off a bit to requiring user login. It seems like such a huge barrier to conversion.

In any case, here’s what public documents do: Say a user wants to share a template (I’ve shared an example cover letter when I was leaving law behind and coming back to software engineering). The user will login, create a new article, click “private” off (it’s enabled by default), then input the article’s content. Once the user saves the article, Dockumo will run a mapreduce behind the scenes and “index” the article so it becomes publicly searchable. The mapreduce runs by category, which are fixed (via a JSON file) so it forces the user into certain options. Tags are user specific and are searchable through the interface. Then the main landing page gets updated with “trending” articles so users can see what’s popular. Here’s a screenshot:
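What the mapreduce computes can be approximated in plain JS: count public articles per fixed category, so the landing page has something to rank “trending” articles by. This is a rough sketch with hypothetical field names, not the actual Dockumo code:

```javascript
// Rough sketch of the category index: count non-private articles per category.
function countByCategory(articles) {
  return articles.reduce(function (counts, article) {
    if (!article.private) {
      counts[article.category] = (counts[article.category] || 0) + 1;
    }
    return counts;
  }, {});
}
```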

Then, if a user finds an article that she likes, she can email the article to herself, or download it to an HTML or word file locally. If the user wants to make some tweaks, she’ll need to login herself, add a copy of the article and then edit it for her own use. Hence the cycle repeats. The hope is that over time, shared articles get better. The goal is that all of us can use our collective intelligence to make and share better documents, whether it’s something simple like a cover letter, something more in-depth like a lease, or something more personal like a statement of purpose.

That’s the power behind Dockumo and my hope for the community that someday, people can find some use for it. Now if I could just get one user who wasn’t already a friend …