MongoDB performance enhancements and tweaks
In building a real-time analytics engine, I've formed some opinions on how well MongoDB is suited for scalability and on how to tweak queries and my Node.js code to extract some extra performance. Here are some of my findings, from several standpoints: MongoDB itself, optimizations to the Mongoose driver for Node, and Node.js itself.
Mongoose Driver
1. Query optimization
A. Instead of using Model.findOne, or Model.find and iterating, try Model.find().limit() – I saw a several-fold speedup when doing this. This is discussed in several other places online.
B. If you have excess CPU, you can return a bigger chunk of documents and do the finer processing on your application server instead, freeing up some cycles for MongoDB.
Improvement: Large (mongotop showed peaks of 1500ms for reads on one collection; afterwards, this dropped to 200ms)
Example:
//Before:
Collection.findOne(query3, function(err, doc) {
  //returns a single mongoose document
});

//After:
Collection.find(query3).limit(1).exec(function(err, docs) {
  //returns an array of mongoose documents
});
See "Checking if a document exists – MongoDB slow findOne vs find" and related discussions online for more information.
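To illustrate point B above, here is a minimal sketch of trading application-server CPU for MongoDB cycles by fetching one batch and filtering in Node; Collection, baseQuery, and the status field are placeholders, not names from my actual code:

//fetch one batch, then do the finer filtering in Node rather than asking
//MongoDB to evaluate every condition (names are hypothetical)
Collection.find(baseQuery).limit(500).exec(function(err, docs) {
  if (err) { return console.error(err); }
  var active = docs.filter(function(doc) {
    return doc.status === 'active'; //app-side filter frees up cycles for MongoDB
  });
  //...hand `active` off to the rest of the pipeline...
});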
2. Use lean()
According to the docs, if you run a query with lean(), plain JavaScript objects are returned rather than mongoose Documents. I've found that in many instances – where I was just reading data and presenting it to the user via REST or a visual interface – there was no need for a mongoose document, because there was no manipulation after the read query.
Additionally, for relational data: if you have, for instance, a schema that contains an array of refs (e.g. friends: [{ type: mongoose.Schema.Types.ObjectId, ref: 'User' }]), and you only need to return the first N friends to the user, you can use lean() to trim the returned JavaScript objects and then populate that smaller array, instead of populating the entire array of friends.
Improvement: Large (depending on how much data is returned)
Example:
//Before:
User.find(query, function(err, users) {
  //users are mongoose Documents, so you can't add fields outside
  //the Schema (unless you have a { type: Any } field)
  var options = { path: 'friends', model: 'User', select: 'first last' };
  User.populate(users, options, function(err, populated) {
    //will populate ALL friends in each array
  });
});

//After:
User.find(query).lean().exec(function(err, users) {
  //users are plain javascript objects; now you can go outside the
  //schema and return data in line with what you need
  users.forEach(function(user) {
    user.friends = user.friends.slice(0, 10); //take the first ten friends returned, or whatever
  });
  var options = { path: 'friends', model: 'User', select: 'first last' };
  User.populate(users, options, function(err, populated) {
    //now Model.populate populates a potentially much smaller array
  });
});
Results (measured on my Node.js server using mongotop):
Load (ms)

Seconds    No lean()    lean()
5          561          524
10         371          303
15         310          295
20         573          563
25         292          291
30         302          291
35         544          520
40         316          307
45         289          286
50         537          503
Average    409.5        388.3

% improvement: 5.177%
3. Keep MongoDB "warm".
MongoDB implements pretty good caching, as you can see by running a query several times in quick succession: in my experience, the query time decreases, sometimes dramatically. For instance, a query can go from 50ms to 10ms after running twice. We have one collection that is constantly queried – about 500 reads and 500 writes per second. Keeping this collection "warm", i.e. periodically running a query that will be called at some point in the future, can help keep that call responsive when MongoDB starts to slow down.
Improvement: Untested
Example:
function keepwarm() {
  setTimeout(function() {
    User.find(query).exec(); //without exec() or a callback, mongoose never actually runs the query
    keepwarm();
  }, 500);
}
Mongo Native
1. Compound indexing
For heavy-duty queries that run often, I decided to create compound indices using all the parameters that comprise the query. Indexing by timestamp, for instance, didn't intuitively seem like it would make a difference, but it does: according to the MongoDB documentation, if your query sorts on timestamp (which ours did), indexing by timestamp can actually help.
Improvement: Large (depending on how many documents your collection holds and how efficiently MongoDB can make use of your indices)
Example:
//in mongo shell
db.collection.ensureIndex({ timestamp: 1, user: 1 });

//in mongoose, indices are declared on the Schema
MySchema.index({ timestamp: 1, user: 1 });
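Note that field order matters in a compound index: MongoDB can use any prefix of it. For illustration (the queries are hypothetical), the index above can serve:

//a sort on timestamp, satisfied from the index without an in-memory sort
db.collection.find().sort({ timestamp: 1 });

//an equality match on both indexed fields
db.collection.find({ timestamp: ISODate('2014-07-31T00:00:00Z'), user: 'abc' });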
Alternative? Aggregating documents into larger documents, such as time slices (a sketch follows the list below). Intuitively, that means queries don't have to traverse as large an index to reach the targeted documents. You may ask what the difference is between creating a compound index and breaking the data down into aggregates like a day's or an hour's slice. Here are a few considerations:
- A. MongoDB tries to match up queries with indices or compound indices, but there's no guarantee that this match will occur. Supposedly the algorithm used to pick an index is pretty good, but I question how good it is when, for instance, your query includes an additional parameter that the index doesn't cover. If MongoDB doesn't see all of a query's parameters in the index, will it still know to use a compound index, or a combination of compound indices?
- B. Using aggregates could actually be slower if reads require traversing a large document to find the relevant data (large documents might not afford fast reads).
- C. If writes to the aggregate are very heavy (e.g. the aggregate document is too large in scope), the constant reading and writing of the same document may cause delays, since MongoDB needs to lock the collection/document.
- D. Aggregates could make indexing more difficult.
- E. Aggregates could make aggregation/mapreduce more difficult, because your document no longer represents a single instance of an "event" (or is not granular enough).
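For concreteness, here is a minimal sketch of the time-slice idea, assuming hypothetical per-hour rollups of a packets-style collection (the collection and field names are made up):

//one document per user per hour instead of one per event
db.hourly_packets.update(
  { user: userId, slice: ISODate('2014-07-31T17:00:00Z') }, //hour bucket
  { $inc: { count: 1, bytes: packetSize } },                //running totals
  { upsert: true }                                          //create the bucket on first write
);

//a day's query now touches at most 24 documents per user
db.hourly_packets.find({ user: userId, slice: { $gte: dayStart, $lt: dayEnd } });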
2. Use Mongotop to determine where your bottlenecks are.
Mongotop shows each collection in your database and the amount of time spent on reads and writes; by default it updates every second. Bad things happen when the total query time jumps over a second – in Node, for instance, the event queue will begin to back up because MongoDB is taking too long.
Example:
//example output
                     ns    total    read    write    2014-07-31T17:02:06
       mean-dev.packets    282ms    282ms      0ms
      mean-dev.sessions      0ms      0ms      0ms
        mean-dev.series      0ms      0ms      0ms
       mean-dev.reduces      0ms      0ms      0ms
      mean-dev.projects      0ms      0ms      0ms
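For reference, mongotop is run from the command line against your mongod instance; the optional trailing argument sets the polling interval in seconds:

mongotop 1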
3. Use explain()… sparingly
I've found that explain() is useful initially, because it shows the number of documents scanned to reach the result of the query. When trying to optimize queries further, however, I found it less useful: if I've already created my compound indices and MongoDB is using them, how much more performance can I extract with explain() when it already shows a 0–1ms duration?
Example:
//in mongo shell
db.collection.find({
  $and: [
    { 'from.ID': 956481854 },
    { 'to.ID': 1038472857 },
    { 'metadata.searchable': false },
    { 'to.IP_ADDRESS': '127.0.0.1' },
    { 'from.timestamp': { $lt: new Date(ISODate().getTime() - 1000 * 60 * 18) } }
  ]
}).explain()
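The fields worth glancing at in the output are cursor, n, nscanned, and millis; if nscanned is much larger than n, the query isn't using a good index. An annotated, abridged sketch of what a 2.6-era shell returns (values and index name hypothetical):

{
  "cursor" : "BtreeCursor from.ID_1_to.ID_1",  //a BtreeCursor means an index was used
  "n" : 1,                                     //documents returned
  "nscanned" : 1,                              //index entries examined; should stay close to n
  "millis" : 0
}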
4. For fast inserts for a collection of limited size, consider using a capped collection.
A capped collection in MongoDB is essentially a queue-like data structure that enforces first-in, first-out. According to the MongoDB docs, capped collections maintain insertion order, so they're perfect for time series. You just have to specify the maximum size of the collection in bytes. I sized mine from db.collection.stats(), which showed that each record averaged about 450 bytes.
To convert an existing collection, run this in the MongoDB shell:
db.runCommand({"convertToCapped": "mycoll", size: 100000}); //size in bytes
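If the collection doesn't exist yet, you can create it capped directly. Using the ~450 bytes-per-record average above, keeping roughly the last 200,000 records (a made-up target) works out to 450 × 200,000 = 90,000,000 bytes:

//create a capped collection sized for ~200,000 records of ~450 bytes each
db.createCollection('mycoll', { capped: true, size: 450 * 200000 }); //~90MB; oldest records are evicted first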
Node.js
1. Implement pacing for large updates.
I've found that when a periodic update touches a large subset of a collection while many other updates are going on, the large update can cause the event queue in Node to back up as MongoDB tries to keep up. By throttling the number of updates allowed per run based on the total update time, I can adjust to the server's current load. The philosophy: if Node/MongoDB have extra cycles, dial the pace of backfilling/updates up a bit; when Node/MongoDB are overloaded, back off.
Example:
//Runs periodically
_aggregator.updateStatistics(undefined, updateStatisticsPace, function(result) {
  console.log('[AGGREGATOR] updateStatistics() complete. Result: [Num Updated: %d, Duration: %d, Average (ms) per update: %d]',
    result.updated, result.duration, result.average);
  if (result.average < 5) {
    //< 5ms per update: speed up by 10% (MAX_PACE = all records updated)
    updateStatisticsPace = Math.min(MAX_PACE, Math.floor(updateStatisticsPace * 1.1));
  } else if (result.average >= 5 && result.average < 10) {
    //5-10ms per update: maintain pace
    updateStatisticsPace = Math.min(MAX_PACE, updateStatisticsPace);
  } else {
    //>= 10ms per update: slow the pace to 2/3, bounded below by updateStatisticsPace_min
    updateStatisticsPace = Math.min(MAX_PACE, Math.max(updateStatisticsPace_min, Math.floor(updateStatisticsPace * 0.66)));
  }
  if (MAX_PACE === updateStatisticsPace) {
    console.log('[Aggregator] updateStatistics() - Max pace reached: ' + _count);
  }
  console.log('[AGGREGATOR] updateStatistics() Setting new pace: %d', updateStatisticsPace);
  callback(null, result);
});
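The other half of the loop – how updateStatistics() consumes the pace – isn't shown above. A minimal sketch, assuming a hypothetical Packet model and leaving the per-document work elided, might look like:

function updateStatistics(query, pace, done) {
  var start = Date.now();
  //process at most `pace` documents per run so one sweep can't monopolize MongoDB
  Packet.find(query || {}).limit(pace).exec(function(err, docs) {
    //...apply the per-document statistics updates here...
    var duration = Date.now() - start;
    done({
      updated: docs.length,
      duration: duration,
      average: duration / Math.max(docs.length, 1) //ms per update, fed back into the pacing logic
    });
  });
}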