
Using MongoDB $indexStats to identify and remove unused indexes

Available for Dedicated plans on mLab*

Proper indexing is critical to database performance. A single unindexed query is enough to cause significant performance degradation.

It is relatively easy to spot a missing index using mLab’s Slow Query Analyzer, but the tool doesn’t provide an obvious way to identify and remove indexes that aren’t actually being used.

Because unused indexes impact write performance and consume valuable resources, periodic index review and maintenance is recommended. MongoDB 3.2 introduced a new feature to help identify unused indexes:

* This feature will also be available on our Sandbox and Shared plans once the fix for SERVER-26734 is available for supported versions.

The $indexStats operator

The $indexStats aggregation pipeline operator displays a running tally of index usage since the server was last started. The operator reports on the usage for all of the indexes within a specified collection, and (with additional aggregation operators) can be targeted at a particular index. Knowing if and how often indexes are being used allows an administrator to make informed decisions on which indexes are providing benefit.

How to analyze index usage with $indexStats

It’s not necessarily obvious which collections might contain unused indexes. To obtain a comprehensive list of all index usage, you’ll need to run $indexStats on each collection.
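As a sketch of what such a sweep might look like with PyMongo (the helper name is illustrative, and it assumes an already-authenticated database handle as described in the next section):

```python
# A sketch of a full index-usage sweep. The helper takes a database handle
# (anything exposing list_collection_names() and collection.aggregate(),
# such as a PyMongo Database) and returns the indexes that have recorded
# zero accesses since the server last started. Not part of any mLab tooling.

def find_unused_indexes(db):
    """Return (collection, index_name) pairs with zero recorded accesses."""
    unused = []
    for coll_name in db.list_collection_names():
        for stat in db[coll_name].aggregate([{"$indexStats": {}}]):
            # The default _id index cannot be dropped, so skip it.
            if stat["name"] != "_id_" and stat["accesses"]["ops"] == 0:
                unused.append((coll_name, stat["name"]))
    return unused
```

Remember that a zero count only means "unused since the last restart", so interpret the results with the caveats discussed below.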

Until SERVER-26734 is resolved, you will need to connect with an admin database user to run $indexStats.

Connecting with an admin database user

You will need to create an admin database user if you don’t already have one.

To run the $indexStats command, you will need to connect to your database with an admin database user. The following command will use your “admin” database user credentials to authenticate to the “admin” database, then connect to the target database (“myDatabase”):

> mongo <host>:<port>/myDatabase -u <adminUser> -p <adminUserPassword> --authenticationDatabase admin

Running $indexStats on an entire collection

Now that you are connected to the target database, the $indexStats command can be run using the MongoDB shell:

> db.myColl.aggregate( [ { $indexStats: { } } ] )


{
    "name" : "color_1",
    "key" : {
        "color" : 1
    },
    "host" : "examplehost.local:27017",
    "accesses" : {
        "ops" : NumberLong(50),
        "since" : ISODate("2017-01-06T16:52:50.744Z")
    }
}
{
    "name" : "type_1",
    "key" : {
        "type" : 1
    },
    "host" : "examplehost.local:27017",
    "accesses" : {
        "ops" : NumberLong(0),
        "since" : ISODate("2017-01-06T16:52:48.362Z")
    }
}
{
    "name" : "name_1",
    "key" : {
        "name" : 1
    },
    "host" : "examplehost.local:27017",
    "accesses" : {
        "ops" : NumberLong(100),
        "since" : ISODate("2017-01-06T16:32:44.609Z")
    }
}

The return document includes the following fields:

Output Field     Description
name             Index name
key              Index key specification
host             The hostname and port of the mongod process
accesses.ops     The number of operations that used the index
accesses.since   The time from which MongoDB started gathering the index usage statistics


Interpreting $indexStats

Every database query, update, or command that uses an index will be counted toward that index’s usage statistics.

  • The “name”, “key”, and “host” output fields provide the metadata for each index.
  • The “accesses.ops” value displays the number of operations that have used the index. An index with zero access operations has not been used since tracking began and is a candidate for removal.
  • The “accesses.since” value is the point in time from which MongoDB began gathering the index statistics. This value is set either upon index creation or as of a mongod server restart. 

Note that $indexStats reports usage only since the last database server process restart; these running tallies are wiped and restarted with each server restart. Keep in mind that a restart can occur during maintenance events, plan changes, intentional deployment restarts, or as part of unexpected failures requiring restart.

If you would like a fresh set of statistics, you can perform an intentional cluster restart. If you would prefer not to restart the cluster, you can use $indexStats to sample at two points in time and calculate the difference in access operations for each index.
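The two-sample approach can be sketched in a few lines of Python. The helper name and input shape are illustrative; each sample is simply the list of documents returned by the $indexStats aggregation:

```python
# Given two $indexStats snapshots taken at different times, compute how
# many operations used each index between the samples. A helper like this
# avoids needing a cluster restart to get a fresh measurement window.

def ops_between_samples(earlier, later):
    """Map index name -> operations counted between the two samples.

    If the counter in the later sample is smaller than in the earlier one,
    the server restarted in between, so only the later tally is known.
    """
    before = {s["name"]: s["accesses"]["ops"] for s in earlier}
    deltas = {}
    for stat in later:
        name, ops = stat["name"], stat["accesses"]["ops"]
        prev = before.get(name, 0)
        deltas[name] = ops - prev if ops >= prev else ops
    return deltas
```

An index whose delta stays at zero across a representative window (one that covers your periodic and infrequent workloads) is a stronger candidate for removal than one that merely shows zero since the last restart.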

Running $indexStats for a particular index

You can use the $match operator within the aggregation pipeline to specify a particular index. This allows you to match indexes based on index name or index key. You can run the following commands in the MongoDB shell:

Based on index name:

> db.myColl.aggregate([{$indexStats: {}}, {$match: {"name": "color_1"}}])

Based on index key:

> db.myColl.aggregate([{$indexStats: {}}, {$match: {"key": {"color": 1}}}])

The return document only displays the index statistics for the matched index.



{
    "name" : "color_1",
    "key" : {
        "color" : 1
    },
    "host" : "examplehost.local:27017",
    "accesses" : {
        "ops" : NumberLong(50),
        "since" : ISODate("2017-01-06T16:52:50.744Z")
    }
}



Removing unused indexes

Proceed with caution

As with all delete operations on the database, always err on the side of caution when removing an index.

  • Do not drop an index if there is any uncertainty surrounding its use.
  • Accidentally removing a necessary index can result in significant performance degradation.
  • Closely monitor database performance immediately after making index changes.

In addition, here are some checks to perform before removing an unused index:

  • Are there infrequent operations which require the index?
  • Are there query patterns that are failing to use the index?
  • Are there plans to use the index in the near future?

More information about indexing can be found in our documentation.

Take a backup first (optional)

An additional precaution is to take a backup before dropping a series of unused indexes. We recommend a block storage snapshot (e.g., an EBS Snapshot) over a mongodump since this type of backup tends to be orders of magnitude faster to both take and restore.

Drop the unused index

After reviewing the considerations above, you can proceed with removing any unused indexes.
To drop the “color_1” index, perform the following command in the MongoDB shell:

> db.myColl.dropIndex( { "color": 1 } );
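If several indexes have been reviewed and confirmed safe to drop, the cleanup can be scripted. This is a sketch using PyMongo (the helper name is illustrative); only pass indexes that have passed the checks above:

```python
# Drop a reviewed list of unused indexes. Each entry is a
# (collection, index_name) pair, and the index is dropped by name using
# PyMongo's Collection.drop_index().

def drop_reviewed_indexes(db, reviewed):
    """Drop each (collection, index_name) pair and return what was dropped."""
    dropped = []
    for coll_name, index_name in reviewed:
        db[coll_name].drop_index(index_name)
        dropped.append((coll_name, index_name))
    return dropped
```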

Thank you for reading!  If you have questions on this exciting new feature or on MongoDB indexing/performance in general, please email our team at for help.



Recent MongoDB ransom attacks

Many of you have likely heard that an estimated 27,000 MongoDB databases have had their data removed and held for ransom by hackers. We have received many questions about the news and wanted to discuss and share MongoDB security best practices to prevent future incidents.

All database deployments hosted at mLab are safe from such attacks.

How could 27,000 databases be held for ransom?

First, it is important to understand the nature of these “breaches”. In a sense these were not breaches at all. All of the databases that were attacked:

  1. Were running without authentication enabled, and
  2. Had their MongoDB ports open to the public internet

This means these databases were configured to accept connections from any client, and to not require that clients authenticate to the database via valid credentials (e.g. username and password).

With this in mind, one can see how such an attack was implemented. Two years ago, one security researcher discovered that 30,000+ MongoDB databases were exposed on the internet running without authentication enabled or firewalls configured.

It is also important to note that there are no known vulnerabilities in MongoDB that would allow for such an attack against databases with authentication enabled.

Are mLab-hosted databases vulnerable to this attack?

No. All mLab databases are configured to require database authentication by clients. Furthermore, on our Dedicated plans, you may firewall your database to only accept connections from IP addresses that you whitelist; this allows you to enforce that only your application infrastructure can connect to your database.

You can read more about how mLab handles security at In particular, note that our Dedicated plans allow deployments to be firewalled from the public internet, have SSL enabled, and be housed in private networks (i.e., VPC peering) to limit communication between the application and database.

If you have any questions, please email for help.

What if I host my own MongoDB?

If you host your own MongoDB deployments, you should make sure that you enable authentication and firewall your database to restrict access from unauthorized IP addresses.

MongoDB has also published a security checklist, which you can follow and implement to protect your MongoDB installation.


Configuring a MongoDB replica set for analytics

MongoDB replica sets make it easy for developers to ensure high availability for their database deployments.

A common replica set configuration is composed of three member nodes: two data-bearing nodes and one arbiter node. With two electable, data-bearing nodes, users are protected from scenarios that cause downtime for single-node deployments, such as maintenance events and hardware failures.

However, it may be tempting to read from the redundant secondary server to scale reads and/or run queries for the purpose of analytics. We strongly advise against secondary reads when there are only two electable, data-bearing nodes in the replica set.

The main reason for this recommendation is that relying on secondary reads can compromise the high availability that replica sets are meant to provide. Occasional use of the secondary for non-critical ad-hoc queries is fine, but if your application requires both the primary and the secondary to shoulder the database load, the system can no longer handle that load if one of the nodes goes down or becomes unavailable.

This is discussed in more depth in the following resources:

Run analytics queries against hidden, analytics nodes instead

If you would like to run more than the occasional, ad-hoc or analytics query, we highly recommend that you properly configure your replica set to handle analytics queries.  In particular, we recommend adding a node designated for analytics as a hidden, non-electable member of the replica set.

Hidden members have properties that make them great for analytics. A hidden replica set member:

Maintains a copy of the primary’s data set – Querying on a hidden member will be nearly identical to querying the primary node (minus some replication delay).

Cannot become primary and is invisible to your application – It’s important to isolate analytics traffic from production application traffic. If the analytics node became the replica set primary, it may be unable to handle the combined analytics and production application traffic.

Can be useful for disaster recovery as well if a slaveDelay is configured – See advanced configuration considerations below.

If you’re interested in adding an analytics node to your mLab deployment:

  1. Email us at to request that the node be added.
  2. mLab will add the node seamlessly into your replica set as a hidden member and provide you with its address.
  3. You will then be able to start to create single-node connections using that address for your analytics queries.

Advanced configuration considerations

Enabling slaveDelay on the analytics node for replica set disaster recovery

MongoDB’s slaveDelay option allows you to configure a replication delay on a hidden replica set member. Configuring a delay is helpful for recovering from disaster scenarios such as accidentally dropping a collection or database.

For example, imagine that you configure a one-hour delay on an analytics node. If a developer accidentally drops/deletes data from the primary node, the changes will be applied to the analytics node an hour later (as opposed to immediately). This allows you to query the analytics node to retrieve the deleted data.
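The shape of such a hidden, delayed member can be sketched as follows. The field names (hidden, priority, slaveDelay) are actual replica set configuration options; the host, _id, and delay values are illustrative:

```python
# A hidden, delayed analytics member as it would appear in a replica set
# configuration document. A delayed member must have priority 0, and
# hiding it keeps drivers from routing application reads to it.

analytics_member = {
    "_id": 3,
    "host": "analytics.example.com:27017",  # illustrative hostname
    "hidden": True,      # invisible to the application
    "priority": 0,       # can never be elected primary
    "slaveDelay": 3600,  # apply the primary's writes one hour late
}

def recoverable_until(drop_time, delay_seconds):
    """Last moment dropped data can still be read from the delayed node."""
    return drop_time + delay_seconds
```

With a one-hour delay, data dropped from the primary at time T remains queryable on the analytics node until roughly T + 3600 seconds.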

Reading from secondaries in a Sharded Cluster

If you are running a Sharded deployment and would like to read from the secondary members of your shards, there are important considerations you should be aware of.  We will be publishing a blog post on this advanced topic in the future.


MongoDB tips & tricks: Collection-level access control

As your database or project grows, you may be tasked with configuring access controls to allow different stakeholders access to the database. Rather than create a new user with full database privileges, it may be more appropriate to create a user that only has access to the data or collections they need. This allows users to query against the collections you define and limits their access to the rest of the database.

Here’s a step-by-step example that demonstrates how to set up collection-level access control. This example will create a user named “finance” on the “acme” database. The “finance” user will only have “find” (read) access to the “billing” collection.

Step 1. Connect to the “acme” database using an existing user

> mongo <host>:<port>/acme -u dba -p password

Note that the “dba” user will need the userAdmin role to create and modify roles and users on the “acme” database. By default, mLab database users created through the UI are granted the dbOwner role, which combines the privileges granted by the readWrite, dbAdmin, and userAdmin roles.

Step 2. Create a new user-defined role for the “billing” collection

> db.createRole({ role: "readBillingOnly", privileges: [ { resource: { db: "acme", collection: "billing" }, actions: [ "find" ] } ], roles: [] })

You can also add more privilege actions to the “actions” array, such as “insert” or “update”.

Step 3. Create a new user named “finance” with the role you just created

> db.createUser({ user: "finance", pwd: "password", roles: [ { role: "readBillingOnly", db: "acme" } ] })

Alternatively, if the user already exists, you can use the grantRolesToUser() method:

> db.grantRolesToUser("finance", [ { role: "readBillingOnly", db: "acme" } ])


And that’s it! You now have a user named “finance” that has read-only access on the “billing” collection in the “acme” database.


Telemetry Series: Queues and Effective Lock Percent

A key component of optimizing application performance is tuning the performance of the database that supports it. Each post in our Telemetry series discusses an important metric used by developers and database administrators to tune the database and describes how MongoLab users can leverage Telemetry, MongoLab’s monitoring interface, to effectively review and take action on these metrics.

Queues and Effective Lock Percent

Any time an operation can’t acquire a lock it needs, it becomes queued and must wait for the lock to be released. Because operations that are queued on the database side often imply that operations are queued on the application side, Queues is an important Telemetry metric for assessing how well your MongoLab deployment is responding to the demands of your app.

In MongoDB 2.6 and earlier, you will find that Queues tend to rise and fall with Effective Lock %. This is because Queues refers specifically to the operations that are waiting on another operation (or series of operations) to release MongoDB’s database-level and global-level locks.

With MongoDB 3.0 (using the MMAP storage engine), locking is enforced at the collection level, and Effective Lock % is not reported as a server-level metric. This makes the Queues metric even more important. While it may not be clear from the Telemetry interface exactly which collections are heavily locked, elevated queueing is usually a consequence of locking.

The focus on Queues is also preferable because, by design, locking happens on any MongoDB deployment that receives writes. As long as that locking isn’t resulting in queueing, it is usually not a concern.

High locks leading to high queues

What is Effective Lock Percent?

MongoDB uses multi-granular reader-writer locking. Reads prevent a write from acquiring the lock, and a write prevents reads or other writes from acquiring the lock, but reads do not block other reads. Each operation also holds the lock at a granularity level appropriate for the operation itself.

In MongoDB 2.6 there are two granularity levels: a Global lock and Database lock for each database. In this scheme, operations performed on separate databases do not lock each other unless those operations also require the Global lock.

Effective Lock Percent in MongoDB 2.6 is a calculated metric that adds together the Global Lock % and the Lock % of the most-locked database at the time. Because of this computation, and because of the way operations are sampled, values greater than 100% may occur.    
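The Effective Lock % computation described above can be sketched in a few lines. The helper and input shape are illustrative, not Telemetry's actual implementation:

```python
# Effective Lock % for MongoDB 2.6 as described above: the global lock
# percentage plus the lock percentage of the most heavily locked database
# over the sample window. Values above 100% are possible.

def effective_lock_percent(global_lock_pct, db_lock_pcts):
    """db_lock_pcts maps database name -> lock % over the sample window."""
    busiest = max(db_lock_pcts.values()) if db_lock_pcts else 0.0
    return global_lock_pct + busiest
```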

In MongoDB 3.0 with the MMAP storage engine, MongoDB locks at the Global, Database, and Collection-level. A normal write operation holds the Database lock in MongoDB 2.6, but only holds a specific collection’s Collection lock in MongoDB 3.0. This improvement means separate collections can be concurrently read from or written to.

MongoDB 3.0 with the WiredTiger storage engine uses document-level locking for even greater parallelization. Writes to a single collection won’t block each other unless they are to the same document.

Note that locking operations can and do yield periodically, so incoming operations may still progress on a heavily locked server. For more detail, read MongoDB’s concurrency documentation.

What do I do if I see locking and queueing?

Locking is a normal part of databases so some level of locking and queueing is expected. First, consider if the locking and queueing is a problem. You should typically not be concerned with Effective Lock Percent values of less than 15%, but each app is different. Likewise, queueing can be fine as long as the app is not blocked on queued requests.

If you see a rise in Queues and Effective Lock % in Telemetry that corresponds to problems with your application, try the following steps:

  1. If queues and locks coincide with Page Faults, check out Telemetry Series: Page Faults (the previous post in this series) for potential resolutions, such as optimizing indexes or ultimately increasing RAM.
  2. If locking and queueing don’t coincide with Page Faults, there are two potential causes:
    1. You may have an inefficient index. While poor indexing typically leads to page faulting, this is not the case if all of your data and indexes already fit into available RAM. Yet the CPU cost of collection scans can still cause a lock to be held for longer than necessary. In this case, reduce collection scanning using the index optimization steps in Telemetry Series: Page Faults.
    2. If operations are well-indexed, check your write operations and consider reducing the need for frequent incidents of:
      • updates to large documents
      • updates that require document moves
      • full-document updates (i.e., those that don’t use update operators)
      • updates using array update operators like $push, $pull, etc.
  3. If queuing and locking cannot be reduced by improving indexes, write strategies, or the data model, it is time to consider heavier hardware, and potentially sharding.

Importantly, queueing can occur because of a small number of long-running operations. If those operations haven’t finished yet, they won’t appear in the mongod logs.  Viewing and potentially killing the offending current operations can be a short-term fix until those operations can be examined for efficiency. To learn more about viewing and killing operations, refer to our documentation on Operation Management.
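Picking out those long-running operations can be sketched as a filter over the document returned by the shell's currentOp() (the opid, active, and secs_running fields follow its output format; the helper itself and the threshold are illustrative):

```python
# Find active operations that have been running longer than a threshold,
# from a currentOp()-shaped document ({"inprog": [...]}), so they can be
# reviewed and, if necessary, killed with killOp().

def long_running_ops(current_op_doc, threshold_secs):
    """Return (opid, secs_running) for active ops at or over the threshold."""
    return [
        (op["opid"], op["secs_running"])
        for op in current_op_doc.get("inprog", [])
        if op.get("active") and op.get("secs_running", 0) >= threshold_secs
    ]
```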

Have questions or feedback?

We’d love to hear from you as this Telemetry blog series continues. What topics would be most interesting to you? What types of performance problems have you struggled to diagnose?

Email us at to let us know your thoughts, or to get our help tuning your MongoLab deployment.


Telemetry Series: Page Faults

A key component of optimizing application performance is tuning the performance of the database that supports it. Each post in our Telemetry series discusses an important metric used by developers and database administrators to tune the database and describes how MongoLab users can leverage Telemetry, MongoLab’s monitoring interface, to effectively review and take action on these metrics.

Page Faults

Databases are optimized for working with data that is stored on disk, but usually cache as much data as possible in RAM in order to access disk as infrequently as possible. However, as it is cost-prohibitive to store in RAM all the data accessed by the application, the database must eventually go to disk. Because disks are slower than RAM, this incurs a significant time cost.

Effectively tuning a database deployment commonly involves assessing how often the database accesses disk with an eye towards reducing the need to do so. To that end, one of the best ways to analyze the RAM and disk needs of a MongoDB deployment is to focus on what are called Page Faults.

What is a Page Fault?

MongoDB manages documents and indexes in memory by using an OS facility called MMAP, which translates data files on disk to addresses in virtual memory. The database then accesses disk blocks as though it is accessing memory directly. Meanwhile, the operating system transparently keeps as much of the mapped data cached in RAM as possible, only going to disk to retrieve data when necessary.

When MMAP receives a request for a page that is not cached, a Page Fault occurs, indicating that the OS had to read the page from disk into memory.

What do Page Faults mean for my cluster?

The frequency of Page Faults indicates how often the OS goes to disk to read data. Operations that cause Page Faults are slower because they necessarily incur disk latency.

Page Faults are one of the most important metrics to look at when diagnosing poor database performance because they suggest the cluster does not have enough RAM for what you’re trying to do. Analyzing Page Faults will help you determine if you need more RAM, or need to use RAM more efficiently.

How does Telemetry help me interpret Page Faults?

Select a deployment and then look back through Telemetry over months or even years to determine the normal level of Page Faults. In instances where Page Faults deviate from that norm, check application and database logs for operations that could be responsible. If these deviations are transient and infrequent they may not pose a practical problem. However, if they are regular or otherwise impact application performance you may need to take action.

A burst in Page Faults corresponding to an increase in database activity.


If Page Faults are steady but you suspect they are too high, consider the ratio of Page Faults to Operations. If this ratio is high it could indicate unindexed queries or insufficient RAM. The definition of “high” varies across deployments and requires knowledge of the history of the deployment, but consider taking action if any of the following are true:

  • The ratio of Page Faults to Operations is greater than or equal to 1.
  • Effective Lock % is regularly above 15%.
  • Queues are regularly above 0.
  • The app seems sluggish.
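The first rule of thumb above can be expressed directly. This is an illustrative sketch, not Telemetry's own logic; the counts are page faults and operations measured over the same sampling window:

```python
# Page faults per operation over a sampling window, and a simple flag for
# the "ratio >= 1" rule of thumb. Each deployment's real threshold varies.

def faults_per_op(page_faults, operations):
    """Ratio of page faults to operations; 0.0 for an idle window."""
    if operations == 0:
        return 0.0
    return page_faults / operations

def needs_attention(page_faults, operations, threshold=1.0):
    """True when the fault ratio meets or exceeds the threshold."""
    return faults_per_op(page_faults, operations) >= threshold
```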

Note: Future Telemetry blog posts will cover additional metrics, such as Effective Lock % and Queues. See MongoDB’s serverStatus documentation for more information.

How do I reduce Page Faults?

How you reduce Page Faults depends on their source. There are three main reasons for excessive Page Faults.

  1. Not having enough RAM for the dataset. In this case, the solution is to add more RAM to the deployment by scaling either vertically to machines with more RAM, or horizontally by adding more shards to a sharded cluster.
  2. Inefficient use of RAM due to lack of appropriate indexes. The most inefficient queries are those that cause collection scans. When a collection scan occurs, the database is iterating over every document in a collection to identify the result set for a query. During the scan, the whole collection is read into RAM, where it is inspected by the query engine. Page Faults are generally acceptable when obtaining the actual results of a query, but collection scans cause Page Faults for documents that won’t be returned to the app. Worse, these unnecessary Page Faults are likely to evict “hot” data, resulting in even more Page Faults for subsequent queries.
  3. Inefficient use of RAM due to excess indexes. When the indexed fields of a document are updated, the indexes that include those fields must be updated. When a document is moved on disk, all indexes that contain the document must be updated. These affected indexes must enter RAM to be updated. As above, this can lead to thrashing memory.

Note: For assistance determining what indexes your deployment needs, MongoLab offers a Slow Query Analyzer that provides index recommendations to Shared and Dedicated plan users.

Have questions or feedback?

We’d love to hear from you as this Telemetry blog series continues. What topics would be most interesting to you? What types of performance problems have you struggled to diagnose?

Email us at to let us know your thoughts, or to get our help tuning your MongoLab deployment.


A Primer on Geospatial Data and MongoDB

MongoDB offers new geospatial features in versions 2.4 and 2.6.  The core of these features is the introduction of GeoJSON, an open standard format for rich geospatial types that go beyond what MongoDB has supported in previous versions.

This post is a primer for developers new to geospatial data in MongoDB. We aim to familiarize you with geospatial fundamentals in MongoDB and help you get the most out of your data.



Easily find & kill MongoDB operations from MongoLab’s UI

A few months ago, we wrote a blog post on finding and terminating long-running operations in MongoDB. To help make it even easier for MongoLab users* to quickly identify the cause behind database unresponsiveness, we’ve integrated the currentOp() and killOp() methods into our management portal.