MongoDB Limits and Thresholds

This document provides a collection of hard and soft limitations of the MongoDB system.

BSON Document Size

The maximum BSON document size is 16 megabytes.

The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.

Nested Depth for BSON Documents

MongoDB supports no more than 100 levels of nesting for BSON documents. Each object or array adds a level.

Database Name Case Sensitivity

Database names are case-sensitive in MongoDB. They also have an additional restriction: case cannot be the only difference between database names.

If the database salesDB already exists, MongoDB will return an error if you attempt to create a database named salesdb.

                                              
mixedCase = db.getSiblingDB('salesDB')
lowerCase = db.getSiblingDB('salesdb')
mixedCase.retail.insertOne( { "widgets": 1, "price": 50 } )

The operation succeeds and insertOne() implicitly creates the salesDB database.

                                              
lowerCase.retail.insertOne( { "widgets": 1, "price": 50 } )

The operation fails. insertOne() tries to create a salesdb database and is blocked by the naming restriction: database names must differ on more than just case.
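
A read against the lower-case name illustrates the same point. This is a minimal sketch that reuses the variables defined above:

lowerCase.retail.find( { "widgets": 1 } )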

This operation does not return any results because the database names are case-sensitive. There is no error because find() does not implicitly create a new database.

Restrictions on Database Names for Windows

For MongoDB deployments running on Windows, database names cannot contain any of the following characters:

Also, database names cannot contain the null character.

Restrictions on Database Names for Unix and Linux Systems

For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:

Also, database names cannot contain the null character.

Length of Database Names

Database names cannot be empty and must have fewer than 64 characters.

Restrictions on Collection Names

Collection names should begin with an underscore or a letter character, and cannot:

  • contain the $.
  • be an empty string (e.g. "").
  • contain the null character.
  • begin with the system. prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in mongosh or a similar method for your driver.
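
For example, a collection whose name begins with a number can still be read with db.getCollection() (a minimal sketch; the collection name is illustrative):

db.getCollection("3dayLogs").findOne()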

Namespace Length:

  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit on collection/view namespace length to 255 bytes. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of the collection/view namespace remains 120 bytes.
Restrictions on Field Names
  • Field names cannot contain the null character.
  • The server permits storage of field names that contain dots (.) and dollar signs ($).
  • MongoDB 5.0 adds improved support for the use of ($) and (.) in field names. There are some restrictions. See Field Name Considerations for more details.
Restrictions on _id

The field name _id is reserved for use as a primary key; its value must be unique in the collection, is immutable, and may be of any type other than an array. If the _id contains subfields, the subfield names cannot begin with a ($) symbol.

Use caution: the issues discussed in this section could lead to data loss or corruption.

The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.

Starting in MongoDB 5.0, document field names can be dollar ($) prefixed and can contain periods (.). However, mongoimport and mongoexport may not work as expected in some situations with field names that make use of these characters.

MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar ($) prefixed keys. The DBRef mechanism is an exception to this general rule.

There are also restrictions on using mongoimport and mongoexport with periods (.) in field names. Since CSV files use the period (.) to represent data hierarchies, a period (.) in a field name will be misinterpreted as a level of nesting.

There is a small chance of data loss when using dollar ($) prefixed field names or field names that contain periods (.) if these field names are used in conjunction with unacknowledged writes (write concern w=0) on servers that are older than MongoDB 5.0.

When running insert, update, and findAndModify commands, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar ($) prefixed or that contain periods (.). These field names generated a client-side error in earlier driver versions.

The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.

Index Key Limit

For MongoDB 2.6 through MongoDB versions with fCV set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.

When the Index Key Limit applies:

  • MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.
  • Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the compact command as well as the db.collection.reIndex() method. Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.
  • MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead, will return an error. Previous versions of MongoDB would insert but not index such documents.
  • Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit. If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.
  • mongorestore and mongoimport will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.
  • In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync but will print warnings in the logs. Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit, but with warnings in the logs. With mixed version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.
  • For existing sharded collections, chunk migration will fail if the chunk has a document that contains an indexed field whose index entry exceeds the index key limit.
Number of Indexes per Collection

A single collection can have no more than 64 indexes.

Index Name Length

Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Name Length limit for MongoDB versions with featureCompatibilityVersion (fCV) set to "4.2" or greater.

In previous versions of MongoDB or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes.

By default, <index name> is the concatenation of the field names and index type. You can explicitly specify the <index name> to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.
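
For example, the following sketch supplies a short explicit name instead of the longer generated default (collection, field, and index names are illustrative):

db.products.createIndex(
  { manufacturer: 1, category: 1, sku: 1 },
  { name: "mfg_cat_sku" }
)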

Number of Indexed Fields in a Compound Index

There can be no more than 32 fields in a compound index.

Queries cannot use both text and Geospatial Indexes

You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.

Fields with 2dsphere Indexes can only hold Geometries

Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.

See also:

The unique indexes limit in Sharding Operational Restrictions.

NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double

If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.

Multikey Index

Multikey indexes cannot cover queries over array field(s).

Geospatial Index

Geospatial indexes cannot cover a query.

Memory Usage in Index Builds

createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the --dbpath directory to complete the build.

You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.
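
For example, the parameter can be changed at runtime with setParameter (the value shown is illustrative):

db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 500 } )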

Changed in version 4.2.

  • For feature compatibility version (fCV) "4.2", the index build memory limit applies to all index builds.
  • For feature compatibility version (fCV) "4.0", the index build memory limit only applies to foreground index builds.

Index builds may be initiated either by a user command such as Create Index or by an administrative process such as an initial sync. Both are subject to the limit set by maxIndexBuildMemoryUsageMegabytes.

An initial sync operation populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set in maxIndexBuildMemoryUsageMegabytes.

To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described on Rolling Index Builds on Replica Sets.

Collation and Index Types

The following index types only support simple binary comparison and do not support collation:

  • text indexes,
  • 2d indexes, and
  • geoHaystack indexes.

To create a text, a 2d, or a geoHaystack index on a collection that has a non-simple collation, you must explicitly specify {collation: {locale: "simple"} } when creating the index.
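
For example, the following sketch creates a text index with the simple collation (collection and field names are illustrative):

db.products.createIndex(
  { description: "text" },
  { collation: { locale: "simple" } }
)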

Hidden Indexes
  • You cannot hide the _id index.
  • You cannot use hint() on a hidden index.
Maximum Number of Sort Keys

You can sort on a maximum of 32 keys.

Maximum Number of Documents in a Capped Collection

If you specify a maximum number of documents for a capped collection using the max parameter to create, the limit must be less than 2^32 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
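
For example, the following sketch creates a capped collection with both a size limit and a document-count limit (the names and values are illustrative; size is required for capped collections):

db.createCollection( "log", { capped: true, size: 5242880, max: 5000 } )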

Number of Members of a Replica Set

Replica sets can have up to 50 members.

Number of Voting Members of a Replica Set

Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.

Maximum Size of Auto-Created Oplog

If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize), MongoDB will create an oplog that is no larger than 50 gigabytes. [1]

[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.

Sharded Clusters

Sharded clusters have the restrictions and thresholds described here.

Operations Unavailable in Sharded Environments

$where does not permit references to the db object from the $where function. This is uncommon in un-sharded collections.

The geoSearch command is not supported in sharded environments.

Covered Queries in Sharded Clusters

Starting in MongoDB 3.0, an index cannot cover a query on a sharded collection when run against a mongos if the index does not contain the shard key, with the following exception for the _id index: If a query on a sharded collection only specifies a condition on the _id field and returns only the _id field, the _id index can cover the query when run against a mongos even if the _id field is not the shard key.
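
For example, the following query can be covered by the _id index even when run against a mongos, because both the filter and the projection reference only _id (a minimal sketch; the collection name and value are illustrative):

db.orders.find( { _id: 12345 }, { _id: 1 } )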

In previous versions, an index cannot cover a query on a sharded collection when run against a mongos.

Sharding Existing Collection Data Size

An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size.

These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.

Use the following formulas to calculate the theoretical maximum collection size.

                                              
maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
maxCollectionSize (MB) = maxSplits * (chunkSize / 2)

The maximum BSON document size is 16 MB or 16777216 bytes.

All conversions should use base-2 scale, e.g. 1024 kilobytes = 1 megabyte.
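
As a worked example, assume 128-byte average shard key values and a 64 MB chunk size (illustrative values that match one column of the table below):

maxSplits = 16777216 / 128                 // 131,072 splits
maxCollectionSize = 131072 * (64 / 2)      // 4,194,304 MB = 4096 GB = 4 TB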

If maxCollectionSize is less than or nearly equal to the target collection, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too 'close' to the target collection size, it is likely better to increase the chunk size.

After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Chunk Size in a Sharded Cluster for instructions on modifying chunk size.

This table illustrates the approximate maximum collection sizes using the formulas described above:

Average Size of Shard Key Values           512 bytes    256 bytes    128 bytes    64 bytes
Maximum Number of Splits                   32,768       65,536       131,072      262,144
Max Collection Size (64 MB Chunk Size)     1 TB         2 TB         4 TB         8 TB
Max Collection Size (128 MB Chunk Size)    2 TB         4 TB         8 TB         16 TB
Max Collection Size (256 MB Chunk Size)    4 TB         8 TB         16 TB        32 TB

Single Document Modification Operations in Sharded Collections

All update and remove() operations for a sharded collection that specify the justOne or multi: false option must include the shard key or the _id field in the query specification.

update and remove() operations specifying justOne or multi: false in a sharded collection which do not contain either the shard key or the _id field return an error.

Unique Indexes in Sharded Collections

MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.
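
For example, with a ranged shard key of { email: 1 }, a unique compound index prefixed by the shard key is allowed (a minimal sketch; the database, collection, and field names are illustrative, and sharding must already be enabled for the database):

sh.shardCollection( "mydb.users", { email: 1 } )
db.getSiblingDB("mydb").users.createIndex( { email: 1, accountId: 1 }, { unique: true } )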

Maximum Number of Documents Per Chunk to Migrate

By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.
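
As a worked example, assume a 64 megabyte chunk size and an avgObjSize of 2 kilobytes (illustrative values):

maxDocsPerChunk = 1.3 * ( (64 * 1024 * 1024) / (2 * 1024) )   // 1.3 * 32,768 ≈ 42,598 documents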

For chunks that are too large to migrate, starting in MongoDB 4.4:

  • A new balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Chunks that Exceed Size Limit for details.
  • The moveChunk command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.
Shard Key Size

Starting in version 4.4, MongoDB removes the limit on the shard key size.

For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.

Shard Key Index Type

A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

A shard key index cannot be an index that specifies a multikey index, a text index, or a geospatial index on the shard key fields.

Shard Key Selection is Immutable in MongoDB 4.2 and Earlier

Your options for changing a shard key depend on the version of MongoDB that you are running:

  • Starting in MongoDB 5.0, you can reshard a collection by changing a document's shard key.
  • Starting in MongoDB 4.4, you can refine a shard key by adding a suffix field or fields to the existing shard key.
  • In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding.

In MongoDB 4.2 and earlier, to change a shard key:

  • Dump all data from MongoDB into an external format.
  • Drop the original sharded collection.
  • Configure sharding using the new shard key.
  • Pre-split the shard key range to ensure initial even distribution.
  • Restore the dumped data into MongoDB.
Monotonically Increasing Shard Keys Can Limit Insert Throughput

For clusters with high insert volumes, a shard key with monotonically increasing and decreasing keys can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id fields are ObjectIds which have generally increasing values.

When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.

If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.

To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.

Hashed shard keys and hashed indexes store hashes of keys with ascending values.
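
For example, sharding on a hashed _id distributes inserts across chunks even though ObjectId values increase monotonically (a minimal sketch; the database and collection names are illustrative):

sh.enableSharding( "records" )
sh.shardCollection( "records.events", { _id: "hashed" } )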

Sort Operations

If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the SORT stage reads all input documents before returning any output documents, blocking the flow of data for that specific query.

If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4). allowDiskUse() allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.
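
For example, a sort that is not supported by an index can opt in to disk use on MongoDB 4.4 or later (a minimal sketch; the collection and field names are illustrative):

db.orders.find().sort( { total: -1 } ).allowDiskUse()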

Changed in version 4.4: For MongoDB 4.2 and prior, blocking sort operations could not exceed 32 megabytes of system memory.

For more information on sorts and index use, see Sort and Index Use.

Aggregation Pipeline Operation

Each individual pipeline stage has a limit of 100 megabytes of RAM. By default, if a stage exceeds this limit, MongoDB produces an error. For some pipeline stages you can allow pipeline processing to take up more space by using the allowDiskUse option to enable aggregation pipeline stages to write data to temporary files.

The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.

Examples of stages that can spill to disk when allowDiskUse is true are:

  • $bucket
  • $bucketAuto
  • $group
  • $sort when the sort operation is not supported by an index
  • $sortByCount
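
For example, the following aggregation allows its $group and $sort stages to spill to temporary files if they exceed the 100 megabyte limit (a minimal sketch; the collection and field names are illustrative):

db.orders.aggregate(
  [
    { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
    { $sort: { total: -1 } }
  ],
  { allowDiskUse: true }
)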

Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents.

Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.

If the results of one of your $sort pipeline stages exceed the limit, consider adding a $limit stage.

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Aggregation and Read Concern
  • Starting in MongoDB 4.2, the $out stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out stage in the pipeline.
  • The $merge stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $merge stage in the pipeline.
2d Geospatial queries cannot use the $or operator
Geospatial Queries

For spherical queries, use the 2dsphere index result.

The use of a 2d index for spherical queries may lead to incorrect results, such as the use of the 2d index for spherical queries that wrap around the poles.

Geospatial Coordinates
  • Valid longitude values are between -180 and 180, both inclusive.
  • Valid latitude values are between -90 and 90, both inclusive.
Area of GeoJSON Polygons

For $geoIntersects or $geoWithin, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression; otherwise, $geoIntersects or $geoWithin queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoIntersects or $geoWithin queries for the complementary geometry.
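
For example, a single-ringed polygon larger than a hemisphere can be queried with $geoWithin by naming the custom CRS in the $geometry expression (a minimal sketch; the collection, field, and coordinates are illustrative only):

db.places.find( {
  loc: {
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [ [ [ -100, 60 ], [ -100, 0 ], [ -100, -60 ], [ 100, -60 ], [ 100, 60 ], [ -100, 60 ] ] ],
        crs: {
          type: "name",
          properties: { name: "urn:x-mongodb:crs:strictwinding:EPSG:4326" }
        }
      }
    }
  }
} )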

Multi-document Transactions

For multi-document transactions:

  • You can specify read/write (CRUD) operations on existing collections. For a list of CRUD operations, see CRUD Operations.
  • Starting in MongoDB 4.4, you can create collections and indexes in transactions. For details, see Create Collections and Indexes In a Transaction.
  • The collections used in a transaction can be in different databases.

    You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.

  • You cannot write to capped collections. (Starting in MongoDB 4.2)
  • You cannot use read concern "snapshot" when reading from a capped collection. (Starting in MongoDB 5.0)
  • You cannot read/write to collections in the config, admin, or local databases.
  • You cannot write to system.* collections.
  • You cannot return the supported operation's query plan (i.e. explain).
  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.
  • For cursors created in a transaction, you cannot call getMore outside the transaction.
  • Starting in MongoDB 4.2, you cannot specify killCursors as the first operation in a transaction.

Changed in version 4.4.

The following operations are not allowed in transactions:

  • Operations that affect the database catalog, such as creating or dropping a collection or an index when using MongoDB 4.2 or lower. Starting in MongoDB 4.4, you can create collections and indexes in transactions unless the transaction is a cross-shard write transaction. For details, see Create Collections and Indexes In a Transaction.
  • Creating new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
  • Explicit creation of collections, e.g. the db.createCollection() method, and indexes, e.g. the db.collection.createIndexes() and db.collection.createIndex() methods, when using a read concern level other than "local".
  • The listCollections and listIndexes commands and their helper methods.
  • Other non-CRUD and non-informational operations, such as createUser, getParameter, count, etc. and their helpers.

Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.
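
For example, the limit can be raised at runtime with setParameter (the value shown is illustrative):

db.adminCommand( { setParameter: 1, transactionLifetimeLimitSeconds: 120 } )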

Write Command Batch Limit Size

100,000 writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6: The limit raises from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.

The Bulk() operations in mongosh and comparable methods in the drivers do not have this limit.

Views

The view definition pipeline cannot include the $out or the $merge stage. If the view definition includes a nested pipeline (e.g. the view definition includes a $lookup or $facet stage), this restriction applies to the nested pipelines as well.
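
For example, the following view definition is valid because its pipeline contains only a $project stage (a minimal sketch; the view, collection, and field names are illustrative):

db.createView(
  "managementFeedback",
  "survey",
  [ { $project: { management: "$feedback.management", department: 1 } } ]
)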

Views have the following operation restrictions:

  • Views are read-only.
  • You cannot rename views.
  • find() operations on views do not support the following projection operators:
    • $
    • $elemMatch
    • $slice
    • $meta
  • Views do not support text search.
  • Views do not support map-reduce operations.
  • Views do not support geoNear operations (i.e. the $geoNear pipeline stage).
Projection Restrictions

New in version 4.4:

$-Prefixed Field Path Restriction
Starting in MongoDB 4.4, the find() and findAndModify() projection cannot project a field that starts with $ with the exception of the DBRef fields. For example, starting in MongoDB 4.4, the following operation is invalid:
                                                      
db.inventory.find( { } , { "$instock.warehouse": 0, "$item": 0, "detail.$cost": 1 } ) // Invalid starting in 4.4
In earlier versions, MongoDB ignores the $-prefixed field projections.
$ Positional Operator Placement Restriction
Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path; e.g. "field.$" or "fieldA.fieldB.$". For example, starting in MongoDB 4.4, the following operation is invalid:
                                                      
db.inventory.find( { } , { "instock.$.qty": 1 } ) // Invalid starting in iv.4
To resolve, remove the component of the field path that follows the $ projection operator. In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the project is treated equally "instock.$".
Empty Field Name Projection Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a projection of an empty field name. For example, starting in MongoDB 4.4, the following operation is invalid:
                                                      
db.inventory.find( { } , { "": 0 } ) // Invalid starting in 4.4
In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.
Path Collision: Embedded Documents and Its Fields
Starting in MongoDB 4.4, it is illegal to project an embedded document with any of the embedded document's fields. For example, consider a collection inventory with documents that contain a size field:
                                                      
{ ... , size: { h: 10, w: 15.25, uom: "cm" } , ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:
                                                      
db.inventory.find( { } , { size: 1, "size.uom": 1 } ) // Invalid starting in 4.4
In previous versions, the lattermost projection between the embedded document and its fields determines the projection:
  • If the projection of the embedded document comes after any and all projections of its fields, MongoDB projects the embedded document. For example, the projection document { "size.uom": 1, size: 1 } produces the same result as the projection document { size: 1 }.
  • If the projection of the embedded document comes before the projection of any of its fields, MongoDB projects the specified field or fields. For example, the projection document { "size.uom": 1, size: 1, "size.h": 1 } produces the same result as the projection document { "size.uom": 1, "size.h": 1 }.
Path Collision: $slice of an Array and Embedded Fields
Starting in MongoDB 4.4, find() and findAndModify() projection cannot contain both a $slice of an array and a field embedded in the array. For example, consider a collection inventory that contains an array field instock:
                                                      
{ ... , instock: [ { warehouse: "A", qty: 35 } , { warehouse: "B", qty: 15 } , { warehouse: "C", qty: 35 } ] , ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error:
                                                      
db.inventory.find( { } , { "instock": { $slice: 1 } , "instock.warehouse": 0 } ) // Invalid starting in four.four
In previous versions, the project applies both projections and returns the first element ($slice: i) in the instock array but suppresses the warehouse field in the projected element. Starting in MongoDB 4.4, to attain the same result, utilize the db.drove.amass() method with two dissever $project stages.
$ Positional Operator and $slice Brake
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include $piece projection expression every bit part of a $ projection expression. For example, starting in MongoDB four.four, the following operation is invalid:
                                                      
db.inventory.find( { "instock.qty": { $gt: 25 } } , { "instock.$": { $slice: one } } ) // Invalid starting in 4.4
In previous versions, MongoDB returns the outset element (instock.$) in the instock array that matches the query status; i.eastward. the positional projection "instock.$" takes precedence and the $slice:1 is a no-op. The "instock.$": { $piece: 1 } does non exclude whatsoever other document field.
Sessions and $external Username Limit

To use Client Sessions and Causal Consistency Guarantees with $external authentication users (Kerberos, LDAP, or x.509 users), the usernames cannot be greater than 10k bytes.

Session Idle Timeout

Sessions that receive no read or write operations for 30 minutes or that are not refreshed using refreshSessions within this threshold are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.

Consider an application that issues a db.collection.find(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. For example:

                                          
var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id
var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start
while (cursor.hasNext()) {
  // Check if more than 5 minutes have passed since the last refresh
  if ( (new Date() - refreshTimestamp) / 1000 > 300 ) {
    print("refreshing session")
    db.adminCommand({"refreshSessions" : [sessionId]})
    refreshTimestamp = new Date()
  }
  // process cursor normally
}

In the example operation, the db.collection.find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.

For MongoDB drivers, defer to the driver documentation for instructions and syntax for creating sessions.

Source: https://www.mongodb.com/docs/manual/reference/limits/
