Details
- Type: Bug
- Resolution: Unresolved
- Priority: Major - P3
- Fix Version/s: None
- Labels: None
- Operating System: ALL
-
Description
The Bulk API in the shell has been written such that once a valid response has been delivered from .execute(), the bulk cannot be re-executed; you get this error:
> bulk.execute()
2014-02-06T13:50:58.157-0500 batch cannot be re-executed at src/mongo/shell/bulk_api.js:815

(It should say "bulk" instead of "batch" in this message.)
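A minimal sketch of the guarded path, assuming an illustrative collection test.coll and document shape (neither is from the original report):

var bulk = db.coll.initializeOrderedBulkOp();
bulk.insert({ i: 0 });
bulk.execute();   // first call succeeds and returns a BulkWriteResult
bulk.execute();   // second call throws: "batch cannot be re-executed"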
However, if an error (such as a user-issued killOp) prevents .execute() from delivering a final report, you ARE allowed to re-execute the bulk, even though doing so will almost never work, since _id values have (apparently) already been assigned by the shell before .execute() runs:
> bulk.execute()
2014-02-06T13:44:31.035-0500 batch failed, cannot aggregate results: operation was interrupted at src/mongo/shell/bulk_api.js:612
> bulk.execute()
BulkWriteResult({
    "writeErrors" : [
        {
            "index" : 0,
            "code" : 11000,
            "errmsg" : "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.coll.$_id_ dup key: { : ObjectId('52f3d7e1d9f831437fd3fc84') }",
            "op" : {
                "_id" : ObjectId("52f3d7e1d9f831437fd3fc84"),
                "i" : 0
            }
        }
    ],
    "writeConcernErrors" : [ ],
    "nInserted" : 999,
    "nUpserted" : 0,
    "nUpdated" : 0,
    "nModified" : 0,
    "nRemoved" : 0,
    "upserted" : [ ]
})
Note how the second attempt at running the bulk insert fails with a unique index constraint violation on the _id index, which could only happen if it were re-inserting the same documents with the same _id values as before.
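A repro sketch of this second path, assuming a 1000-document unordered bulk against an illustrative test.coll (the collection name, document shape, and the currentOp filtering below are assumptions, not from the original report). In the first shell:

var bulk = db.coll.initializeUnorderedBulkOp();
for (var j = 0; j < 1000; j++) {
    bulk.insert({ i: j });   // the shell assigns each _id here, before execute()
}
bulk.execute();              // interrupted by the killOp below; no final report
bulk.execute();              // permitted after the failure, but re-sends the same _ids

In a second shell, while the first execute() is in flight:

db.currentOp().inprog.forEach(function (op) {
    if (op.op === "insert" && op.ns === "test.coll") {
        db.killOp(op.opid);  // interrupts the batch mid-execute
    }
});

Because the _ids were fixed when insert() queued each document, the second execute() collides with whatever the interrupted attempt already inserted, producing the E11000 errors shown above.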
Issue Links
- duplicates
  - SERVER-12576 Human readable Bulk state and SingleWriteResult (Closed)
- related to
  - SERVER-13430 Bulk API should prevent additional operations from being added after execute() is run (Backlog)